
If possible, I'd like this to be put in the Noteworthy section.

 

Hardware

CASE: Corsair 760T

PSU: Silverstone ST1500

MB: Asus Rampage IV Black​

CPU: Intel i7 4960X

HS: Corsair H110

RAM: 64GB DDR3 @ 2300MHz

RAID CARD 1: LSI 9361-8I

SSD: Samsung 850pro 240GB SSD

HDD 1: **

HDD 2: 6x 3TB Western Digital WD4000FYYZ

Software and Configuration:

My server/workstation runs Windows 10 Pro.  I use LSI's onboard firmware to configure all 6 drives in RAID 6, which leaves me with 14.5TB of actual storage.  Partitioning of the array is handled by Windows, with a 1TB volume and another 13.5TB for storage.
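The 14.5TB figure lines up with RAID 6 arithmetic plus the fact that Windows reports sizes in TiB. A quick sketch of the math (this assumes the six WD4000FYYZ drives are 4TB each, as the model number suggests):

```shell
# RAID 6 keeps two drives' worth of parity; Windows then reports the result
# in TiB (2^40 bytes), which is why 16 TB shows up as roughly "14.5TB".
awk 'BEGIN {
  drives = 6; size_tb = 4; parity = 2        # assumed: six 4TB WD4000FYYZ drives
  usable_tb  = (drives - parity) * size_tb   # decimal terabytes
  usable_tib = usable_tb * 1e12 / (2 ^ 40)   # what the OS actually displays
  printf "usable: %.0f TB decimal = %.2f TiB displayed\n", usable_tb, usable_tib
}'
```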

Usage:

I use the storage for movies, music, photos, and my dev work. I have a media PC that accesses it utilizing VLC over a gigabit network.  The 1TB drive is for development of flight simulators.

Backup:

Planning a backup soon.

Additional info:

I'm a graphic artist for the flight simulation industry.  I work from home, so I don't have access to a real server, per se.  My station is set up to run work/game settings at max and render textures when needed.  I used an available PCIe slot to add an array for storage.

 

I plan to build a render farm/storage server next with the RAID card I have now. This machine will be converted to that once I get Broadwell-E and NVIDIA's next-gen graphics cards.  Those two items will make up my new dedicated workstation.

Photos:

 

Xka24nm.jpg

rJnA2Ka.jpg

 

vEV3zT9.jpg


I wish I could be part of the LTT Petabyte Club; I'll probably be part of it eventually.

Hello ma, hello pa, I'm on the interwebs. -LazySpeck


Finally getting an employee discount on drives at work: WD Red 6TB NAS drives for $175. As soon as I get all these pesky student loans paid off, I will be buying a lot more drives. Hopefully by that time they will have added HGST drives to the store.

SSD Firmware Engineer

 

| Dual Boot Linux Mint and W8.1 Pro x64 with rEFInd Boot Manager | Intel Core i7-4770k | Corsair H100i | ASRock Z87 Extreme4 | 32 GB (4x8gb) 1600MHz CL8 | EVGA GTX970 FTW+ | EVGA SuperNOVA 1000 P2 | 500GB Samsung 850 Evo |250GB Samsung 840 Evo | 3x1Tb HDD | 4 LG UH12NS30 BD Drives | LSI HBA | Corsair Carbide 500R Case | Das Keyboard 4 Ultimate | Logitech M510 Mouse | Corsair Vengeance 2100 Wireless Headset | 4 Monoprice Displays - 3x27"4k bottom, 27" 1440p top | Logitech Z-2300 Speakers |


03:12 am:

"UUU! This topic looks interesting! I'll just look at a few solutions people have come up with, then I'm going to bed."

05:10 am:

Well... We've all been here.

I have done that before too.

Hello ma, hello pa, I'm on the interwebs. -LazySpeck


Here's my home server I built about a year ago :)

 

 

I'm planning to get this case and will put 4 hard drives in it.

Is it quiet, and do the drives make much vibration?


I'm planning to get this case and will put 4 hard drives in it.

Is it quiet, and do the drives make much vibration?

 

Yes the case is very quiet. I sleep in the same room as my server without issue :)

The Define R5 has sound-deadening panels, rubber mounts for the HDDs, and 140mm fans. All contribute to reducing noise.


Yes the case is very quiet. I sleep in the same room as my server without issue :)

The Define R5 has sound-deadening panels, rubber mounts for the HDDs, and 140mm fans. All contribute to reducing noise.

 

Great! Planning to buy it soon.

I have a loud case and I'm using the suspension method to reduce noise. It works great, but I don't think it is safe.

 

hdsilence_20080410.jpg


Great! Planning to buy it soon.

I have a loud case and I'm using the suspension method to reduce noise. It works great, but I don't think it is safe.

Love it!

 

Should be pretty safe, as long as you don't throw the case around the entire room.


Great! Planning to buy it soon.

I have a loud case and I'm using the suspension method to reduce noise. It works great, but I don't think it is safe.

 

hdsilence_20080410.jpg

 

That's creative :)


  • 2 weeks later...

Does external storage count if you have a notebook? :D

 

Since I download everything that's some sort of anime, I've collected quite a few externals:

 

2x 3.5" Seagate Barracuda 3TB

1x 3.5" Seagate Barracuda 2TB

1x 3.5" Samsung Something 2TB

1x 2.5" Toshiba Something 2TB

1x 2.5" WD Scorpio Blue 320GB

2x 2.5" Seagate Momentus XT 750GB (+8GB SSHD stuff)

1x mSATA SSD Samsung 830 Series 250GB

 

14TB-ish~

 

Some of those drives, like the Samsung one, already have 10'000h on the clock, and the Momentus ones 11'500h. :D

ᕙ(⇀‸↼‶)ᕗ


Does external storage count if you have a notebook? :D

Unfortunately not, but you do have a nice collection there!

 

 

Some of those drives, like the Samsung one, already have 10'000h on the clock, and the Momentus ones 11'500h. :D

 

Get on my level :D

[simon@Pegasus ~]$ for i in /dev/sd[a-h]; do sudo hddtemp $i; sudo smartctl -a $i | grep Power_On_Hours; done
/dev/sda: ST2000DM001-1CH164: 26 C
  9 Power_On_Hours          0x0032   088   088   000    Old_age   Always       -       11322
/dev/sdb: ST2000DM001-1ER164: 22 C
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       1103
/dev/sdc: WDC WD20EARX-00PASB0: drive is sleeping
  9 Power_On_Hours          0x0032   076   076   000    Old_age   Always       -       17521
/dev/sdd: WDC WD20EARX-00PASB0: drive is sleeping
  9 Power_On_Hours          0x0032   078   078   000    Old_age   Always       -       16631
/dev/sde: KINGSTON SVP200S37A60G: 29 C
  9 Power_On_Hours_and_Msec 0x0032   000   000   000    Old_age   Always       -       16714h+03m+13.100s
/dev/sdf: KINGSTON SV300S37A120G: 28 C
  9 Power_On_Hours_and_Msec 0x0032   098   098   000    Old_age   Always       -       2293h+15m+41.120s
/dev/sdg: WDC WD20EARS-00MVWB0: drive is sleeping
  9 Power_On_Hours          0x0032   059   059   000    Old_age   Always       -       30628

6773411.jpg

 

 

You dare face me in ghetto cooling?

 

ceb5fb48_ba55_4058_9486_8962ca7a8a44.jpg

 

I somehow had to cool down those two drives, since I'm in the middle of transferring 2.7TB of data to integrate that goddamn LaCie drive into my 4-bay dock. :mellow:  This should be proof to everyone out there that 7200rpm drives need active cooling or they will go beyond 55°C after a few 100GB of R/W, even in open air. Now they stay beneath 35°C.  :rolleyes:

ᕙ(⇀‸↼‶)ᕗ


I just hope this is over soon.  :wacko: Just a little bit of shaking and it disconnects because of that stupid SuperSpeed Micro-B connector.  <_<

 

I also like K'nex, played a lot with it as a kid!  :wub:  Shame that there is no high-end kit made of carbon fiber, titanium and all that high-end stuff to build PCs with.

ᕙ(⇀‸↼‶)ᕗ


Update 2:

 

Just a refresh with new hardware.

Skylake stuff!

 

Hardware:

Intel Core i3-6100 3.7 GHz

32GB - G.SKILL Ripjaws V Series 

ASRock H170M Mini-ITX

 

10441245_516055981901037_815941019770489

 

12391827_516056241901011_339188424026044

Can Anybody Link A Virtual Machine while I go download some RAM?

 


@alpenwasser @looney

 

 

CLICK HERE FOR ORIGINAL POST

CLICK HERE FOR UPDATE 1

CLICK HERE FOR UPDATE 2

CLICK HERE FOR UPDATE 4

 

 

 

UPDATE #3:

 

Upgraded my server with 3 additional 4TB hard drives, bringing my new total up to 47.8TB.

 

Also, one of the SSDs in my SSD caching array managed to die, in the strangest way possible too: I powered the system down to connect my server/router/modem to a BBU, and when I booted back up, the SSD was dead.

 

The previous RAID 1 cache array was made up of an OCZ Vertex 4 and a Crucial MX100. I'm just going to replace both drives with 120GB 850 EVOs, since the write performance of one EVO is 4x that of the MX100. I'm still using the same size SSDs, so that doesn't affect my storage total.

 

 

EDIT: The new SSDs came in today, so I installed them and set up the new cache array.

 

Did some speed tests on my primary array with SSD caching enabled; the results are pretty insane.

array_speedtest.png

 

~2.75 GB/s write speed

~8.6 GB/s read speed

 

Just for fun, I cleared out the cache and tested the read speed directly from the disks: 916 MB/s.

 

The 850 EVOs were definitely a HUGE upgrade over my previous caching SSDs. Before, I was getting around 1.7/1.5 GB/s R/W from the array, and 900/700 MB/s R/W directly from the disks.

 

Capture2.PNG

Capture.PNG


- update, 2016-Jan-27 -

We're currently working on some modifications/updates for this thread. Among other things, I'm adjusting the script so that it'll be compatible with the new forum software. We're also thinking about modifying the ranking criteria.

Feel free to keep posting. We should be up and running again some time in February 2016 (the storage thread, no idea about the updated forum). :)

- original message -

Just a general FYI: I'm finishing up the semester in the next two weeks and will update the thread in the week starting 2016-Jan-18. :)

Edited by alpenwasser

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Hardware
CASE: (I forgot)
PSU: Huntkey Panshi 800 (600W cont.)
MB: Asus Z9PE-D16C/2L
CPU: Intel Xeon E5-2620v2 x2 (yes I have two Xeons on one motherboard)
HS: PCCOOLER Yellow Sea Deluxe PWM + PCCOOLER Butterfly PWM
RAM: 8x Kingston KVR 16GB ECC DDR3-1600 = 128GB (half-loaded the MB - I am still looking at fully loading it to 256GB)
RAID CARD: Adaptec 6805
HDD 1: 6x 2TB WD Green (mixed model)


Software and Configuration:
It was an ESXi host running up to 16 virtual machines of mixed operating systems. Currently it is temporarily running Windows 8 Pro as part of my NASter reconfiguration project; otherwise it is left with whatever OS I last installed, and sits in disuse.

Usage:
This server was my main VM host, hosting up to 16 virtual machines that formed the backbone of my home network. Sadly, it is currently in disuse and I have yet to figure out a good purpose for it.

Backup:
No backup. Machine deserted.

Additional info:
This machine was deserted because, according to my mom, it "resonated with something uncomfortable within". Power consumption and noise level were actually acceptable.

Photos:

post-233678-0-87310100-1452002974_thumb.

Halfway through the NASter data dump. The 6TB drive and the 3ware 9750-8i RAID card belong to the NASter machine. That rig currently has a lone 6TB drive (the one you see here), but I do plan to upgrade it to 4x 3TB WD Green.

The Fruit Pie: Core i7-9700K ~ 2x Team Force Vulkan 16GB DDR4-3200 ~ Gigabyte Z390 UD ~ XFX RX 480 Reference 8GB ~ WD Black NVMe 1TB ~ WD Black 2TB ~ macOS Monterey amd64

The Warship: Core i7-10700K ~ 2x G.Skill 16GB DDR4-3200 ~ Asus ROG Strix Z490-G Gaming Wi-Fi ~ PNY RTX 3060 12GB LHR ~ Samsung PM981 1.92TB ~ Windows 11 Education amd64
The ThreadStripper: 2x Xeon E5-2696v2 ~ 8x Kingston KVR 16GB DDR3-1600 Registered ECC ~ Asus Z9PE-D16 ~ Sapphire RX 480 Reference 8GB ~ WD Black NVMe 1TB ~ Ubuntu Linux 20.04 amd64

The Question Mark? Core i9-11900K ~ 2x Corsair Vengeance 16GB DDR4-3000 @ DDR4-2933 ~ MSI Z590-A Pro ~ Sapphire Nitro RX 580 8GB ~ Samsung PM981A 960GB ~ Windows 11 Education amd64
Home server: Xeon E3-1231v3 ~ 2x Samsung 8GB DDR3-1600 Unbuffered ECC ~ Asus P9D-M ~ nVidia Tesla K20X 6GB ~ Broadcom MegaRAID 9271-8iCC ~ Gigabyte 480GB SATA SSD ~ 8x Mixed HDD 2TB ~ 16x Mixed HDD 3TB ~ Proxmox VE amd64

Laptop 1: Dell Latitude 3500 ~ Core i7-8565U ~ NVS 130 ~ 2x Samsung 16GB DDR4-2400 SO-DIMM ~ Samsung 960 Pro 512GB ~ Samsung 850 Evo 1TB ~ Windows 11 Education amd64
Laptop 2: Apple MacBookPro9.2 ~ Core i5-3210M ~ 2x Samsung 8GB DDR3L-1600 SO-DIMM ~ Intel SSD 520 Series 480GB ~ macOS Catalina amd64


Hardware
CASE: Supermicro SuperChassis 847A-R1400LPB
PSU: 2x 1400W high-efficiency (1+1) redundant power supply with PMBus

MB: Supermicro X9DRH-IF

CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz

HS: 2x Active Supermicro SNK-P0048AP
RAM: 80GB ECC DDR3 1333
RAID CARD 1: LSI 9211-8i 6Gb/s SAS HBA w/"IT" firmware
RAID CARD 2: LSI 9211-8i 6Gb/s SAS HBA w/"IT" firmware
RAID CARD 3: LSI 9211-8i 6Gb/s SAS HBA w/"IT" firmware
RAID CARD 4: LSI 9211-8i 6Gb/s SAS HBA w/"IT" firmware
RAID CARD 5: LSI 9211-8i 6Gb/s SAS HBA w/"IT" firmware
SSD 1: 2x 120GB Kingston SV300S37A120G SSD (FreeNAS boot volume)

SSD 2: 2x 500GB Samsung 850 EVO SSD (jails and virtual machines)

HDD 1: 7x 8TB Seagate ST8000AS0002-1NA17Z
HDD 2: 21x 3TB Hitachi HGST HDN724030ALE640

HDD 3: 1x 4TB Hitachi HGST HDS724040ALE640

HDD 4: 4x 4TB Hitachi HGST HDS724040ALE640

HDD 5: 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0

SWITCH: Intellinet 523554 24-Port Gigabit Ethernet Rackmount Managed Switch 

UPS: 2x APC Power-Saving Back-UPS ES 8 Outlet 700VA 230V BS 1363

TOTAL RAW STORAGE: 149.24 terabytes

TOTAL USABLE STORAGE: 130.69 terabytes
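The raw total can be cross-checked against the drive list above; a quick sketch in decimal terabytes:

```shell
# Sum of the drive list above (SSDs included: 2x 120GB boot + 2x 500GB jails/VMs).
awk 'BEGIN {
  hdd = 7*8 + 21*3 + 1*4 + 4*4 + 3*3    # Seagate, HGST, Hitachi, WD Red
  ssd = 2*0.12 + 2*0.5                  # Kingston boot pair, 850 EVO pair
  printf "raw storage: %.2f TB\n", hdd + ssd
}'
```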

 

Software and Configuration:
My server is running FreeNAS 9.3.
My main array consists of 4x 7-disk RAIDZ2 VDEVs striped into a single ZFS pool. My second, archival array (which is used to sync periodic snapshots of important parts of the main pool) is a RAIDZ1 VDEV of 7x 8TB disks.
The two smaller 120GB SSDs are in a ZFS mirror for the FreeNAS boot volume, while the two larger 500GB SSDs are in a ZFS mirror for FreeBSD jails and VirtualBox VMs. The last remaining disk is used for scratch space and may be used to replace any failed disk in the other arrays as a "warm spare". I have 2x 4TB disks available as cold spares should any disks fail.
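The striped-RAIDZ2 layout described above can be sketched like this (device names here are hypothetical; the real pools reference disks by gptid label):

```shell
# Hedged sketch of the main pool: repeating the "raidz2" keyword appends
# another vdev, and ZFS stripes writes across all four of them.
zpool create pool1 \
  raidz2 da0  da1  da2  da3  da4  da5  da6  \
  raidz2 da7  da8  da9  da10 da11 da12 da13 \
  raidz2 da14 da15 da16 da17 da18 da19 da20 \
  raidz2 da21 da22 da23 da24 da25 da26 da27

# Archival pool: a single RAIDZ1 vdev of the seven 8TB disks.
zpool create archive raidz1 da28 da29 da30 da31 da32 da33 da34
```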

Usage:
Most of the storage is for movies and TV series, with Plex Media Server running inside one of the BSD jails. The remainder is used for personal file storage / networked home directory, backups of my colo servers, and PXE netboot server installation source (one of my FreeBSD jails is my PXE netboot server).

Backup:
Important files from the primary ZFS pool are backed up to the slightly smaller archival ZFS pool via periodic snapshots. The archival pool is physically removed and swapped out to be taken to my storage locker while another set of 7x 8TB disks are swapped in for the next periodic snapshot to be updated.
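Snapshot-based syncing like this is typically done with zfs send/recv; a minimal sketch (the snapshot names are made up for illustration):

```shell
# Hedged sketch of incremental replication to the archival pool.
zfs snapshot -r pool1@2016-01-27                      # recursive point-in-time snapshot
zfs send -R -i pool1@2016-01-20 pool1@2016-01-27 \
  | zfs recv -Fdu archive                             # sync only the changes since last time
```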

Additional info:
My whole setup is squeezed into a 12U rackmount flight case on wheels. This makes for very easy maneuvering of the whole rack in my small "server room" / box-room, as well as being highly convenient for the next time I move home. The internal rack is mounted to the external case via heavy-duty rubber shock absorbers.

At present the small APC UPS is overloaded and is only really acting as a power smoothing device, so I plan to purchase a suitably rated 2U rack mount UPS in the near future.

 

brienne# (for i in `seq 0 37`; do smartctl -a /dev/da$i | grep 'Device Model:' ; done) | sort | uniq -c

  21 Device Model:     HGST HDN724030ALE640

   1 Device Model:     HGST HDS724040ALE640

   4 Device Model:     Hitachi HDS724040ALE640

   2 Device Model:     KINGSTON SV300S37A120G

   7 Device Model:     ST8000AS0002-1NA17Z

   3 Device Model:     WDC WD30EFRX-68AX9N0

brienne#

 

Photos:

http://imgur.com/a/oRcNe

Rack in a flight case

Front view of my rack

Close up of brienne.mlt

Rear view of my rack

Le FreeNAS

ZFS pools

Filco FTW!

Cat tax

 
 
[root@brienne ~]# zpool iostat -v -T d
Wed Jan 27 20:32:20 GMT 2016
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
archive                                 35.8T  14.7T      0      0  3.39K     10
  raidz1                                35.8T  14.7T      0      0  3.39K     10
    gptid/fa89b7e3-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0  1.28K      4
    gptid/fbf7981a-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    693      4
    gptid/fd2ef450-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    864      4
    gptid/fe514fff-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    132      4
    gptid/ff865f8b-b2d7-11e5-9ebd-0cc47a546fcc      -      -      0      0    182      5
    gptid/00ca81fe-b2d8-11e5-9ebd-0cc47a546fcc      -      -      0      0    183      5
    gptid/01ebff25-b2d8-11e5-9ebd-0cc47a546fcc      -      -      0      0    132      4
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            1.90G   109G      2     89  10.5K   723K
  mirror                                1.90G   109G      2     89  10.5K   723K
    gptid/ef4efad1-8017-11e5-891c-0cc47a546fcc      -      -      1     13  6.50K   723K
    gptid/ef53fac1-8017-11e5-891c-0cc47a546fcc      -      -      0     13  4.20K   723K
--------------------------------------  -----  -----  -----  -----  -----  -----
jails                                    361G  98.6G     49     78  1.17M   740K
  mirror                                 361G  98.6G     49     78  1.17M   740K
    gptid/71ea795a-809a-11e5-884a-0cc47a546fcc      -      -     26     44  1.16M   744K
    gptid/7223d49f-809a-11e5-884a-0cc47a546fcc      -      -     25     44  1.16M   744K
--------------------------------------  -----  -----  -----  -----  -----  -----
pool1                                   49.6T  26.4T     60     16  7.30M  1.24M
  raidz2                                15.4T  3.62T     18      3  2.23M   210K
    gptid/8273abf3-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
    gptid/85641e2a-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   333K  46.4K
    gptid/882bf207-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
    gptid/8b112084-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
    gptid/8e20deb6-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
    gptid/91252e88-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
    gptid/940cb743-889d-11e5-8ab5-0cc47a546fcc      -      -      9      0   334K  46.4K
  raidz2                                15.4T  3.57T     18      3  2.30M   291K
    gptid/5134e3f9-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
    gptid/5413e683-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
    gptid/56fc5d0d-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
    gptid/59dfecca-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
    gptid/5cb31b99-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
    gptid/5f9c7a47-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
    gptid/62810541-88a2-11e5-8582-0cc47a546fcc      -      -      9      1   345K  64.0K
  raidz2                                17.0T  2.00T     18      3  2.28M   263K
    gptid/18451b24-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
    gptid/1b2f4e11-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
    gptid/1e285370-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
    gptid/210413c7-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
    gptid/23de6209-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
    gptid/26bd5abc-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
    gptid/29b1b41f-8ee7-11e5-8fb5-0cc47a546fcc      -      -      9      1   342K  57.6K
  raidz2                                1.77T  17.2T      4      5   502K   509K
    gptid/94bf7a70-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.2K   111K
    gptid/97f04c90-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.2K   111K
    gptid/9b0ad9ef-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.3K   111K
    gptid/9e2b3065-b7b6-11e5-8976-0cc47a546fcc      -      -      2      1  73.2K   111K
    gptid/9feb05f9-b7b6-11e5-8976-0cc47a546fcc      -      -      1      1  73.4K   111K
    gptid/a11775af-b7b6-11e5-8976-0cc47a546fcc      -      -      1      1  73.4K   111K
    gptid/a245e539-b7b6-11e5-8976-0cc47a546fcc      -      -      1      1  73.4K   111K
--------------------------------------  -----  -----  -----  -----  -----  -----
pool2                                    353G  3.28T     16     12  2.00M   651K
  gptid/9ccc277d-826c-11e5-8238-0cc47a546fcc   353G  3.28T     16     12  2.00M   651K
--------------------------------------  -----  -----  -----  -----  -----  -----
 
[root@brienne ~]# for dev in ada0 ada1 $(perl -e 'print "da$_ " for 0 .. 37'); do echo -n "/dev/$dev"; smartctl -a /dev/$dev | perl -ne 'if (/(?:Device Model|User Capacity):\s+(.+?)\s*$/) { print "\t$1"; }'; echo; done
/dev/ada0 Samsung SSD 850 EVO 500GB 500,107,862,016 bytes [500 GB]
/dev/ada1 Samsung SSD 850 EVO 500GB 500,107,862,016 bytes [500 GB]
/dev/da0 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da1 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da2 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da3 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da4 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da5 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da6 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da7 HGST HDS724040ALE640 4,000,787,030,016 bytes [4.00 TB]
/dev/da8 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da9 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da10 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da11 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da12 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da13 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da14 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da15 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da16 KINGSTON SV300S37A120G 120,034,123,776 bytes [120 GB]
/dev/da17 KINGSTON SV300S37A120G 120,034,123,776 bytes [120 GB]
/dev/da18 ST8000AS0002-1NA17Z 8,001,563,222,016 bytes [8.00 TB]
/dev/da19 ST8000AS0002-1NA17Z 8,001,563,222,016 bytes [8.00 TB]
/dev/da20 ST8000AS0002-1NA17Z 8,001,563,222,016 bytes [8.00 TB]
/dev/da21 ST8000AS0002-1NA17Z 8,001,563,222,016 bytes [8.00 TB]
/dev/da22 ST8000AS0002-1NA17Z 8,001,563,222,016 bytes [8.00 TB]
/dev/da23 WDC WD30EFRX-68AX9N0 3,000,592,982,016 bytes [3.00 TB]
/dev/da24 WDC WD30EFRX-68AX9N0 3,000,592,982,016 bytes [3.00 TB]
/dev/da25 WDC WD30EFRX-68AX9N0 3,000,592,982,016 bytes [3.00 TB]
/dev/da26 ST8000AS0002-1NA17Z 8,001,563,222,016 bytes [8.00 TB]
/dev/da27 ST8000AS0002-1NA17Z 8,001,563,222,016 bytes [8.00 TB]
/dev/da28 Hitachi HDS724040ALE640 4,000,787,030,016 bytes [4.00 TB]
/dev/da29 Hitachi HDS724040ALE640 4,000,787,030,016 bytes [4.00 TB]
/dev/da30 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da31 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da32 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da33 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da34 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da35 HGST HDN724030ALE640 3,000,592,982,016 bytes [3.00 TB]
/dev/da36 Hitachi HDS724040ALE640 4,000,787,030,016 bytes [4.00 TB]
/dev/da37 Hitachi HDS724040ALE640 4,000,787,030,016 bytes [4.00 TB]
[root@brienne ~]#
Edited by MyNameIsNicola

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6Gb/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0


------SNIP------

 

 

Killer setup.  I've been keeping my eye on that Supermicro chassis for my next setup.  How bad is the noise from the chassis with the two 1400W PSUs?

 

Can you go into more detail as to why you chose to go with the 4x 7-disk VDEVs striped into a single pool vs one big VDEV?  This feels like the functional equivalent of a RAID 60.

 

Why no SSD-based L2ARC with pools that large, to hold metadata if nothing else?

 

Love the cat pic!  I've had several of those over the years as "helpers" putting their fuzzy heads into my PC building business.  :-)

Workstation 1: Intel i7 4790K | Thermalright MUX-120 | Asus Maximus VII Hero | 32GB RAM Crucial Ballistix Elite 1866 9-9-9-27 ( 4 x 8GB) | 2 x EVGA GTX 980 SC | Samsung 850 Pro 512GB | Samsung 840 EVO 500GB | HGST 4TB NAS 7.2KRPM | 2 x HGST 6TB NAS 7.2KRPM | 1 x Samsung 1TB 7.2KRPM | Seasonic 1050W 80+ Gold | Fractal Design Define R4 | Win 8.1 64-bit
NAS 1: Intel Xeon E3-1270V3 | SUPERMICRO MBD-X10SL7-F-O | 32GB RAM DDR3L ECC (8GBx4) | 12 x HGST 4TB Deskstar NAS | SAMSUNG 850 Pro 256GB (boot/OS) | SAMSUNG 850 Pro 128GB (ZIL + L2ARC) | Seasonic 650W 80+ Gold | Rosewill RSV-L4411 | Xubuntu 14.10

Notebook: Lenovo T500 | Intel T9600 | 8GB RAM | Crucial M4 256GB


------SNIP------

 

How are you running VMs within FreeNAS?


Killer setup.  I've been keeping my eye on that Supermicro chassis for my next setup.  How bad is the noise from the chassis with the two 1400W PSUs?

 

Can you go into more detail as to why you chose to go with the 4x 7-disk VDEVs striped into a single pool vs one big VDEV?  This feels like the functional equivalent of a RAID 60.

 

Why no SSD-based L2ARC with pools that large, to hold metadata if nothing else?

 

Love the cat pic!  I've had several of those over the years as "helpers" putting their fuzzy heads into my PC building business.  :-)

 

The noise is perfectly acceptable when the door is closed. I have Zabbix monitoring of every temperature sensor I can get my grubby little fingers on just in case keeping the door shut makes it a bit too warm in there. So far so good, but I've not had a summer in this house with that setup yet so it remains to be seen if it will be feasible to keep the door closed over the summer or not.

 

Yes, the primary ZFS pool is analogous to a RAID60 configuration.

 

I wasn't comfortable with the redundancy of a 28-disk RAIDZ3 VDEV, and I wanted better capacity efficiency than a set of striped mirrors, so striping some smaller RAIDZ2 VDEVs seemed like a reasonable trade-off between capacity efficiency, redundancy and performance. Zabbix has indicated a 2 gigabytes/second read from the array when I was poking around with some data in a local BSD jail, but given that I only have two un-bonded gigabit NICs on the machine, I'm unlikely to ever stress the disks.
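The capacity side of that trade-off can be made concrete for a 28-disk array; a quick sketch of the usable fraction under each layout mentioned:

```shell
# Usable fraction of a 28-disk array: subtract the parity (or mirror)
# disks from the total for each candidate layout.
awk 'BEGIN {
  n = 28
  printf "single 28-disk RAIDZ3: %.1f%% usable\n", (n - 3) / n * 100
  printf "4x 7-disk RAIDZ2:      %.1f%% usable\n", (n - 4*2) / n * 100
  printf "striped mirrors:       %.1f%% usable\n", (n / 2) / n * 100
}'
```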

 

I'll do some performance testing at some point and see what peak read and write throughput I can get on each ZFS pool and post them later in the week.

 

Given the relatively low load on the pools and the large L1ARC in the 80GB of RAM, I didn't feel that an SSD L2ARC would have added much. In any case, I've filled all the internal and external disk bays so there's not currently anywhere to put an additional set of SSDs for an L2ARC anyway.

 

Kitty cats power teh Interwebz, so it seems rude not to pay out kitty taxes when posting. ;-) heh

 

czVzSAr.png w2ZdS7N.png UbvUMGX.png8ymrbbT.png

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6Gb/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0


How are you running VMs within FreeNAS?

 

VirtualBox.

WWW: https://nicolaw.uk   CASE: Supermicro SuperChassis 847A-R1400LPB   MB: Supermicro X9DRH-IF   CPU: 2x Intel® Xeon® CPU E5-2603 v2 @ 1.80GHz   RAM: 80GB ECC DDR3 1333   NIC: Intel E10G42BFSR X520-SR2   HBA: 5x LSI 9211-8i 6Gb/s SAS HBA 'IT' Firmware   HDD/SSD: 2x 120GB Kingston SV300S37A120G SSD, 2x 500GB Samsung 850 EVO SSD, 8x 8TB Seagate ST8000AS0002-1NA17Z, 21x 3TB Hitachi HGST HDN724030ALE640, 4x 4TB Hitachi HGST HDS724040ALE640, 3x 3TB Western Digital Red WDC WD30EFRX-68AX9N0


Quote

------SNIP------

Backup:

Important files from the primary ZFS pool are backed up to the slightly smaller archival ZFS pool via periodic snapshots. The archival pool is physically removed and swapped out to be taken to my storage locker while another set of 7x 8TB disks are swapped in for the next periodic snapshot to be updated.

 

 

Regarding the backup: you're using automatic snapshots and then using replication tasks to back up the data to a different volume? I couldn't really find a lot of guides on this and this was the approach I took - wasn't sure if there was a better method. I don't have very large changes frequently, so it's working just fine; the most I might have in a day is 30GB.

 

Anything in this setup you wish you did differently? If I may ask, how much do you have invested in this setup? I feel the burn and I'm only $2k into mine.
