
@alpenwasser 

 

Thank you!

 

 

@IdeaStormer

 

I guess that would be a possibility if I wanted to keep the case. It's REALLY flimsy, and it obviously doesn't facilitate the cooling I want, hence the zip-ties :-)

I'm thinking about making something similar to Spotswood's "cases", I like the simplicity and durability.


Hardware
CASE: Fractal Design Node 804
PSU: Corsair RM450
MB: Asus H97M-E
CPU: Intel Core i3-4150
HS: Stock Intel heatsink
RAM: 16GB DDR3 1600 (2x 8GB)
RAID CARD: LSI MegaRAID 9266-8i
SSD: Samsung 840 Evo 120GB
HDD: 4x 3TB Western Digital Red (12TB Total)

Software and Configuration:

My server is running Windows Server 2012 R2 Datacenter.

The array is a RAID 5 of the four 3TB drives, giving me about 8.5TB of usable storage space.
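For the curious, a quick back-of-the-envelope check of that figure (a rough sketch, not anything from the build itself; the gap between 9TB raw and ~8.5TB shown comes mostly from the TB-to-TiB conversion plus filesystem overhead):

```python
# RAID 5 usable space: one drive's worth of capacity goes to parity
drives = 4
size_tb = 3  # drive size in "marketing" terabytes (10**12 bytes)

usable_tb = (drives - 1) * size_tb            # 9 TB of raw usable space
usable_tib = usable_tb * 10**12 / 2**40       # ~8.19 TiB, roughly what the OS reports

print(f"{usable_tb} TB usable, ~{usable_tib:.2f} TiB as shown by the OS")
```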

Usage:
This is the backup location for all the other computers in the house. It also acts as a plain old file server, a DLNA server, a TeamSpeak server, a Minecraft server, and handles some other small random tasks.

Backup:
Currently, this computer itself is not really backed up. The SSD with the OS backs up to the RAID 5, and all the other computers in the house run weekly differential backups to it, though. In the near future I plan on getting a Backblaze subscription and seeing how I like it.

Additional info:
This thing has a really interesting fan mount to cool the RAID card down. You can check it out in the build log thread I made.

Photos:

(two photos of the build attached)

 

For more, check out the Build Log.


- snip -

Hehe, definitely digging that RAID cooling setup, welcome to the list! :)

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Hehe, definitely digging that RAID cooling setup, welcome to the list! :)

 

Thanks, it's kind of a ghetto cooler, but it works, and works well haha.


Why not make it a competition to see who reaches 1 petabyte first? That would be far more interesting.

Grant us the joy of song and dance and ever watch over us in the lonely places in which we must walk - Pray to Bastet


Why not make it a competition to see who reaches 1 petabyte first? That would be far more interesting.

You do realize how much storage 1 petabyte is, and how much equipment and money you'd need to reach it, right?

Let's say you'd try to reach 1 petabyte with 4TB drives, as they have the best GB/price ratio within the 4-6TB range. Then you'd need 250 drives.

If you had endless money and went with 6TB drives, you would still need 167 drives.
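A quick sanity check of those drive counts (a trivial sketch, using decimal units, i.e. 1 PB = 1000 TB):

```python
import math

petabyte_tb = 1000  # 1 PB expressed in TB, decimal units

for drive_tb in (4, 6, 10):
    drives_needed = math.ceil(petabyte_tb / drive_tb)
    print(f"{drive_tb} TB drives: {drives_needed} needed")
# 4 TB -> 250, 6 TB -> 167, 10 TB -> 100
```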

Main Rig: Specs found in my profile details. Albums of changes made to it, First SSD and Installation of NH-D14.

NAS build log: Gimli, a NAS build by Shaqalac.

Mechanical keyboards: Ducky Mini YotH - Ducky Shine 3 Tuhaojin - Ducky Mini - SteelSeries 6gv2


You do realize how much storage 1 petabyte is, and how much equipment and money you'd need to reach it, right?

Let's say you'd try to reach 1 petabyte with 4TB drives, as they have the best GB/price ratio within the 4-6TB range. Then you'd need 250 drives.

If you had endless money and went with 6TB drives, you would still need 167 drives.

Well, we will soon have 10TB HDDs, so it would only take 100 drives. If everyone gathered at least five of them you'd reach a petabyte easily, or you could all pool together at least one 10TB drive each.

You do realize how much storage 1 petabyte is, and how much equipment and money you'd need to reach it, right?

Let's say you'd try to reach 1 petabyte with 4TB drives, as they have the best GB/price ratio within the 4-6TB range. Then you'd need 250 drives.

If you had endless money and went with 6TB drives, you would still need 167 drives.

Also, just the top 65 have around 1.7 petabytes in total, so you'd just need to get together and have an extreme RAID party :D



Why not make it a competition to see who reaches 1 petabyte first? That would be far more interesting.

 

It already is a competition, first of all about who has the most storage, and secondly about how much combined capacity we can muster up. ;)

 

What's the point? That would take many years before anyone got anywhere near that.

You do realize how much storage 1 petabyte is, and how much equipment and money you'd need to reach it, right?

Let's say you'd try to reach 1 petabyte with 4TB drives, as they have the best GB/price ratio within the 4-6TB range. Then you'd need 250 drives.

If you had endless money and went with 6TB drives, you would still need 167 drives.

Indeed. Considering the entire list is only at roughly 1.7 PB at the moment, I think we're still a few years off from a single (private) system having that much storage.

 

 

Well, we will soon have 10TB HDDs, so it would only take 100 drives. If everyone gathered at least five of them you'd reach a petabyte easily, or you could all pool together at least one 10TB drive each.

Also, just the top 65 have around 1.7 petabytes in total, so you'd just need to get together and have an extreme RAID party :D

Uhm, we live all over the world, so I don't see how that's doable. Also, we kind of need our storage actually, it's not just for shits and giggles.

You are of course free to build a 1 PB system, or contribute to the list with more modest numbers... ;)



This thread might just be the kick in the pants I needed to swap the three 3TB drives that have been collecting dust since April into my server, but here's the current setup.

 

Hardware
CASE: Rosewill Blackhawk Ultra
PSU: XFX ProSeries 750W XXX Edition
MB: SuperMicro X7DWN+
CPU: 2x Intel Xeon L5420
RAM: 16x 2GB Nanya NT2GT72U4NB1BN-3C DDR2 FB-DIMMs (32GB total)
RAID CARD: Areca ARC-1680ix-16 w/ 4GB ECC DDR2 DIMM + backup battery
LAN CARD: 2x Intel PRO/1000 PT Dual Port Server Adapter
SOUND CARD: Creative Labs Sound Blaster X-Fi Fatal1ty
SSD: SanDisk Extreme 120GB
HDD1: 6x 3TB Toshiba DT01ACA300
HDD2: 2x 3TB Seagate ST3000DM001
HDD3: 4x 1.5TB Seagate ST1500DM003
HDD4: 1x 2TB Hitachi HDS722020ALA330

Waiting to replace some drives: 3 more Toshiba DT01ACA300

 

Software and Configuration:

This server is currently running Windows Server 2008 R2, with two hardware RAID-6 volumes.

The 2TB drive replaced a dying 1.5TB Seagate, so 500GB is being wasted there.

Current total usable capacity presented to the OS is 22.5TB, from 32TB worth of installed drives.
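As an aside, the 22.5TB figure lines up if (and this is my guess at the layout, not something stated in the post) one RAID-6 volume is the eight 3TB drives and the other is the four 1.5TB drives plus the 2TB drive used as a 1.5TB member:

```python
def raid6_usable_tb(members, size_tb):
    # RAID-6 spends two members' worth of capacity on parity
    return (members - 2) * size_tb

vol_a = raid6_usable_tb(8, 3.0)   # eight 3TB drives   -> 18.0 TB usable
vol_b = raid6_usable_tb(5, 1.5)   # five ~1.5TB members -> 4.5 TB usable (2TB drive truncated)
print(vol_a + vol_b)              # 22.5 TB, from 32 TB of raw drives
```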

 

Usage:

The system runs Windows Server Update Services, and all my other Windows machines get their updates through it, which saves bandwidth and download time every Patch Tuesday. The machine also stores all my media (movies, music, TV shows, other videos) as well as other things I download. I tend to keep every version of the NVIDIA and AMD video drivers I download, every Java JRE/JDK update, every LibreOffice and OpenOffice release, etc., plus a large collection of ISOs for various operating systems. It also holds installers for various games that came from download-helper apps or DRM-free Humble Bundle purchases, and a large collection of software from my brief access to MSDN-AA. The system also runs an Ubuntu Server guest under Hyper-V, which mounts the Windows Server host via NFS for /home and /var/www.

 

Backup:

Currently I'm only relying on RAID-6 to save me. I have enough blank BD-R discs that I could archive most of my movies/TV shows at some point in the future.

 

Additional info:

This machine idles at about 330 watts, and even with the three extra Antec spot coolers, temperature alarms will sound if I use it for heavy CPU workloads. Fully Buffered DIMMs are horrible power-sucking heat generators. Besides the onboard NICs, I use additional network cards and have this machine connected to a managed switch with a link-aggregation group, as well as dedicated connections for the VM and the RAID card. One additional CAT5 cable goes to the KVM transceiver. There's no good reason I have the sound card installed other than the fact that the motherboard has no audio and I wanted the case's front-panel connectors to actually all be wired up properly.

 

Photos:

open.jpg: Inside the server
env.jpg: Machine in its native environment
Areca.png: Out-of-band management for the Areca RAID controller
Drives.png: Storage under the OS


zyk

Wait, do you have 16 sticks of 2GB RAM?

There's something oddly satisfying about seeing a case full of HDDs.

It's an interesting cooling solution you've got.



zyk

Wait, do you have 16 sticks of 2GB RAM?

There's something oddly satisfying about seeing a case full of HDDs.

It's an interesting cooling solution you've got.

 

Yes, 16 2GB sticks of old FB-DIMMs.



-snip-

That is one nice server you have.

Rig CPU Intel i5 3570K at 4.2 GHz - MB MSI Z77A-GD55 - RAM Kingston 8GB 1600 mhz - GPU XFX 7870 Double D - Keyboard Logitech G710+

Case Corsair 600T - Storage Intel 330 120GB, WD Blue 1TB - CPU Cooler Noctua NH-D14 - Displays Dell U2312HM, Asus VS228, Acer AL1715

 


Yes, 16 2GB sticks of old FB-DIMMs.

 

FB-DIMMs, man, you must have gotten a deal, because they're a bit more expensive than regular RAM.

 

Good use of an L5420, even the X56xx's are dropping in price to the point that mere mortals can buy them.

I roll with sigs off so I have no idea what you're advertising.

 

This is NOT the signature you are looking for.


I am currently number 10 on my own list and this is unacceptable.

 

This is why I am going to upgrade my server.

Sadly, I'm already using all of the drive bays on my RPC-4220.

 

To solve this I will use my DL320:

(photo of the DL320)

 

The server is currently used as a test server for storage things, as I like to play with its 12 15K SAS drives.

For the upgrade I will switch those SAS drives out for 4TB SATA drives.

 

The DL320 will get an IBM M1015 RAID card flashed to IT mode.

One SFF-8087 cable will go to the DL320's backplane, and the other will go outside via an SFF-8087 to SFF-8088 adapter.

The SFF-8088 cable will then go back into my old RPC-4220 chassis, where it will hook up to an Intel RES2SV240 SAS expander, which will in turn be connected to the drives in the RPC-4220.

 

(photo of the Intel RES2SV240 SAS expander)

 

The entire system will be running the OpenMediaVault (OMV) distro, which is Debian with a fancy GUI for storage.

The software RAID will look like this:

 

(diagram of the planned software RAID layout)

 

The ETA is currently set to "I dunno". I have ordered most of the parts, but I still need to sync some data with Ssoele so he can act as a backup while the system is down, as I will have to wipe most of the data in this process.

I hope to have it finished by the 31st of October so I can copy the data back to my server when I'm attending a LAN party.

 

But like I said, the ETA is really just "I dunno", as I'm flying out to Istanbul tomorrow for work for an unknown amount of time...

Respect the Code of Conduct!

>> Feel free to join the unofficial LTT teamspeak 3 server TS3.schnitzel.team <<

>>LTT 10TB+ Topic<< | >>FlexRAID Tutorial<< | >>LTT Speed wave<< | >>LTT Communities and Servers<<


Hardware
CASE: Synology DS1817+
PSU: Built-in 250W
UPS: 350W APC Back-UPS
MB: Synology's own
CPU: Intel Atom C2538 (quad core, 2.4 GHz)
HS: Stock Synology heatsink (passive)
RAM: 8GB DDR3 1333
Extension Unit: None
RAID Controller: Intel ICH10R
HDD 1: 8TB Western Digital Red, WD80EFZX-68UW8N0
HDD 2: 8TB Western Digital Red, WD80EFZX-68UW8N0
HDD 3: 8TB Western Digital Red, WD80EFZX-68UW8N0
HDD 4: 8TB Western Digital Red, WD80EFZX-68UW8N0
HDD 5: 8TB Western Digital Red, WD80EFZX-68UW8N0
HDD 6: 8TB Western Digital Red, WD80EFZX-68UW8N0
HDD 7: 8TB Western Digital Red, WD80EFZX-68UW8N0
HDD 8: Empty slot (will be filled when capacity needs increase)
RAID: SHR-2

 

Software and Configuration:
I'm running Synology DSM 6.1.4-15217 Update 5.

The disks are in an SHR-2 configuration with a fault tolerance of two disks. The filesystem is Btrfs.
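For a rough idea of the resulting space (my own estimate, not a figure from this post): with seven identical drives, SHR-2 behaves like RAID 6, so roughly two drives' worth goes to redundancy:

```python
drives, size_tb = 7, 8
usable_tb = (drives - 2) * size_tb            # SHR-2 with identical drives ~= RAID 6
usable_tib = usable_tb * 10**12 / 2**40       # roughly what DSM/Windows will display
print(f"~{usable_tb} TB (~{usable_tib:.1f} TiB) before Btrfs and DSM overhead")
```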
Usage:
I use the storage for movies and series; it has SickBeard and CouchPotato with Transmission set up on it, so that populates my libraries. I have XBMC and Plex running on a dedicated machine. I also use the NAS for practically all my files and as backup for my PCs.
Backup:
Important files are backed up to both Dropbox and Google Drive.

Additional info:

There is a PCIe slot in the DS1817+ that is much talked about. It could host either a 10GbE NIC or an M.2 SSD. In my setup it is currently unpopulated, since throughput and read access time are not an issue for me at the moment.


Photos:

Picture one is of the NAS with the UPS, the second is of the DiskStation Manager showing the config, and the third is how the volumes show up in Windows.


 

 

 


- snip -

Damn, those RAM sticks look so beast. :D

 

 

Good use of an L5420, even the X56xx's are dropping in price to the point that mere mortals can buy them.

Yeah, I paid ~40 USD per CPU when I bought my L5630s in spring (before shipping and VAT). Of course, they're not exactly the most high-end of the line (I paid about 900 USD per CPU for my X5680s back in fall of 2012 IIRC), but still easily enough for a SOHO server. :)

 

 

I am currently number 10 on my own list and this is unacceptable.

Yeah dude, you've been slacking... :P

 

 

- snip -

Not bad, not bad at all. Welcome to the forum, and the list! :)

Damn, the past week has been great for this list!

EDIT:

Oh, we've also passed 300,000 views. (screenshot of the view counter attached)



First off, I would like to say that there are some awesome systems on this board. I am new to the NAS area of computers, but they have always interested me and I am looking into building one myself. I am not sure if this is the place to ask questions, but I was wondering about people's opinions on WD Reds versus WD Greens? I see a large number of people use Greens, but when you go to any computer store they recommend Reds, so are they wrong in recommending Reds, or is this just more about personal preference and cost?


First off, I would like to say that there are some awesome systems on this board. I am new to the NAS area of computers, but they have always interested me and I am looking into building one myself. I am not sure if this is the place to ask questions, but I was wondering about people's opinions on WD Reds versus WD Greens? I see a large number of people use Greens, but when you go to any computer store they recommend Reds, so are they wrong in recommending Reds, or is this just more about personal preference and cost?

Hey, welcome. :)

It depends on how you implement your NAS. Greens lack, among other things, something called TLER, which can lead to trouble with them being dropped from RAID arrays. Very roughly speaking, when such a drive encounters an issue reading data, it will keep trying to read that data for quite a while, until either a time limit is reached or it succeeds.

For many RAID controllers, that time limit is so large that the controller will basically lose its patience and just mark the drive as non-functional (since it hasn't heard from the drive in too long a time span). It will then try to grab the data from another source (be that parity data in a RAID 5/6/7 array or another drive in a mirror configuration) if one is available.

Red drives (or RAID-optimized drives in general) will not keep trying for as long to read data they're having difficulty with, but will instead tell the controller "Boss, I'm having some issues here, get this stuff from one of the other drives. In general I'm fine though." The controller will then get the data from another source without dropping the troubled HDD from its array.

Both approaches make sense in their correct use cases: If you're not running a RAID setup, you probably only have one copy of your data online (although you do hopefully have a backup), so the drive continuing to try to read the data is the sensible approach, because it can't be gotten from anywhere else (except a backup, which often isn't very convenient).

In a RAID array with redundancy this isn't what we want, though, since we have secondary sources for the data, so it doesn't make sense to keep waiting and waiting for that one drive to deliver it if it's much faster to just get it from somewhere else (this doesn't apply to RAID 0 arrays of course, where you have no redundancy at all).

Depending on your RAID controller, the time limit until it tries to fetch the data from another source can be adjusted, but I haven't personally worked with one of those yet, so I can't really say much on how effective that strategy is for working around dropped drives.

I think @MG2R recently had this happen to him, so it can even be an issue with software RAID, not just hardware implementations.

So, if you're just running JBOD (no RAID config), the Greens might be perfectly fine for the job; if you're intending to run RAID configs, the Reds are generally considered more suitable. I'm running both Greens and Reds in that config and so far all is good. :)

I'm sure there are more differences between the Greens and the Reds, but that's the main one I usually see thrown around. Maybe @Captain_WD has some more input or can correct any erroneous assumptions I've made here.
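As a side note (not something from the post above): on drives that support SCT Error Recovery Control, the drive-side timeout described here can be inspected and set from Linux with smartmontools. A minimal sketch, assuming smartctl is installed, root privileges, and /dev/sdX as a placeholder device; many Greens don't expose this setting at all, and on most drives it resets at power-on:

```python
import subprocess

DEVICE = "/dev/sdX"  # placeholder, substitute your actual drive

# Show the current SCT ERC (TLER-style) read/write timeouts, if supported
print(subprocess.run(["smartctl", "-l", "scterc", DEVICE],
                     capture_output=True, text=True).stdout)

# Ask the drive to give up after 7 seconds (the value is in tenths of a second),
# which is in line with what RAID-oriented drives like the Reds ship with
subprocess.run(["smartctl", "-l", "scterc,70,70", DEVICE], check=True)
```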



Alpenwasser, thanks for the info. Just for clarification, I am looking to run a RAID 6 array in FreeNAS without a RAID controller. So for this, based on what I read, I should be fine using Green drives? However, in the future when I want to use an IBM M1015 flashed to IT mode with a RAID 6 array, I should preferably stick with Red drives?


Alpenwasser, thanks for the info. Just for clarification, I am looking to run a RAID 6 array in FreeNAS without a RAID controller. So for this, based on what I read, I should be fine using Green drives? However, in the future when I want to use an IBM M1015 flashed to IT mode with a RAID 6 array, I should preferably stick with Red drives?

Reds in both cases, I'd say. While I haven't found a huge deal of info on how software RAID behaves with regards to dropping unresponsive drives, MG2R's experiences with mdadm lead me to err on the side of caution on this one.



<red vs greens?>

 

I think @MG2R recently had this happen to him, so it can even be an issue with software RAID, not just hardware implementations.

 

I have been running three Greens in a RAID 5 setup for three years now, with two different versions of Greens. I never experienced a problem until a month (or two?) ago, when one of my Green drives fell out of the array for no apparent reason. The drive itself was perfectly fine, but the software RAID controller (mdadm in this case) marked the drive as failed and dropped it.

Since this was a RAID 5, I was able to rebuild the array by re-adding the dropped drive, but while that drive was dropped and the array was rebuilding, my array was vulnerable (one more failed drive, and I would've lost all my data). The issue, although not confirmed, is most likely the TLER quirk mentioned above.
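For anyone curious what that recovery looks like on the command line, a rough sketch of the mdadm side of it (device names are placeholders and this is a generic reconstruction, not MG2R's exact commands; it needs root):

```python
import subprocess

ARRAY = "/dev/md0"     # placeholder array device
DROPPED = "/dev/sdc1"  # placeholder for the member that was kicked out

# Inspect the array first: look for "degraded" and the faulty/removed member
subprocess.run(["mdadm", "--detail", ARRAY], check=True)

# If the drive itself is healthy, it can usually be re-added; mdadm then resyncs
# it against the surviving members (the array stays vulnerable until that finishes)
subprocess.run(["mdadm", "--manage", ARRAY, "--re-add", DROPPED], check=True)

# Watch the rebuild progress
print(open("/proc/mdstat").read())
```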

 

When using a RAID setup, it's generally considered unsafe to use WD Green drives. As such, I have to recommend against buying them if you're planning on running them in RAID.

On the other hand, I've been running them happily for quite some time now, and if you have a system which informs you of a dropped drive immediately, it might actually not be that much more hazardous to use them. Whether you actually should depends on how comfortable you are with taking (calculated) risks with your data.

Whatever you end up doing, be sure to back up your data. Even with a RAID array, backups are very much needed.

Also, you might want to check out Seagate Barracuda drives. In some countries, they're considerably cheaper than WD Reds.

 

 

PS: The TLER issue is an issue with RAID controllers in general, whether they be hardware or software.

 

 

EDIT:

Just checked the drive ages (actual power-on hours, not calendar age, which is higher):

  • Oldest green drive: 25465 hours (4.004 years)
  • Newer green drive 1: 11554 hours (1.319 years)
  • Newer green drive 2: 12446 hours (1.421 years)
  • Seagate Barracuda: 6238 hours (0.712 years)

So, the oldest one has been running way longer than I thought :P
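In case anyone wants to pull the same numbers for their own drives, a small sketch (assumes smartmontools is installed, root privileges, and a placeholder device path; attribute 9, Power_On_Hours, is what most drives report):

```python
import re
import subprocess

DEVICE = "/dev/sdX"  # placeholder, run once per drive

# Grab the SMART attribute table and pick out the raw Power_On_Hours value
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout
match = re.search(r"Power_On_Hours.*?(\d+)\s*$", out, re.MULTILINE)
hours = int(match.group(1))

print(f"{DEVICE}: {hours} hours (~{hours / 8760:.3f} years powered on)")
```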

 

EDIT EDIT:

Lol, the Kingston SSD in the system reports a power-on time of 192199786507591 hours, which seems... a tad high :)


First off, I would like to say that there are some awesome systems on this board. I am new to the NAS area of computers, but they have always interested me and I am looking into building one myself. I am not sure if this is the place to ask questions, but I was wondering about people's opinions on WD Reds versus WD Greens? I see a large number of people use Greens, but when you go to any computer store they recommend Reds, so are they wrong in recommending Reds, or is this just more about personal preference and cost?

 

WD Greens are really good for general storage in your PC. They're low-power, 5400 RPM drives with pretty decent performance in my experience; I have two.

WD Reds, on the other hand, are designed with the NAS environment in mind. In NASes, lots of drives (2-8 drives are recommended by WD) are packed together really closely. The combined vibration of all of those drives can potentially damage or reduce the lifespan of a drive that isn't designed to run in an environment like that. Plus, WD Reds are designed for 24/7 usage; WD Greens aren't built to be spinning 24/7, and it is assumed that you'll be turning off your system or at least letting them sit there idle.

Basically, WD Reds for NASes, WD Greens for your general PC storage. Can't go wrong :)

Fractal Design Arc Midi R2 | Intel Core i7 4790k | Gigabyte GA-Z97X-Gaming GT | 16GB Kingston HyperX Fury | 2x Asus GeForce GTX 680 OC SLI | Corsair H60 2013 | Seasonic Platinum 1050W | 2x Samsung 840 EVO 250GB RAID 0 | WD 1TB & 2TB Green

Notebook: Dell XPS 13 (dat 1080p-ness)


EDIT EDIT:

Lol, the Kingston SSD in the system reports a power-on time of 192199786507591 hours, which seems... a tad high :)

Maybe, just a little.

Or, your drive might just be 21 million millennia old (in which case, I think Kingston has won the award for highest drive endurance, and should probably be awarded a Nobel Prize). ;)

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB

