
[Advice] how to get started - home storage server

Hi all!

 

I'm interested in buying a storage server.

I just need it to store files. Nothing fancy, though I want it to be safe.

I have at least 3 options and it's hard for me to decide; I need more info.

First option:

Buy an off-the-shelf product such as the Western Digital My Cloud EX4100 Expert Series 4-Bay (WDBWZE0000NBK)

and fill it with 2 or 4 HDDs: Western Digital Red 6TB 64MB SATA III (WD60EFRX).

Second option:

Build my own server. It will probably cost more, but it comes with more upgrade options.

Third option:

My PC is old; it has a first-generation i5, so it's at least 7 years old.

Since I'm gaming on consoles, it doesn't really bother me.

But I was thinking about buying a new PC and configuring a RAID inside it.

 

A few questions along the way:

How likely is it for a RAID controller to fail?

What's better: a RAID card or the onboard RAID controller on a motherboard?

Which RAID config is best for safety: RAID 1, 1+0, 5, or 6?

 

Thank you in advance!

Death.

 

PS:

I'm buying from either a local store or Amazon, so if you want to give me a link to anything, Amazon would be great.


If all you want is basic data storage, grab an old Dell OptiPlex for free (they're everywhere) and use FreeNAS. That's what I've done, and it's worked wonders.

Main System: Phobos

AMD Ryzen 7 2700 (8C/16T), ASRock B450 Steel Legend, 16GB G.SKILL Aegis DDR4 3000MHz, AMD Radeon RX 570 4GB (XFX), 960GB Crucial M500, 2TB Seagate BarraCuda, Windows 10 Pro for Workstations/macOS Catalina

 

Secondary System: York

Intel Core i7-2600 (4C/8T), ASUS P8Z68-V/GEN3, 16GB GEIL Enhance Corsa DDR3 1600MHz, Zotac GeForce GTX 550 Ti 1GB, 240GB ADATA Ultimate SU650, Windows 10 Pro for Workstations

 

Older File Server: Yet to be named

Intel Pentium 4 HT (1C/2T), Intel D865GBF, 3GB DDR 400MHz, ATI Radeon HD 4650 1GB (HIS), 80GB WD Caviar, 320GB Hitachi Deskstar, Windows XP Pro SP3, Windows Server 2003 R2


Go for a home server with FreeNAS or unRAID. You can get a small case with 4+ drive bays (I would recommend external drive bays for easy access) and use a Celeron + ITX mobo. If you go with a motherboard that doesn't have an embedded CPU, you can even upgrade it to a powerful home server later with an i5 or i7.

If you don't want to bother and just want things to work, go with the first option.

 

How likely is it for a RAID controller to fail? - It depends. But it's super easy to replace it with a new one of the same model; it will just read the RAID topology and continue from where the old controller left off.

What's better: a RAID card or the onboard RAID controller on a motherboard? - If you go with RAID 5 or 6, definitely a RAID card. It takes DAYS to rebuild RAID 5 on an integrated controller. But double-check that the card has hardware RAID 5 acceleration.

Which RAID config is best for safety: RAID 1, 1+0, 5, 6? - I would go for 10 or 5, depending on whether I want more storage (5) or better read/write speeds (10).


I have a similar question and don't really want to start a whole new topic. I've been looking to get a NAS + Plex server for a while, and previously I've been looking at a dumb NAS + Mac mini/NUC, but I've found:

  • Dell PowerEdge 1950, 2x dual-core, 6x 3.5-inch hot-swap (~€100)
  • Dell PowerEdge 1950, 2x quad-core, 6x 3.5-inch hot-swap (~€150)
  • HP ProLiant DL380 G5, 8x 2.5-inch hot-swap slots (~€200)

Since I don't really know anything about servers, would getting any of these work, and if so, how would I go about it? Those are, if I understand correctly, Xeons; do I just put something like FreeNAS on them? What about the Plex server part: would the CPUs be enough to transcode 1080p?

    
Also, would a Synology DS1813+ with 4x 4TB for €850 be worth it?

“I like being alone. I have control over my own shit. Therefore, in order to win me over, your presence has to feel better than my solitude. You're not competing with another person, you are competing with my comfort zones.”  - portfolio - twitter - instagram - youtube


26 minutes ago, ElfenSky said:

I have a similar question and don't really want to start a whole new topic. I've been looking to get a NAS + Plex server for a while, and previously I've been looking at a dumb NAS + Mac mini/NUC, but I've found:

  • Dell PowerEdge 1950, 2x dual-core, 6x 3.5-inch hot-swap (~€100)
  • Dell PowerEdge 1950, 2x quad-core, 6x 3.5-inch hot-swap (~€150)
  • HP ProLiant DL380 G5, 8x 2.5-inch hot-swap slots (~€200)

Since I don't really know anything about servers, would getting any of these work, and if so, how would I go about it? Those are, if I understand correctly, Xeons; do I just put something like FreeNAS on them? What about the Plex server part: would the CPUs be enough to transcode 1080p?

Also, would a Synology DS1813+ with 4x 4TB for €850 be worth it?

 

These are super old servers; I get that they are cheap, but they are power hungry and most likely very loud. You should go with something less old, with a Xeon X56** or better. And look for a tower server if you don't have a rack at home.

For the OS you can use anything you like, as long as it works for you. But you will probably have driver issues if you try to install Windows 7 or 10 on these servers. And you need a Windows Pro version to run on a dual-socket server.


3 minutes ago, Artyomka said:

 

These are super old servers; I get that they are cheap, but they are power hungry and most likely very loud. You should go with something less old, with a Xeon X56** or better. And look for a tower server if you don't have a rack at home.

For the OS you can use anything you like, as long as it works for you. But you will probably have driver issues if you try to install Windows 7 or 10 on these servers. And you need a Windows Pro version to run on a dual-socket server.

I have a rack (it's in the network room of my dorm, holding the gigabit switch to give everyone ethernet),

but reading what you say, I guess the issues aren't worth it.

Perhaps getting a NAS that can run PMS and just converting everything into MP4s so I can Direct Stream might be the better solution.


The first thing you need to ask yourself is: what do you need it to do? I heard Plex mentioned...

 

Are you going to be transcoding multiple videos at one time? How many people will be streaming off your Plex server at any given time?

 

I currently run a Gigabyte BRIX with an A8 processor as my Plex/SABnzbd server. Not ideal, but it works great for transcoding one video for remote streaming and for watching movies at home. All of my data is stored on a Drobo 5N, which has worked great for the past few years; I've never had an issue with the Drobo.

 

What I like about the BRIX is its small form factor, so I just have it mounted on the back of my monitor and use VNC to access it from my gaming computer to manage anything I need.


Thanks for the (quick) replies!

 

I see all of you chose option #2: build a dedicated computer with a low-end CPU, etc.

 

Why not option #3 (buy a new computer for myself and use it for the RAID)?

Plus, what about the questions I asked?

(

How likely is it for a RAID controller to fail?

What's better: a RAID card or the onboard RAID controller on a motherboard?

Which RAID config is best for safety: RAID 1, 1+0, 5, or 6?

)

 

I want to understand it better and make the right decision.

 

Thanks!


Simplest solution?

A QNAP or Synology NAS loaded with drives. Pair it with a Skull Canyon NUC and it's off to the races.


3 hours ago, Death Whisperer said:

How likely is it for a RAID controller to fail?

What's better: a RAID card or the onboard RAID controller on a motherboard?

Which RAID config is best for safety: RAID 1, 1+0, 5, or 6?

Why exactly do you want RAID?


10 hours ago, Death Whisperer said:

I'm interested in buying a storage server. I just need it to store files; nothing fancy, though I want it to be safe.

How likely is it for a RAID controller to fail?

What's better: a RAID card or the onboard RAID controller on a motherboard?

Which RAID config is best for safety: RAID 1, 1+0, 5, or 6?

 

I would recommend ECC RAM particularly for a storage server.

 

How likely it is for a RAID controller to fail depends to some extent on what card you get.

 

Never ever use on-board RAID when it's fake RAID and/or when you are not sure that you can get a replacement that will be able to read your data.

 

RAID-6 is better than RAID-5, and RAID-1 is theoretically better than RAID-5.  RAID-1+0 is redundancy plus striping.

 

The WD Reds are pretty good disks for the money, though a bit on the slow side. Last time I checked (which was recently), 4TB were more cost efficient than 6TB, but you might want to do some math to see how that works out for you when you factor in the power consumption.

 

You can use mdraid, which is software RAID. It comes with a performance penalty but costs nothing. You can use ZFS, which is also a kind of software RAID, also costs nothing, and does block-level checksumming to keep your data safe. Unfortunately, ZFS has poor performance and remains a bit of an alien in that it isn't integrated into the Linux kernel.
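For what it's worth, a minimal mdraid mirror is only a handful of commands. This is just a sketch; /dev/sdb, /dev/sdc and the mount point are placeholders for whatever disks you actually use:

# Create a two-disk RAID-1 (mirror) array
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage
# Watch the initial sync (and later checks/rebuilds)
cat /proc/mdstat
# Record the array so it assembles on boot (the file is /etc/mdadm.conf on some distros)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf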

 

You can use a RAID controller. If you do, make sure it has a BBU. They can be rather expensive, and you want to verify that the model you're looking at does support disks larger than 2TB. Never use SSDs that aren't specifically designed for this application with a RAID controller.

 

You can use a cheap RAID controller which is unlikely to fail and (currently) easy to replace by getting an HP Smart Array P410. If you get one, make sure you get 1GB cache and a BBU or a capacitor with it. With current firmware (which unfortunately can be difficult to get), they do support disks larger than 2TB; I know for sure that 4TB Reds work on them. If you have a board with one of these quirky EFI BIOSes, you may or may not get the P410 to work. If it doesn't work, you need to disable loading/executing the ROM of the controller in the BIOS, and if it then works, you cannot boot from the controller, i.e. you need extra boot disks. IIRC, each P410 needs an x8 slot or better. You can connect up to 8 SAS or SATA disks to one.

 

Considering that the firmware can be difficult to get, it's questionable whether to recommend them or not. I advise against LSI cards from Dell, because the two I tried both didn't work at all with the same hardware the P410s work with.

 

Of course you can turn your old computer into a storage server. If you want to use software RAID or ZFS, you'd only have to buy the disks.

 

 

Other than that, buying a 19" server for storage is difficult, to say the least. Anything halfway recent uses 2.5" disks, and look what you'd pay for a 4TB 2.5" disk. Older ones require too much power and are, by far, too loud.

 

If you want something like that, you need to find a Chenbro case for a reasonable price (i.e. used) that has either 16 or 24 3.5" bays and start with that. Make sure the caddies are included when you buy one, and you also need the screws for them. If it comes with a redundant PSU, the PSU will be loud, and you probably want to switch it out, unless you don't mind loud. If you do mind loud, you also need to do something about the seven 80mm fans that are in the case and run at full speed all the time. Even if you don't mind loud, the whole thing is more like deafening than loud unless you do something about it. You do need fans, because 16 disks turn about 160W into heat. You can get it reasonably quiet, though, probably even very quiet if you're willing to spend a fortune on fans. Even if you use hardware RAID with two P410s, it is much cheaper and probably way better than wasting €850 on a pre-built NAS.

 

I almost forgot: The P410 requires cooling, like a fan blowing at the rear of the card where it has some sort of heatsink. It doesn't need much, but without some airflow, they get very hot (and probably fail sooner or later).

 


12 minutes ago, heimdali said:

The WD Reds are pretty good disks for the money, though a bit on the slow side. Last time I checked (which was recently), 4TB were more cost efficient than 6TB, but you might want to do some math to see how that works out for you when you factor in the power consumption.

You can find 8TB WD Reds in external enclosures and shuck them for the best bang for the buck when you find them on sale, but I think they're on to it.

 

 

I just finished an unRAID server and it has been a very good experience. I like the ability to expand the number of drives as needed in the future over a purpose-built NAS like QNAP or similar; it also has a good support forum and the ability to run many VMs if you're into that kind of thing. I have Plex running, so I have that covered. My current system is a pre-built, locked i7-6700 with a 6-drive license for unRAID, for a total storage of 12TB (3x 4TB data, 1x 4TB parity, and 2x 240GB SSDs for cache). The only things I had to purchase when I started were the WD Red drives, so it was cheap to get started and I can expand as needed. It pulls 40-50 watts from the wall as currently configured and sits at 25°C at idle.

My daily driver: The Wrath of Red: OS Windows 10 home edition / CPU Ryzen TR4 1950x 3.85GHz / Cooler Master MasterAir MA621P Twin-Tower RGB CPU Air Cooler / PSU Thermaltake Toughpower 750watt / ASRock x399 Taichi / Gskill Flare X 32GB DDR4 3200Mhz / HP 10GB Single Port Mellanox Connectx-2 PCI-E 10GBe NIC / Samsung 512GB 970 pro M.2 / ASUS GeForce GTX 1080 STRIX 8GB / Acer - H236HLbid 23.0" 1920x1080 60Hz Monitor x3

 

My technology Rig: The wizard: OS Windows 10 home edition / CPU Ryzen R7 1800x 3.95MHz / Corsair H110i / PSU Thermaltake Toughpower 750watt / ASUS CH 6 / Gskill Flare X 32GB DDR4 3200Mhz / HP 10GB Single Port Mellanox Connectx-2 PCI-E 10GBe NIC / 512GB 960 pro M.2 / ASUS GeForce GTX 1080 STRIX 8GB / Acer - H236HLbid 23.0" 1920x1080 60Hz Monitor HP Monitor

 

My I don't use RigOS Windows 10 home edition / CPU Ryzen 1600x 3.85GHz / Cooler Master MasterAir MA620P Twin-Tower RGB CPU Air Cooler / PSU Thermaltake Toughpower 750watt / MSI x370 Gaming Pro Carbon / Gskill Flare X 32GB DDR4 3200Mhz / Samsung PM961 256GB M.2 PCIe Internal SSDEVGA GeForce GTX 1050 Ti SSC GAMING / Acer - H236HLbid 23.0" 1920x1080 60Hz Monitor

 

My NAS: The storage miser: OS unRAID v. 6.9.0-beta25 / CPU Intel i7 6700 / Cooler Master MasterWatt Lite 500 Watt 80 Plus / ASUS Maximus viii Hero / 32GB Gskill RipJaw DDR4 3200Mhz / HP Mellanox ConnectX-2 10 GbE PCI-e G2 Dual SFP+ Ported Ethernet HCA NIC / 9 Drives total 29TB - 1 4TB seagate parity - 7 4TB WD Red data - 1 1TB laptop drive data - and 2 240GB Sandisk SSD's cache / Headless

 

Why did I buy this server: OS unRAID v. 6.9.0-beta25 / Dell R710 enterprise server with dual xeon E5530 / 48GB ecc ddr3 / Dell H310 6Gbps SAS HBA w/ LSI 9211-8i P20 IT / 4 450GB sas drives / headless

 

Just another server: OS Proxmox VE / Dell poweredge R410


15 hours ago, geo3 said:

Why exactly do you want RAID?

Because I want to ensure my data is safe, plus one drive isn't really enough; currently I have 7TB spread over 4 different HDDs.

 

13 hours ago, heimdali said:

 

You can use a cheap RAID controller which is unlikely to fail and (currently) easy to replace by getting an HP Smart Array P410.

Thanks for the detailed reply!

I've looked for P410 cards on Amazon and eBay; not too expensive. Also, how do I connect it to HDDs? I suppose I'll need a special cable that connects to the P410 on one end and to the HDDs on the other?

 

My PC:

i5-760

Gigabyte GA-P55-UD3L (SATA II, 3Gb/s)

GTX 560 Ti

8GB of RAM

Will it be enough for anything?

 

I think I like the idea of software RAID. I've googled it and I'll keep reading about it; maybe I'll do a little experiment with two 1TB HDDs just for the experience.

 

I've picked 6TB Red HDDs because I can't get anything bigger than that locally, and they give the best value for the price here.

To start, I'll buy either 2 or 4 of these. Four 6TB HDDs cost 1203 USD.

 

If I buy 2 now and configure them in RAID 1, will I be able to read them on other computers as regular HDDs? Does the RAID controller/software just automatically copy data from one to the other?

I think before going all in, maybe I should buy 2 now and gain some experience with RAID configs; also, that way I space out their starting dates a bit, so it's unlikely they'll fail at the same time. Hopefully.

 

mrbilky

Which model of QNAP did you buy? I've searched for them and one machine with 6 slots (empty) costs over 1000 USD.

 

Thanks!

 

 

 


2 hours ago, Death Whisperer said:

Because I want to ensure my data is safe

Are you running backups? Backups make your data safe; RAID, not so much. RAID will only protect from physical drive failure and reduce downtime. Backups protect from a much wider array of problems. RAID is the icing on the data-integrity cake.
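Even a simple scheduled rsync to a second disk or another machine already counts as a backup; a rough sketch, with made-up paths and hostname:

# Mirror the data directory to a backup disk (removes files on the backup that were deleted from the source)
rsync -a --delete /srv/data/ /mnt/backup/data/
# Or push an offsite copy over SSH
rsync -a /srv/data/ backupuser@offsite.example.com:/backups/data/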


2 hours ago, Death Whisperer said:

Which model of QNAP did you buy? I've searched for them and one machine with 6 slots (empty) costs over 1000 USD.

4- and 5-bay NASes are much cheaper; 6+ bay options jump up in price a lot.

 

I'd say use your old i5 computer; you already have it and it'll do everything you need with no upgrades beyond HDDs.

 

I personally wouldn't use ZFS for such a small setup either, since HDD expansion is not as easy as with other options. With ZFS you can't add HDDs to existing vdevs, so you won't be popping in one HDD to expand your storage; that's just not how ZFS works.
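To illustrate: with ZFS you grow a pool by adding a whole new vdev, not by sticking one more disk into an existing one. A rough sketch (pool name and devices are made up):

# Create a pool from one mirrored pair
sudo zpool create tank mirror /dev/sdb /dev/sdc
# Expanding later means adding another whole vdev (another pair), which gets striped with the first
sudo zpool add tank mirror /dev/sdd /dev/sde
# Check layout and capacity
sudo zpool status tank
sudo zpool list tank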

 

You've really got two fundamental options to pick from: software RAID or hardware RAID.

 

Software RAID:

  • Windows + Storage Spaces/DriveBender/FlexRAID etc.
  • Linux + mdadm/BTRFS/ZFS (know its limitations and nuances before going into ZFS)

Software RAID has the benefit of not requiring any extra hardware until you run out of SATA ports; you can get more with an HBA (not a RAID card, which is important). Performance varies a lot depending on which software RAID you are using.

 

Hardware RAID:

  • Windows: nothing special, just format the logical disk that it sees from the RAID card
  • Linux: same as above, but it's a good idea to combine it with LVM

Hardware RAID requires that you buy a RAID card, and it's only worth it if you also get the BBU/flash cache module for it so you get write-back cache; that way RAID 5/6 performance is good, actually very good nowadays (500MB/s-800MB/s writes even with only a few disks). If you have fewer than 6 disks, use RAID 5, and once you go over that, use online RAID level migration to convert the array to RAID 6. You're not always locked into a configuration, but not everything can be changed afterwards.

 

I would like to note that unregistered ECC RAM isn't actually worth it; it has a very limited ability to correct errors, and that is what most low-end systems use. This is one of the reasons why off-the-shelf NASes from QNAP/Synology don't bother to use it even when the CPU supports it; only fully featured Xeons support registered ECC, which is the actually useful kind of ECC RAM.


16 hours ago, heimdali said:

I almost forgot: The P410 requires cooling, like a fan blowing at the rear of the card where it has some sort of heatsink. It doesn't need much, but without some airflow, they get very hot (and probably fail sooner or later).

Yeah, they get really hot; too hot to touch if not properly cooled. 10Gb cards fall under this warning too.


5 hours ago, Death Whisperer said:

Which model of QNAP did you buy? I've searched for them and one machine with 6 slots (empty) costs over 1000 USD.

 

Thanks!

 

 

 

None, actually; I used a pre-built desktop PC. That's why I like a purpose-built unit, which can be expanded when needed to over 24 drives, over a QNAP or any other similar unit. I used that for a while, but I moved the hardware (i.e. RAM, CPU) over to a new mobo and case that had more room.


7 hours ago, Death Whisperer said:

I've looked for P410 cards on Amazon and eBay; not too expensive. Also, how do I connect it to HDDs? I suppose I'll need a special cable that connects to the P410 on one end and to the HDDs on the other?

To start, I'll buy either 2 or 4 of these. Four 6TB HDDs cost 1203 USD.

If I buy 2 now and configure them in RAID 1, will I be able to read them on other computers as regular HDDs? Does the RAID controller/software just automatically copy data from one to the other?

 

 

 

If you want your data to be safe, there is no way around RAID and several generations of backups, some of which should be offsite. You can use RAID-0 for backups. If you don't mind giving your data out of your hands, you could use a service like Backblaze offers, but I wouldn't do that.

 

Disks do fail; the only question is when. I never store data on a single disk only, and even if the data can be replaced, I use RAID because it's not worth the hassle you get into when a disk fails.

 

Using the computer you have and trying out software RAID sounds like a good plan.  You may even be happy with the performance and/or might not mind the performance penalty you get from it.

 

$1200 for 4x6TB? That effectively gives you about 11TB in a RAID-1+0. A WD40EFRX costs €138 here. 8 of them in a RAID-1+0 (with hardware RAID) give you 15TB and better performance, and would cost about the same. Just keep in mind that the more disks you have, the more can fail, and more disks need more power.

 

Hardware and software RAID do a bit more than simply copying the data over, so you can't just take a disk out of a RAID and read it without the matching controller or software. Hardware RAID controllers also perform consistency checking all by themselves in the background; mdraid can be (and usually is) configured to do that at some point. Apparently it does that by resyncing the RAID (I don't know for sure), which can make the machine somewhat unusable under certain conditions. However, you can adjust the rate at which a resync is performed, and setting it low enough can work around the problem.
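With mdraid, the check and the resync rate limit I mentioned live under /sys and /proc; roughly like this, with md0 standing in for your array:

# Start a consistency check manually (distros usually schedule this via cron or a systemd timer)
echo check | sudo tee /sys/block/md0/md/sync_action
# Watch progress
cat /proc/mdstat
# Throttle check/resync speed (values in KB/s) so the machine stays responsive
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_max
echo 1000 | sudo tee /proc/sys/dev/raid/speed_limit_min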

 

As to performance, there are basically two factors: how much data you can read/write in a given amount of time to/from your storage system, and latency. Depending on the application, either of them, or both, are relevant. With storage over a 1Gb network, you get about 100MB/s. Either your storage system can deliver that or not; if it delivers more, you need more bandwidth to get more, provided that the sending/receiving machine can also handle that rate. However, latency can be far more crucial (and tends to be); it translates into IOPS. Your disks can be as fast as they want in terms of data rate and will still not be able to keep up if they don't manage enough IOPS. If you use the storage for backups of a single machine, IOPS don't really matter; use it for several machines at the same time, and they start to matter. If you run any kind of server (database, file, web, IP cameras etc.), even for a few users, IOPS do matter. A lack of IOPS translates into wait states, meaning your CPU is waiting for I/O to finish. 2% wait states is already annoying, 5% is awful; anything above that and you know you'll have to wait on things because your storage system kinda sucks. Peak performance is pretty much irrelevant unless you can arrange things so that you always have it; sustaining 100MB/s constantly and indefinitely is a different thing, as is constantly and indefinitely sustaining a high number of IOPS. You'll figure that out when you start to experiment :)
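If you want to watch those numbers on your own machine, two stock tools are enough (iostat comes with the sysstat package on most distros):

# Per-device IOPS (r/s, w/s), average latency (await) and utilisation, refreshed every 5 seconds
iostat -x 5
# The 'wa' column here is the share of CPU time spent waiting on I/O, i.e. the wait states mentioned above
vmstat 5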

 

You do need a special cable (or better: a suitable one) to connect to a P410 (or other cards). The P410 has two connectors usually known as mini-SAS connectors. Each of them handles four drives. You can get various cables to connect to different types of SAS connectors, like cables to connect to a backplane. You can also get a cable with a mini-SAS connector on one end that spreads out into four SATA connectors, with which you could connect four individual SATA (or SAS) disks. SATA connectors on disks are different from SAS connectors, so if you get anything with a backplane, make sure the backplane fits both. If you get a P410 or the like, you may want to get SAS disks for the system to boot from, because used 2.5" 72GB 15k RPM SAS disks are dirt cheap (like $10-15) and blazingly fast.

 

If you want to venture into btrfs, you can easily use the various disks you already have, because btrfs handles differently sized disks nicely in a RAID-1-like manner. If you stay with btrfs RAID-1, it should be OK for backups. Btrfs is not considered production ready, and Red Hat has, amazingly, decided to no longer support it (i.e. even to remove it entirely), starting with RHEL 8. That should tell you something ...
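If you do try it, a btrfs RAID-1 over mixed-size disks is about this much work; device names and mount point are examples only:

# Create a btrfs RAID-1 for data and metadata across two differently sized disks
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
sudo mount /dev/sdb /mnt/backup
# Add another old disk later and rebalance so everything is mirrored again
sudo btrfs device add /dev/sdd /mnt/backup
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/backup
# See how space is spread across the devices
sudo btrfs filesystem usage /mnt/backup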

 

Stay away from LVM unless you really, really can't find some way around it. It works, but it's a nightmare. The only thing I might use it for is creating a VG over several RAIDs, and only because it would be the most suitable way that comes to mind for creating a large volume. You can probably do mdraid with LVM volumes, though I'd do that only for experimentation.

 


1 hour ago, heimdali said:

Btrfs is not considered production ready, and Red Hat has, amazingly, decided to no longer support it (i.e. even to remove it entirely), starting with RHEL 8. That should tell you something ...

BTRFS is much better now; that whole 'not ready' stuff is rather outdated. The sticking point most people had with it was the lack of an fsck to repair the filesystem if it got corrupted for some reason, but now you can do that.

 

Quote

btrfs check --repair

 

I wouldn't read too far into what Red Hat has done; they have their own plans going on, and btrfs may just not fit in with them, so there's no need for them to support it anymore.

 

Quote

In summer 2012, several Linux distributions moved Btrfs from experimental to production or supported status.[27][28] In 2015, Btrfs was adopted as the default filesystem for SUSE Linux Enterprise Server 12

 

Ceph was also in the process of transitioning to btrfs as its recommended and default filesystem, but they have actually decided not to use any kernel-level filesystem at all and instead to use disks as raw devices and handle everything themselves; they call it BlueStore, and the old way is FileStore (XFS or btrfs).


2 hours ago, heimdali said:

As to performance, there are basically two factors: how much data you can read/write in a given amount of time to/from your storage system, and latency.

 

 

 

Ah yes, but the beauty of unRAID (not sure about FreeNAS) is that you have a cache disk (in my case 2x 240GB SSDs): files are written to that, and the mover offloads them to the array during off-peak times set by you, maintaining higher read/write speeds in the process.


1 hour ago, leadeater said:

BTRFS is much better now; that whole 'not ready' stuff is rather outdated. The sticking point most people had with it was the lack of an fsck to repair the filesystem if it got corrupted for some reason, but now you can do that.

 

 

I wouldn't read too far into what Red Hat has done; they have their own plans going on, and btrfs may just not fit in with them, so there's no need for them to support it anymore.

 

 

Ceph was also in the process of transitioning to btrfs as its recommended and default filesystem, but they have actually decided not to use any kernel-level filesystem at all and instead to use disks as raw devices and handle everything themselves; they call it BlueStore, and the old way is FileStore (XFS or btrfs).

They have a status page for btrfs: https://btrfs.wiki.kernel.org/index.php/Status

IIRC, it looked somewhat worse when I looked at it a couple of weeks ago. However, it requires you to use a very recent or the latest kernel, which may or may not be an option.

 

So far, I've used btrfs a little and never had any issues. I don't entrust it with my data, though.

 

I wonder why Red Hat decided to drop it; it doesn't make sense to me, and it seems like a very bad decision because it provides features for which there is no alternative but ZFS, which has its own disadvantages. I find it silly, though, that btrfs doesn't support swap. That forces you to use mdraid for swap, and it's awkward having to use multiple kinds of software RAID at the same time.

 


22 minutes ago, heimdali said:

I find it silly, though, that btrfs doesn't support swap

What do you mean, doesn't support swap? If you mean drive replacement, it does; it'll get stuck if the disk has bad sectors, but so will any other traditional RAID. You just fail the disk completely, then replace and rebuild to get around that. Replace is just a clean swap.
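The clean-swap path is the btrfs replace command, roughly like this (device names and mount point are just examples):

# Replace /dev/sdb with /dev/sdd in the filesystem mounted at /mnt/backup
sudo btrfs replace start /dev/sdb /dev/sdd /mnt/backup
sudo btrfs replace status /mnt/backup
# If the old disk is already dead/missing, look up its devid and use that instead of the device path
sudo btrfs filesystem show /mnt/backup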

 

22 minutes ago, heimdali said:

However, it requires you to use a very recent or the latest kernel, which may or may not be an option.

Core functions have been stable for a while, which is why SUSE switched to it as its default. It's the extras like dedup and compression that have, for whatever reason, struggled to get rock solid.

 

Interestingly enough, I just saw that Google is looking at using btrfs for Android while reading about this today.

https://www.theregister.co.uk/2017/08/16/red_hat_banishes_btrfs_from_rhel/

 

Edit:

Oh right, I think you mean using btrfs for a swap partition; that wasn't in your link :P


1 hour ago, mrbilky said:

Ah yes, but the beauty of unRAID (not sure about FreeNAS) is that you have a cache disk (in my case 2x 240GB SSDs): files are written to that, and the mover offloads them to the array during off-peak times set by you, maintaining higher read/write speeds in the process.

I never tried unRAID, so I can't say anything about that. There seem to be RAID controllers that can use SSDs for cache. I doubt the P410 can do that, but I haven't tried.

 

In practice, I did a test with an application to compare performance between btrfs on two SSDs and XFS on a hardware RAID-1+0 of eight 4TB WD Reds on the same machine, because it turned out that the application greatly benefited from having the data on SSDs (which I happened to use with another machine that is being moved around a lot, so SSDs seemed a good choice). Application performance was the same, and that is even with the cache of the RAID controller set to 100% read because it doesn't have a BBU yet.

 

I wouldn't expect any improvement from using two SSDs as cache in this situation unless the array were somewhat heavily utilized, which it won't be. Performance is probably better than what only two SSDs can deliver (unless they were expensive enterprise SSDs which can actually sustain a constant load, which would be overkill) and will get significantly better when I put the BBUs in.

 

Now ask yourself what is more reliable and will give better overall performance: btrfs on two consumer SSDs or XFS on the hardware RAID, running CentOS. Where would you put the data?

 

What does unRAID do when the cache disk fails before the data has been stored securely?

 


8 minutes ago, heimdali said:

I never tried unRAID, so I can't say anything about that. There seem to be RAID controllers that can use SSDs for cache.

UnRAID is a software product; it's a hypervisor (KVM) and NAS combo. It's sort of like Proxmox.

 

Edit:

9 minutes ago, heimdali said:

I wouldn't expect any improvement from using two SSDs as cache in this situation unless the array were somewhat heavily utilized, which it won't be. Performance is probably better than what only two SSDs can deliver (unless they were expensive enterprise SSDs which can actually sustain a constant load, which would be overkill) and will get significantly better when I put the BBUs in

UnRAID really does benefit from cache devices. You'll have to forgive my lack of understanding about how UnRAID works, because I don't use it (and wouldn't want to), but it uses a dedicated parity device and is more file-based than block-based, so that is why performance is so much better with a cache device.

 

I'm really not a fan of UnRAID.


10 minutes ago, leadeater said:

Oh right, I think you mean using btrfs for a swap partition; that wasn't in your link :P

Well, yes, though not really a partition, I guess. By default, I install the system on at least two disks in a RAID-1, and if I were to use btrfs, then what about swap?

 

I could use more disks in some RAID setup for a swap partition. That requires more disks, entirely dedicated to swap.

 

I could partition the disks I install on and use mdraid to make a swap partition. That is awkward in two ways: I need to run yet another RAID software, and I need to partition the disks, whereas btrfs is meant to use whole devices, which is much nicer. (I usually don't partition drives unless I actually want partitions, and just create whatever FS on the whole device. There is no point in having partitions when you don't need them. I also do not partition drives in a software RAID, but partition the logical volume made from the whole devices if I want partitions: if I want to change the partitioning some time later, I can do that more easily when the logical volume is partitioned, because the RAID doesn't need to be changed.)
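For what it's worth, the mdraid-for-swap variant is only a few commands; the partitions below are hypothetical:

# Mirror two small partitions and use the result as swap
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
sudo mkswap /dev/md1
sudo swapon /dev/md1
# /etc/fstab entry to make it permanent:
# /dev/md1  none  swap  sw  0  0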

 

I could use a swap file, but btrfs does not support swap files. You also can't use the RAID features of btrfs to create a swap partition.

 

Whichever option you use, it's just bad. That makes btrfs rather awkward.

 
