
Server Build

8 minutes ago, KarathKasun said:

Hardware RAID has no filesystem requirements and no provisions for filesystem level redundancy or load balancing.  It is filesystem agnostic.

 

Storage Spaces (SS) REQUIRES NTFS or ReFS.

Okay? So you ARE referring to hardware controllers. Like I said... RAID is just putting the disks together. Hardware RAID controllers are limited by the software they run, and old controllers suck; you're correct there. The software RAID I refer to is something controlled by an OS. Obviously our definitions of RAID don't line up.

There's no place like ~


4 minutes ago, Razor Blade said:

Okay? So you ARE referring to hardware controllers. Like I said... RAID is just putting the disks together. Hardware RAID controllers are limited by the software they run, and old controllers suck; you're correct there. The software RAID I refer to is something controlled by the OS. Obviously our definitions of RAID don't line up.

Even software RAID is filesystem agnostic. Old Windows software RAID could use any Windows-supported filesystem.

 

MD RAID is filesystem agnostic as well.

 

SS stores volume-map data in the filesystem, making it not RAID, just like ZFS/GlusterFS.


5 minutes ago, KarathKasun said:

Even software RAID is filesystem agnostic. Old Windows software RAID could use any Windows-supported filesystem.

It isn't, though (maybe not anymore; I don't know anything about old Windows RAID). The filesystem you choose when you put disks in an array via the OS depends on the OS you're running.


5 minutes ago, Razor Blade said:

It isn't, though (maybe not anymore; I don't know anything about old Windows RAID). The filesystem you choose when you put disks in an array via the OS depends on the OS you're running.

No it's not. There are plugins for the Windows kernel that add filesystem support, like EXT2/3/4. You could use these with the old soft RAID system.

 

Actually, you could pass through an old SoftRAID volume to a VM and put any FS you wanted onto it.


15 minutes ago, KarathKasun said:

No it's not. There are plugins for the Windows kernel that add filesystem support, like EXT2/3/4. You could use these with the old soft RAID system.

 

Actually, you could pass through an old SoftRAID volume to a VM and put any FS you wanted onto it.

You're talking about virtual disks? As far as I've read, Storage Spaces made its way into Windows 8 and onward and is just the replacement for the legacy software RAID that Windows 7 offered (which, from what I've also read, had data-loss problems). You can still create virtual disks and format them to whatever you want in your VM while the data lives on the drives managed by Windows. But that doesn't sound relevant to what the OP wants.


7 minutes ago, Razor Blade said:

You're talking about virtual disks? As far as I've read, Storage Spaces made its way into Windows 8 and onward and is just the replacement for the legacy software RAID that Windows 7 offered (which, from what I've also read, had data-loss problems). You can still create virtual disks and format them to whatever you want in your VM while the data lives on the drives managed by Windows. But that doesn't sound relevant to what the OP wants.

That's because W8/10 desktop only provides soft RAID, just like W7 and earlier. SS on a server is not the same system; it is a combined volume manager and filesystem (just like ZFS).


Just now, KarathKasun said:

That's because W8/10 desktop only provides soft RAID. SS on a server is not the same system; it is a combined volume manager and filesystem.

Sorry, I don't know what you're talking about. The only thing that comes up when I search for "Soft RAID" is a software package from Other World Computing. I know Wikipedia isn't a great source, but it is fast and convenient.

 

https://en.wikipedia.org/wiki/RAID

 

https://en.wikipedia.org/wiki/Storage_Spaces

 

Again: my definition of software RAID is that the OS puts the disks together in an array and manages them. My definition of hardware RAID is that a hardware controller puts the disks together and presents a virtual disk to the OS. The OS is none the wiser, and the limitation is determined by the firmware on the card. Nothing to do with filesystems.


13 minutes ago, Razor Blade said:

Sorry, I don't know what you're talking about. The only thing that comes up when I search for "Soft RAID" is a software package from Other World Computing. I know Wikipedia isn't a great source, but it is fast and convenient.

 

https://en.wikipedia.org/wiki/RAID

 

https://en.wikipedia.org/wiki/Storage_Spaces

 

Again: my definition of software RAID is that the OS puts the disks together in an array and manages them. My definition of hardware RAID is that a hardware controller puts the disks together and presents a virtual disk to the OS. The OS is none the wiser, and the limitation is determined by the firmware on the card. Nothing to do with filesystems.

Software RAID is where the volume management is handled by the CPU. In hardware RAID the storage controller has its own CPU for parity calculation and sector-level data distribution. It's the same as the difference between software 3D rendering and hardware 3D rendering.

 

OS based RAID (volume only management) falls into software RAID.  RAID on most consumer boards is the same, and this is why it is dependent on a driver for proper functionality.  Most modern servers also use this approach.

 

In SS (on server setups), ZFS, and GlusterFS the splitting of data is done on the cluster level instead of the sector level.  This allows more robust error detection as well as very easy and fast expansion/shrinkage of volume size with the system online.  SS, ZFS, and GlusterFS can also pool disks from different systems together over a WAN/LAN/SAN where RAID can not (unless you want to create an unreliable and slow storage array).
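The striped data distribution described above is easy to sketch. Below is a toy Python illustration of round-robin striping, not any real RAID implementation; the `stripe` function name and the chunk size are made up for the example:

```python
# Toy illustration of striping: data is chopped into fixed-size chunks
# and dealt out to the disks in round-robin order, the way RAID 0
# distributes stripe units. Real controllers do this per sector/stripe
# in firmware or in the driver; the dealing pattern is the same idea.

def stripe(data, disks, chunk=4):
    """Split `data` into `chunk`-byte pieces and deal them round-robin."""
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    layout = [b"" for _ in range(disks)]
    for n, piece in enumerate(pieces):
        layout[n % disks] += piece
    return layout

layout = stripe(b"ABCDEFGHIJKLMNOP", disks=2, chunk=4)
# disk 0 holds b"ABCDIJKL", disk 1 holds b"EFGHMNOP"
```

Filesystem-level systems like ZFS do the equivalent dealing at the cluster/record level instead of at fixed sector offsets, which is what lets them reshape the layout while online.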


8 minutes ago, KarathKasun said:

Software RAID is where the volume management is handled by the CPU. In hardware RAID the storage controller has its own CPU for parity calculation and sector-level data distribution.

 

OS based RAID (volume only management) falls into software RAID.  RAID on most consumer boards is the same, and this is why it is dependent on a driver for proper functionality.  Most modern servers also use this approach. 

 

In SS (on server setups), ZFS, and GlusterFS the splitting of data is done on the cluster level instead of the sector level.  This allows more robust error detection as well as very easy and fast expansion/shrinkage of volume size with the system online.  SS, ZFS, and GlusterFS can also pool disks from different systems together over a WAN/LAN/SAN where RAID can not (unless you want to create an unreliable and slow storage array).

See, I learn something new every day. If that is so, then I (and seemingly most of the rest of the internet) have been using software RAID terminology incorrectly when referring to arrays of disks. Even the wiki article lumps ZFS and BTRFS in with RAID as "advanced file systems" lol

 


RAID 5 and even 6 are not viable with large drives. RAID 10 (RAID 1 + RAID 0) is a simple and effective solution for larger arrays.

 

RAID 0 - offers no protection; simply stripes the drives together into a single volume. At least two drives, but can be more.

RAID 1 - offers single-unit failure protection; pairs of drives are mirrors of each other.

RAID 10 - combination of RAID 1 and RAID 0; logically, mirrored pairs (RAID 1) are striped together (RAID 0).

 

80+ ratings certify electrical efficiency. Not quality.

 


4 minutes ago, brob said:

RAID 10 - combination of RAID 1 and RAID 0; logically, mirrored pairs (RAID 1) are striped together (RAID 0).

RAID 10 or RAID 01, in your opinion?

Don't forget to quote or mention me

 


 

 

Primary PC:

CPU: i5-8600K @Stock  GPU: RTX 2060 Zotac GAMING Amp  RAM: 4x4GB 2400 MHz DDR4 Crucial Ballistix Sport LT  MOBO: Asus Prime Z390-A  HDD: 4 TB 5400 RPM Seagate Barracuda, 2TB 7200 RPM Seagate Backup Plus Ultra Slim  SSD: Inland Professional 120 GB  Soundcard: built in  Case: NZXT H500i  Screen: HP 22cwa IPS

 

Server: Working on it (See https://linustechtips.com/main/topic/1039535-server-build/)

 

Laptop : Microsoft Surface Pro 5:

CPU: i5-7300U  GPU: Intel HD 620  RAM: 2x4GB DDR3 1866 MHz  MOBO: Microsoft Custom  SSD: Internal M.2  Soundcard: built in  Case: lol its a laptop  Screen: see case: lol its a laptop

 

Phone: Google Pixel:

CPU: Qualcomm 821  GPU: Adreno 530  RAM: 4GB LPDDR4X  Storage: 32GB eMMC  Display: 5" 16:9 1080x1920p Corning Gorilla Glass 4

 

Dog: Shorty, Absolute Mutt:

Ears: Floppy  Tail: Long  Paws: Muddy  Fur: Brown

 

Cats: Chili and Cheddar (Don't Ask):

Cute: Yes  Fur: Soft  Tail: In front of you whacking your face

 

Cereal: 

Dry: NOPE  With Milk: Cinnamon Toast Crunch  Milk: Whole % Vitamin D  Hot: Quaker Instant Maple  Steel Cooked: Wegmans with Sweetened Condensed Milk

 

Coffee:

Type: Latte  Caffeinated: very much so  Milk: Yes

 

Game consoles:

PC ALL THE WAY


1 hour ago, AKinsey2468 said:

RAID 10 or RAID 01, in your opinion?

 

Implementation dependent, but RAID 1+0 is usually a little more fault tolerant. https://www.thegeekstuff.com/2011/10/raid10-vs-raid01/ offers a simple, good explanation of the theoretical differences.


1 hour ago, brob said:

 

Implementation dependent, but RAID 1+0 is usually a little more fault tolerant. https://www.thegeekstuff.com/2011/10/raid10-vs-raid01/ offers a simple, good explanation of the theoretical differences.

Thanks


Right. So I'm probably going to implement ZFS or RAID 10/01. Leaning towards RAID right now thanks to everyone's explanations (thanks @brob, @Razor Blade, and @KarathKasun). Does anyone have recommendations for software RAID 01/10 tools, or maybe even "cheap" RAID cards? (I put cheap in quotation marks because I'm not sure such a thing exists...)

 

TIA,

my face


31 minutes ago, AKinsey2468 said:

Right. So I'm probably going to implement ZFS or RAID 10/01. Leaning towards RAID right now thanks to everyone's explanations (thanks @brob, @Razor Blade, and @KarathKasun). Does anyone have recommendations for software RAID 01/10 tools, or maybe even "cheap" RAID cards? (I put cheap in quotation marks because I'm not sure such a thing exists...)

 

TIA,

my face

With two disks you cannot do 1+0/0+1. They require four disks minimum and are not very fault tolerant. They allow for 100% recovery from a single disk failure and (with four disks) roughly a two-in-three chance of 100% recovery from a two-disk failure. If you have a full stripe failure, you have 0% recovery.

 

Disk failures do not necessarily mean that the drive dies; data-cabling and power-cabling failures can also cause array corruption.
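For the four-disk case, the two-disk failure odds can be enumerated directly. A quick Python sanity check, assuming the usual 1+0 layout of two mirrored pairs (the pair numbering is an assumption for illustration):

```python
from itertools import combinations

# Four-disk RAID 10 modeled as two mirrored pairs: (0, 1) and (2, 3).
# The array survives as long as no pair loses both of its members.
MIRRORS = [(0, 1), (2, 3)]

def survives(failed):
    """True if no mirror pair is entirely contained in the failed set."""
    return all(not set(pair) <= set(failed) for pair in MIRRORS)

two_disk_failures = list(combinations(range(4), 2))
survivable = [f for f in two_disk_failures if survives(f)]
# 4 of the 6 possible two-disk failures leave the array intact.
```

The two fatal combinations are exactly the two mirror pairs themselves; any cross-pair pair of failures is recoverable.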


21 minutes ago, KarathKasun said:

With two disks you cannot do 1+0/0+1. They require four disks minimum and are not very fault tolerant. They allow for 100% recovery from a single disk failure and (with four disks) roughly a two-in-three chance of 100% recovery from a two-disk failure. If you have a full stripe failure, you have 0% recovery.

 

Disk failures do not necessarily mean that the drive dies; data-cabling and power-cabling failures can also cause array corruption.

I thought the minimum was 3 disks (yeah, I know that 4 drives is considered better, though). https://www.thegeekstuff.com/2011/10/raid10-vs-raid01/ says that in RAID 10 (with more fault tolerance), drives 1, 3, and 5 in a 6-drive array can fail, whereas in RAID 01 with 6 drives, just 1 and 4 failing will wipe it all out... Correct or fake news?


20 minutes ago, AKinsey2468 said:

I thought the minimum was 3 disks (yeah, I know that 4 drives is considered better, though). https://www.thegeekstuff.com/2011/10/raid10-vs-raid01/ says that in RAID 10 (with more fault tolerance), drives 1, 3, and 5 in a 6-drive array can fail, whereas in RAID 01 with 6 drives, just 1 and 4 failing will wipe it all out... Correct or fake news?

Depends on how the drives are laid out.

 

Remember that 10 and 01 are striped mirrors and mirrored stripes respectively; there is a difference in how the data is laid out.

https://ferdinand-muetsch.de/why-raid-10-is-better-than-raid-01.html

 

Three disks are the minimum for RAID 5, which is striping with distributed parity; it tolerates a single drive failure. RAID 6 is striping with double parity, so it tolerates two drive failures. 10 and 01 REQUIRE 4 disks.
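The geekstuff comparison can be reproduced with a small enumeration. Below is a Python sketch, assuming six disks arranged as three mirrored pairs for RAID 10, and as two three-disk stripe sets that mirror each other for RAID 01 (the specific disk numbering is an assumption for illustration):

```python
from itertools import combinations

DISKS = range(6)

# RAID 10: three mirrored pairs, striped together.
# Fatal only if both members of the same pair fail.
PAIRS = [(0, 1), (2, 3), (4, 5)]

def raid10_survives(failed):
    return all(not set(p) <= failed for p in PAIRS)

# RAID 01: two three-disk stripe sets (disks 0-2 and 3-5), mirrored.
# Survives only while at least one stripe set is completely untouched.
STRIPES = [{0, 1, 2}, {3, 4, 5}]

def raid01_survives(failed):
    return any(not (s & failed) for s in STRIPES)

failures = [set(c) for c in combinations(DISKS, 2)]
r10 = sum(raid10_survives(f) for f in failures)   # 12 of 15 survivable
r01 = sum(raid01_survives(f) for f in failures)   # 6 of 15 survivable
```

So with six disks, RAID 10 rides out 12 of the 15 possible two-disk failures while RAID 01 survives only 6, which matches the article's conclusion.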


OK then. One more question: RAID 5/6 or 10/01? (For the latter I know I'd probably go 10, as there is more failure tolerance.) I ask because I don't know as much about 5 or 6 as I now do about 10/01.


Yikes. RAID 60, 50, 100, 10, 01, 5, 6... my head is spinning.


13 minutes ago, AKinsey2468 said:

Yikes. RAID 60, 50, 100, 10, 01, 5, 6... my head is spinning.

If you have large arrays, 10 is better but nukes half of your capacity. For your use case, 5 (ZFS RAID-Z1) should be fine; it yields the capacity of the number of disks minus one.


37 minutes ago, KarathKasun said:

If you have large arrays, 10 is better but nukes half of your capacity. For your use case, 5 (ZFS RAID-Z1) should be fine; it yields the capacity of the number of disks minus one.

Thanks for trying, but I don't think that was dumbed down enough. It must not be possible for someone like you to dip down to my level...


1 hour ago, AKinsey2468 said:

OK then. One more question: RAID 5/6 or 10/01? (For the latter I know I'd probably go 10, as there is more failure tolerance.) I ask because I don't know as much about 5 or 6 as I now do about 10/01.

RAID 5 in particular, and to some extent RAID 6, is not suitable for large drives. The issue revolves around the time it takes to rebuild the array. Even a small RAID 5 array of 2 TB drives can take over 24 hrs to rebuild, and during that time a further fault will cause the loss of the entire array.

 

RAID levels above 0 are intended to eliminate downtime caused by storage hardware faults. They do not eliminate the need for good backups.

 

If you use two drives, RAID 1 (mirrored drives) is a suitable solution. To expand storage one can then add pairs of drives, changing to a striped mirrored array (RAID 10). Most higher-end motherboards support this through the chipset.
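The rebuild-time concern is easy to ballpark: a rebuild must read or write every sector of the replacement drive, so time scales with capacity over sustained throughput. A minimal sketch; the 100 MB/s figure is an illustrative assumption, not a benchmark:

```python
# Back-of-envelope rebuild time: capacity divided by sustained
# sequential rate. Ignores parity computation and live-workload
# contention, which only make things slower.

def rebuild_hours(capacity_tb, mb_per_s=100.0):
    """Hours for one full sequential pass over a drive of the given size."""
    seconds = capacity_tb * 1e12 / (mb_per_s * 1e6)
    return seconds / 3600

# A 2 TB drive needs ~5.6 hours for the raw pass alone; an 8 TB drive
# needs ~22 hours, and the array is degraded the whole time.
```

This is why large-drive RAID 5 is risky: the bigger the disks, the longer the window in which a second fault kills the array.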


1 hour ago, AKinsey2468 said:

Thanks for trying, but I don't think that was dumbed down enough. It must not be possible for someone like you to dip down to my level...

RAID 5 example...

4x 4 TB disks used, 12 TB of resulting disk space. The extra 4 TB of space is used for parity redundancy. One drive can fail without data loss.

 

Minimum 3 disks.

 

RAID 1 example...

4x 4 TB disks used, 4 TB resulting disk space. Each disk is a copy of the others. Three disks can fail without data loss.

 

Minimum 2 disks.

 

RAID 0 example...

4x 4 TB disks used, 16 TB resulting disk space. The data is split between disks. Zero disks can fail; any failure results in data loss.

 

Minimum 2 disks.

 

RAID 10 example...

4x 4 TB disks used, 8 TB resulting disk space. The data is striped into 2 or more stripe sets, with each stripe having a mirror. One disk in each stripe can fail without data loss.

 

Minimum 4 disks.

 

RAID 01 example...

4x 4 TB disks used, 8 TB resulting disk space. The data is mirrored across two striped sets of disks. Failures confined to one of the striped sets can be survived without data loss.

 

Minimum 4 disks.

 

RAID 10 and 01 only differ significantly as you move to a larger number of disks, where 10 is better. More drives can fail in either layout as the array grows, but if both copies of the same stripe fail, the whole array goes down.
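The worked examples above reduce to a simple usable-capacity rule per level. A sketch in Python; `usable_tb` is a made-up helper name for illustration:

```python
# Usable capacity for n identical disks of `size` TB at each RAID
# level, matching the 4x 4 TB worked examples above.

def usable_tb(level, n, size):
    return {
        "0":  n * size,          # pure striping, no redundancy
        "1":  size,              # every disk is a copy of the others
        "5":  (n - 1) * size,    # one disk's worth of parity
        "6":  (n - 2) * size,    # two disks' worth of parity
        "10": n // 2 * size,     # half the disks are mirrors
        "01": n // 2 * size,     # same capacity cost as 10
    }[level]

# With 4x 4 TB disks: RAID 0 -> 16, RAID 1 -> 4, RAID 5 -> 12,
# RAID 10/01 -> 8, as in the examples above.
```

The capacity math is the easy part; the failure-tolerance differences discussed above are what actually separate the levels.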


Whatever you decide to do, make sure you know what you're buying.

4 hours ago, AKinsey2468 said:

Right. So I'm probably going to implement ZFS or RAID 10/01. Leaning towards RAID right now thanks to everyone's explanations (thanks @brob, @Razor Blade, and @KarathKasun). Does anyone have recommendations for software RAID 01/10 tools, or maybe even "cheap" RAID cards? (I put cheap in quotation marks because I'm not sure such a thing exists...)

 

TIA,

my face

If I were in your position, I would still go for an HBA and use the OS to assemble your array. It is a lot more flexible and tends to be simpler to maintain, IMO.

 

If you decide to get a hardware RAID card and use that to assemble your array, make sure you fully understand the pros and cons of going that route. For example, if you use the write-cache function, a battery is VERY necessary to prevent big problems from a potential power loss. Also, as part of maintenance, keep up with firmware updates and install the necessary ones when possible. If your card dies, you'll need to import your array to a new card (LSI, in my experience, does really well with this), but if your firmware is a dinosaur it is hard to tell what impact that old firmware would have when importing your array to a newer card. That said, depending on the OS and card you go with, there are things like LSI's CLI tool to communicate with your controller without having to reboot, which makes managing the card way easier.

