
RAID 0 and RAID 1 Combo

ThePurrringWalrus

Here is what I have going on.

 

I have four 1TB 860 EVO M.2 SSDs.

 

I have two 2.5" drive bay RAID adapters with onboard RAID 0 or RAID 1 capability for the two drives installed on each card. Each adapter card shows up as a single drive and uses only one SATA cable.

 

My mobo has an integrated RAID controller.

 

I'm thinking I should be able to run each adapter in RAID 0 for its onboard M.2 drives, then use RAID 1 on the mobo for redundancy. That way I can basically have a fast 2TB array for editing while still having full redundancy in case one of the SSDs releases its factory-installed blue smoke.

 

Does anyone see this not working?
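
Quick sanity check I sketched on the failure math. The A/B/C/D drive labels and pairings below are just placeholders (A and B on one adapter, C and D on the other), comparing what I'm proposing (a mirror of two hardware stripes) against a plain stripe of mirrors:

# Rough sketch: which two-drive failures survive RAID 0+1 vs RAID 10?
# Drive labels A/B/C/D are hypothetical: A+B on adapter 1, C+D on adapter 2.
from itertools import combinations

drives = ["A", "B", "C", "D"]

def survives_raid01(dead):
    # Mirror of two stripes: the array lives while at least one stripe set
    # (A+B or C+D) is fully intact.
    stripe1_ok = "A" not in dead and "B" not in dead
    stripe2_ok = "C" not in dead and "D" not in dead
    return stripe1_ok or stripe2_ok

def survives_raid10(dead):
    # Stripe of two mirrors: the array lives while each mirror (A/B and C/D)
    # still has at least one working drive.
    mirror1_ok = "A" not in dead or "B" not in dead
    mirror2_ok = "C" not in dead or "D" not in dead
    return mirror1_ok and mirror2_ok

for name, check in (("RAID 0+1", survives_raid01), ("RAID 10", survives_raid10)):
    ok = sum(check(set(pair)) for pair in combinations(drives, 2))
    print(f"{name}: survives {ok} of 6 possible two-drive failures")

Either layout survives any single drive failure; the difference only shows up if a second drive dies before the first is replaced (2 of 6 survivable cases for the mirror of stripes, 4 of 6 for a stripe of mirrors).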

 


I'd stop using RAID on those cheap cards; it normally sucks. They also use the CPU anyway, so you're not saving CPU power either. Also don't use the motherboard RAID; it's bad as well.

 

Then just use software RAID 10 for the whole thing.

 

Also make sure you have backups as well.

 

 


Sounds like a lot of needless complexity. I'd just get an NVMe drive and cut down on the points of potential failure.


3 minutes ago, dizmo said:

Sounds like a lot of needless complexity. I'd just get an NVMe drive and cut down on the points of potential failure.

There are 4 NVMe drives I already have, so it is easier to use what I have than to get something else. I've used the adapters and drives in RAID 1 (2 M.2 drives per array).


2 minutes ago, ThePurrringWalrus said:

There are 4 NVMe drives I already have, so it is easier to use what I have than to get something else. I've used the adapters and drives in RAID 1 (2 M.2 drives per array).

You have 8 drives in your system?


9 minutes ago, Electronics Wizardy said:

I'd stop using RAID on those cheap cards; it normally sucks. They also use the CPU anyway, so you're not saving CPU power either. Also don't use the motherboard RAID; it's bad as well.

Then just use software RAID 10 for the whole thing.

Also make sure you have backups as well.

 

 

Well, I have 2 free SATA ports and 4 drives. The hardware adapters have their own RAID controllers, independent from the rest of the system, so in that respect there is no CPU load from them. I can literally plug them into a different system and each adapter reads as a single drive. The mobo also has a standalone RAID controller.


2 minutes ago, dizmo said:

You have 8 drives in your system?

Yes, I have 4 RAID 1 arrays: one on my PCIe bus for the OS and software, two arrays to dump projects I'm working on, and another array of SSHDs for mass storage.


1 minute ago, ThePurrringWalrus said:

Well, I have 2 free SATA ports and 4 drives. The hardware adapters have their own RAID controllers, independent from the rest of the system, so in that respect there is no CPU load from them. I can literally plug them into a different system and each adapter reads as a single drive. The mobo also has a standalone RAID controller.

What model are those cards?

 

I'd turn off RAID on those cards and use them for JBOD. You really don't want those handling RAID.

 

What motherboard? Don't use the onboard RAID controller; it sucks.

 

Just use software RAID here; it's much better. What OS are you running?


1 hour ago, ThePurrringWalrus said:

Yes, I have 4 RAID 1 arrays: one on my PCIe bus for the OS and software, two arrays to dump projects I'm working on, and another array of SSHDs for mass storage.

Well, NVMe is way faster than spinning disks, but I stopped using RAID 1 due to constant writeback that made the system hang for 2-3 seconds.

What data do you have that is so critical you need redundant drives? It's usually a waste of time.

Go buy a NAS and put your four 860s in there in RAID 5, and use RAID 0 for your NVMe drives if you want a larger boot drive.

If you want a faster NAS, get 10Gb NICs for the NAS and for your computer, and connect them with a direct cable on a separate IP range, no gateway.

 


6 minutes ago, Robchil said:

Well, NVMe is way faster than spinning disks, but I stopped using RAID 1 due to constant writeback that made the system hang for 2-3 seconds.

What data do you have that is so critical you need redundant drives? It's usually a waste of time.

Go buy a NAS and put your four 860s in there in RAID 5, and use RAID 0 for your NVMe drives if you want a larger boot drive.

If you want a faster NAS, get 10Gb NICs for the NAS and for your computer, and connect them with a direct cable on a separate IP range, no gateway.

 

The purpose of the question was to use hardware I already have; I'm not looking for a new hardware solution. I am planning an upgrade later in the year, but right now I'm just trying to work with what I have. Additionally, my current setup is maxed out on expansion slots, so I don't have any room for a new NIC anyway.


I would just mount them as standard drives and software RAID them into RAID 5. At least you get 3TB of usable space and redundancy on the drives. Drop the RAID 1.
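
Quick sketch of the usable-space math for the four 1TB drives (this ignores filesystem and formatting overhead):

# Usable capacity of 4 x 1 TB drives under the layouts being discussed
# (ignores filesystem/metadata overhead).
drive_tb = 1.0
n = 4

layouts = {
    "RAID 0 (stripe, no redundancy)":      n * drive_tb,        # 4.0 TB
    "RAID 10 / RAID 0+1 (mirrored pairs)": n * drive_tb / 2,    # 2.0 TB
    "RAID 5 (single parity)":              (n - 1) * drive_tb,  # 3.0 TB
}

for name, usable in layouts.items():
    print(f"{name}: {usable:.1f} TB usable")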


1 minute ago, Robchil said:

I would just mount them as standard drives and software RAID them into RAID 5. At least you get 3TB of usable space and redundancy on the drives. Drop the RAID 1.

I'd go RAID 10 for performance; parity RAID isn't the best for performance.


1 minute ago, Electronics Wizardy said:

I'd go RAID 10 for performance; parity RAID isn't the best for performance.

It's NVMe drives... does it really matter?

 


Just now, Robchil said:

It's NVMe drives... does it really matter?

 

Yeah, parity RAID has issues like the write hole, and it can cause extra I/O when it's not needed.

Also, RAID is often slower than a single drive, especially with NVMe drives, because of the overhead.
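
A tiny sketch of what the write hole means. The byte values are made up, with two data blocks and one parity block standing in for a stripe:

# Sketch of the RAID 5 "write hole": parity is the XOR of the data blocks in
# a stripe, and a crash between the data write and the matching parity write
# leaves the two out of sync.
import functools

def parity(blocks):
    return bytes(functools.reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# A 3-drive stripe: two data blocks plus one parity block.
d0 = bytes([0x11, 0x22, 0x33, 0x44])
d1 = bytes([0xAA, 0xBB, 0xCC, 0xDD])
p  = parity([d0, d1])

# Update d0, but "crash" before the parity update lands.
d0_new = bytes([0x99, 0x88, 0x77, 0x66])
print("parity consistent before:", parity([d0, d1]) == p)      # True
print("parity consistent after :", parity([d0_new, d1]) == p)  # False, the write hole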

 


If you want them in a RAID 10 with those controllers, you will have to set both sets up in RAID 0 and mirror them in software anyway.

Mirroring will have delays doing mirror writebacks too.

Or, yeah, set them up in mirrors and stripe them in software. Wouldn't it be better to do it all in software then anyway? Less chance of conflicts at least; mixing RAID like that causes more problems than it solves.

 


Just now, Robchil said:

If you want them in a RAID 10 with those controllers, you will have to set both sets up in RAID 0 and mirror them in software anyway.

Mirroring will have delays doing mirror writebacks too.

Or, yeah, set them up in mirrors and stripe them in software. Wouldn't it be better to do it all in software then anyway? Less chance of conflicts at least; mixing RAID like that causes more problems than it solves.

 

Yeah, do it all in software; don't use the motherboard RAID or the cheap RAID cards. I'd just make a big software RAID 10.


33 minutes ago, Electronics Wizardy said:

Yeah, do it all in software; don't use the motherboard RAID or the cheap RAID cards. I'd just make a big software RAID 10.

I feel a need to ask for details. Is software really better, or just more convenient?

 

Let's look at hardware for a second.

 

In the late 1970s the IEEE defined the BIOS standard of hardware and machine code housed on an 8-pin EPROM. It stored information for the CPU and OS for addressing the system hardware.

 

As technology advanced, BIOS standard limitations were hit, so in the late 1980s and early 1990s the BIOS standard was revised for a 16-pin EPROM. It could store more data, but was functionally the same. The OS would make a hardware call to the CPU, and the BIOS (operating like a dumb switch) would pass data to a specific hardware address, all controlled by the CPU, taking up processing cycles and RAM. In later versions of Windows, the BIOS addressing was mirrored in system memory to increase performance.

 

Then in the 2000s, BIOS hit a hard wall. The new UEFI standard was developed, which was only a machine code standard, free of the BIOS hardware constraints.

 

One of the new features was to offload I/O calls from the CPU to the UEFI controller (processor). Instead of the OS making hardware calls through the CPU, it can call directly to the UEFI.

 

The UEFI works more like a router. It addresses all of the system hardware, and instead of the CPU and OS directing how the I/O happens, the UEFI takes the call and routes it to the appropriate hardware.

 

So, with UEFI, hardware can talk to hardware without the OS interpreting.

 

Effectively, with a LAN-enabled mobo, you can run a dumb terminal in UEFI without a CPU, with minimal RAM, and no mass storage.

 

Getting back to RAID: an onboard RAID controller has its own processor to handle I/O, and it is handled by the UEFI controller directly. So when there is an I/O call to mass storage, the CPU offloads it to the UEFI.

 

In software RAID, the CPU handles the I/O. In a 2-drive array, the OS and CPU send 2 I/O calls and have to do parity checks and 2 CRC calls to verify data integrity. The drive mapping is stored in RAM, and processor cycles are used to maintain the drive array structure.
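
Conceptually, a software mirror boils down to something like this toy sketch (the two .img paths are hypothetical stand-ins for the member drives, not anything from my actual setup):

# Toy model of a software RAID 1 write: the host issues the same write once
# per mirror member. The .img paths are hypothetical stand-ins for real drives.
import os

MEMBERS = ["member0.img", "member1.img"]

def mirrored_write(offset, data):
    for path in MEMBERS:
        mode = "r+b" if os.path.exists(path) else "w+b"
        with open(path, mode) as f:
            f.seek(offset)
            f.write(data)   # one I/O per member, issued by the CPU

def mirrored_read(offset, length):
    # A read only needs one member; a real implementation may balance between them.
    with open(MEMBERS[0], "rb") as f:
        f.seek(offset)
        return f.read(length)

mirrored_write(0, b"project scratch data")
print(mirrored_read(0, 20))

That duplicated write, plus the bookkeeping around it, is the work I'm talking about landing on the CPU and OS.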

 

In hardware control, the standalone processor maintains the drive array structure and does not need the CPU or system memory to control I/O.

 

So with hardware control, we cut out CPU processing cycles and RAM usage, and push I/O and data processing to special-function processors and controllers.

 

So how is using software, which uses up CPU processing and system memory but still has to go through the special-function processors and controllers, better?

 

Is it "really" better? Or is it just convenience? Using more resources for the same result, versus more time and effort setting up a more efficient system.


33 minutes ago, ThePurrringWalrus said:

I feel a need to ask for details. Is software really better, or just more convenient?

Better and more convenient. You can get data protection and faster rebuilds.

 

Also, this isn't real hardware RAID either; these chips are still using the CPU for all the RAID work.

 

Also, the amount of CPU overhead is tiny with modern chips.

 

34 minutes ago, ThePurrringWalrus said:

Getting back to RAID: an onboard RAID controller has its own processor to handle I/O, and it is handled by the UEFI controller directly. So when there is an I/O call to mass storage, the CPU offloads it to the UEFI.

The problem is you don't have a dedicated processor for RAID here. The CPU is still doing all the work with these cheap cards and the motherboard RAID; it's a driver doing the real work in the OS. You need a real RAID card if you want actual hardware RAID.

 

The thing is, you are still using software, but with those cards it's a proprietary format that will be much harder to recover from and slower than traditional software RAID.

 

You still haven't said what cards you're using, but most good RAID cards use SAS; if it only has 2 connectors and doesn't have SAS, it's probably not a real RAID card.


10 minutes ago, Electronics Wizardy said:

Better and more convenient. You can get data protection and faster rebuilds.

Also, this isn't real hardware RAID either; these chips are still using the CPU for all the RAID work.

Also, the amount of CPU overhead is tiny with modern chips.

The problem is you don't have a dedicated processor for RAID here. The CPU is still doing all the work with these cheap cards and the motherboard RAID; it's a driver doing the real work in the OS. You need a real RAID card if you want actual hardware RAID.

The thing is, you are still using software, but with those cards it's a proprietary format that will be much harder to recover from and slower than traditional software RAID.

You still haven't said what cards you're using, but most good RAID cards use SAS; if it only has 2 connectors and doesn't have SAS, it's probably not a real RAID card.

The cards are only designed to handle RAID 0 and RAID 1. I kinda think this is a trade-off made for form factor; they are designed to fit a 2.5" drive mount. They do have a 4-drive, 2-SATA-port, RAID 10 capable solution that fits a 3.5" drive mount, but in my system my only 3.5" mount is holding 2 SSHDs. The card is a SkyTech S322M225R, and RAID config is handled by hardware jumpers. Basically, it funnels 2 M.2 SATA SSDs into one SATA connector. So I take a data bandwidth hit here, but it is still faster than two spinning drives.
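
Ballparking that bandwidth hit (the per-drive and per-port numbers below are rough assumptions, not measurements):

# Ballpark of two SATA SSDs sharing one SATA III link. The ~550 MB/s per
# drive and ~560 MB/s usable per port figures are rough assumptions.
per_drive_mb_s = 550        # approx. sequential read of one SATA 860 EVO
sata3_port_mb_s = 560       # approx. usable SATA III throughput after overhead

striped_internally = 2 * per_drive_mb_s                  # inside the adapter
seen_by_host = min(striped_internally, sata3_port_mb_s)  # capped by one cable

print(f"Two drives striped on the adapter: up to ~{striped_internally} MB/s")
print(f"Delivered over the single SATA port: ~{seen_by_host} MB/s")

So sequentially each adapter mostly just saturates its one SATA port rather than doubling throughput, but that still beats spinning drives.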


18 minutes ago, Electronics Wizardy said:

Better and more convenient. You can get data protection and faster rebuilds.

Also, this isn't real hardware RAID either; these chips are still using the CPU for all the RAID work.

Also, the amount of CPU overhead is tiny with modern chips.

The problem is you don't have a dedicated processor for RAID here. The CPU is still doing all the work with these cheap cards and the motherboard RAID; it's a driver doing the real work in the OS. You need a real RAID card if you want actual hardware RAID.

The thing is, you are still using software, but with those cards it's a proprietary format that will be much harder to recover from and slower than traditional software RAID.

You still haven't said what cards you're using, but most good RAID cards use SAS; if it only has 2 connectors and doesn't have SAS, it's probably not a real RAID card.

Well, since I'm doing RAID in the hardware, the OS only sees the arrays as single drives. Even if the CPU is handling the I/O, it is still cutting out the OS's dual I/O calls and the stored array configuration in RAM. I would be interested to see some engineering data on the UEFI processor to see how much I/O offloading it does from the CPU.

 

For a RAID controller, we aren't talking about a whole lot of compute cycles, but by cutting out the OS management, we are freeing up memory blocks.

 

No matter how much RAM you have, it is still divided into minimum-sized memory blocks. If the block is 1 gig and it is only holding 15 kB, the whole block is used. Kinda like hard drive sectors.

 

Back in "THE DAY", large capacity drives were cool, but the sector size was freaking huge. A 15 kB file took a block the same way a 1 MB file took a block. It was possible to max out a drive with only 25% of the data capacity used if all of the data blocks were used. It took quite a while for WD to figure that out.
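
The waste is easy to put numbers on. The cluster sizes below are just illustrative picks, not from any specific filesystem in my setup:

# Slack space: a file occupies whole clusters, so a small file wastes the
# rest of its last cluster. Cluster sizes below are illustrative examples.
import math

def on_disk_kb(file_bytes, cluster_bytes):
    return math.ceil(file_bytes / cluster_bytes) * cluster_bytes // 1024

file_bytes = 15 * 1024  # the 15 kB file from above

for cluster_kb in (4, 32, 64):
    used_kb = on_disk_kb(file_bytes, cluster_kb * 1024)
    print(f"{cluster_kb} kB clusters: a 15 kB file takes {used_kb} kB on disk "
          f"({used_kb - 15} kB wasted)")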


16 minutes ago, ThePurrringWalrus said:

The cards are only designed to handle RAID 0 and RAID 1. I kinda think this is a trade-off made for form factor; they are designed to fit a 2.5" drive mount. They do have a 4-drive, 2-SATA-port, RAID 10 capable solution that fits a 3.5" drive mount, but in my system my only 3.5" mount is holding 2 SSHDs. The card is a SkyTech S322M225R, and RAID config is handled by hardware jumpers. Basically, it funnels 2 M.2 SATA SSDs into one SATA connector. So I take a data bandwidth hit here, but it is still faster than two spinning drives.

What controller?

 

6 minutes ago, ThePurrringWalrus said:

Well, since I'm doing RAID in the hardware, the OS only sees the arrays as single drives. Even if the CPU is handling the I/O, it is still cutting out the OS's dual I/O calls and the stored array configuration in RAM. I would be interested to see some engineering data on the UEFI processor to see how much I/O offloading it does from the CPU.

For a RAID controller, we aren't talking about a whole lot of compute cycles, but by cutting out the OS management, we are freeing up memory blocks.

No matter how much RAM you have, it is still divided into minimum-sized memory blocks. If the block is 1 gig and it is only holding 15 kB, the whole block is used. Kinda like hard drive sectors.

Back in "THE DAY", large capacity drives were cool, but the sector size was freaking huge. A 15 kB file took a block the same way a 1 MB file took a block. It was possible to max out a drive with only 25% of the data capacity used if all of the data blocks were used. It took quite a while for WD to figure that out.

This isn't hardware RAID. If you look at other software like mdadm on Linux, the OS can still see all the drives; it's just hiding them from you and showing them as one drive.

 

The CPU is still doing all the RAID work, not the chipset on the board. If you want real hardware RAID, you can get cards that will do it from companies like Broadcom (from the old LSI stuff).

 

IDK what you're going on about with sector size; I think you mean cluster size. Basically all drives are 512 B, though a few 4K drives are coming out now. Cluster size is all filesystem, and that's above the RAID and drive level.

 

 


1 minute ago, Electronics Wizardy said:

What controller?

 

This isn't hardware RAID. If you look at other software like mdadm on Linux, the OS can still see all the drives; it's just hiding them from you and showing them as one drive.

The CPU is still doing all the RAID work, not the chipset on the board. If you want real hardware RAID, you can get cards that will do it from companies like Broadcom (from the old LSI stuff).

IDK what you're going on about with sector size; I think you mean cluster size. Basically all drives are 512 B, though a few 4K drives are coming out now. Cluster size is all filesystem, and that's above the RAID and drive level.

 

 

Well, I think we got off point. The question was really: would it work?

 

But while we are off point....

 

I know Intel bakes a GPU into most of their CPUs. It still uses the CPU I/O and system memory. Most mobo manufacturers for mid-grade and above systems assume you will throw in a standalone GPU with its own VRAM.

 

So, if I were a good engineer and I knew I had a separate (if not great) processor on the CPU chip, could I handle my onboard RAID processing using the GPU?

 

We know NVIDIA has played with the concept of doing non-graphics processing on the GPU with the RTX audio AI. As an engineer, would it really be that hard to offload smaller-demand processes like I/O to the unused GPU on the CPU chip?

 

It would remove load from the primary CPU cores, but it really wouldn't degrade system performance, as you are already doing I/O calls from the OS through the CPU.


1 minute ago, ThePurrringWalrus said:

Well, I think we got off point. The question was really: would it work?

But while we are off point....

I know Intel bakes a GPU into most of their CPUs. It still uses the CPU I/O and system memory. Most mobo manufacturers for mid-grade and above systems assume you will throw in a standalone GPU with its own VRAM.

So, if I were a good engineer and I knew I had a separate (if not great) processor on the CPU chip, could I handle my onboard RAID processing using the GPU?

We know NVIDIA has played with the concept of doing non-graphics processing on the GPU with the RTX audio AI. As an engineer, would it really be that hard to offload smaller-demand processes like I/O to the unused GPU on the CPU chip?

It would remove load from the primary CPU cores, but it really wouldn't degrade system performance, as you are already doing I/O calls from the OS through the CPU.

It doesn't use the iGPU; the Intel RAID works the same on chips without an iGPU enabled. I think there would be a good amount of overhead doing it on the GPU, and it wouldn't help, as you don't need the performance.

 

Also, did you see this video from LTT? It's basically what you were trying to do, nested RAID; that's ugly, don't do it.

 

Back to the original question: you can't use the fake RAID on the motherboard with drives that aren't connected to the SATA ports on the board.

 

But using software RAID will be faster, easier to manage, and more reliable, and it lets you easily mix drives from different controllers.



1 minute ago, ThePurrringWalrus said:

Well, I think we got off point. The question was really: would it work?

But while we are off point....

I know Intel bakes a GPU into most of their CPUs. It still uses the CPU I/O and system memory. Most mobo manufacturers for mid-grade and above systems assume you will throw in a standalone GPU with its own VRAM.

So, if I were a good engineer and I knew I had a separate (if not great) processor on the CPU chip, could I handle my onboard RAID processing using the GPU?

We know NVIDIA has played with the concept of doing non-graphics processing on the GPU with the RTX audio AI. As an engineer, would it really be that hard to offload smaller-demand processes like I/O to the unused GPU on the CPU chip?

It would remove load from the primary CPU cores, but it really wouldn't degrade system performance, as you are already doing I/O calls from the OS through the CPU.

I started using computers in 1979. Hard drives started off using sectors (a division of the physical platter space); clusters are a better way to handle large amounts of data by incorporating multiple sectors. As hard drive data density increased, the physical sectors became less important and clusters replaced the concept. Sorry... very old lingo. Remember, I saw Star Wars in theaters before it was Episode IV.

 


So based on the responses: yes, it can work, but in the least efficient way possible.

 

But as an LTT fan, I'm going to figure out the most over-the-top way to either succeed or make something blow up, hopefully very dramatically. If only I had a host to drop everything before trying. Maybe I should use a 1200-watt standalone power supply and a cryogenic cooler... for reasons.

