
Moving a RAID array across Chipsets

As the title suggests, I'm wondering what the process is to move a RAID array from one motherboard chipset to another.

 

Currently I have a home media server running on the AMD platform, an A10-7870K specifically, on a Biostar motherboard with an onboard RAID 5 array across four 3TB WD HDDs for storage of the movies and TV shows I hold digitally (ripped from legally obtained copies, of course).

 

This setup is perfectly fine for me for now, but eventually (in like two years) I want to make the leap to Intel (probably an X99 setup, for the core count). My questions are these:

 

A - What will happen to the RAID array when I do this?

B - If I need to rebuild the RAID, will that wipe the contents currently stored on it?

C - What is the best way to migrate a RAID array to a new platform?

Linux Daily Driver:

CPU: R5 2400G

Motherboard: MSI B350M Mortar

RAM: 32GB Corsair Vengeance LPX DDR4

HDD: 1TB POS HDD from an old Dell

SSD: 256GB WD Black NVMe M.2

Case: Phanteks Mini XL DS

PSU: 1200W Corsair HX1200

 

Gaming Rig:

CPU: i7 6700K @ 4.4GHz

Motherboard: Gigabyte Z270-N Wi-Fi ITX

RAM: 16GB Corsair Vengeance LPX DDR4

GPU: Asus Turbo GTX 1070 @ 2GHz

HDD: 3TB Toshiba something or other

SSD: 512GB WD Black NVMe M.2

Case: Shared with Daily - Phanteks Mini XL DS

PSU: Shared with Daily - 1200W Corsair HX1200

 

Server:

CPU: Ryzen 7 1700

Motherboard: MSI X370 SLI Plus

RAM: 8GB Corsair Vengeance LPX DDR4

GPU: Nvidia GT 710

HDD: 1x 10TB Seagate IronWolf NAS drive, 4x 3TB WD Red NAS drives

SSD: Adata 128GB

Case: NZXT Source 210 (white)

PSU: EVGA 650 G2 80Plus Gold


This is one of the biggest problems with using onboard RAID: it cannot be transferred across to a new board. You will need to offload any data you want to keep, move the drives over to the new board, rebuild the array (this will wipe the drives), and then load the data back onto the array.

 

Unfortunately, there's no way around this with onboard RAID.

 

EDIT: Ideally, you should have backups of the data anyway, so this isn't such a big problem if you do. However, a lot of people don't have backups (especially when they use RAID, mistaking it for an alternative to a backup).


I was afraid of this. Would it be worth it to buy an identical set of four 3TB drives and just copy the data, or to use an intermediary like CrashPlan, in your opinion? I don't mind having extra drives around.


2 minutes ago, MedievalMatt said:

I was afraid of this. Would it be worth it to buy an identical set of four 3TB drives and just copy the data, or to use an intermediary like CrashPlan, in your opinion? I don't mind having extra drives around.

Do you have a backup solution in place? By the sounds of things, you don't, or you wouldn't need new drives.

 

If not, I would advise that you invest in one now; then you can use the backup to move the data across once you rebuild the array. The solution you go for is really up to you. A single 10TB drive would cover the whole capacity of the array with 1TB to spare and should be cheaper than buying four more 3TB drives.
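For reference, the capacity math behind that: RAID 5 gives you (n - 1) drives' worth of usable space, since one drive's worth goes to parity. A quick sketch, pure arithmetic using the drive counts from this thread:

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity.
def raid5_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    return (drive_count - 1) * drive_size_tb

print(raid5_usable_tb(4, 3.0))  # the 4x 3TB array -> 9.0 TB usable
```

Hence a single 10TB drive covers the 9TB array with roughly 1TB to spare.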


13 minutes ago, Oshino Shinobu said:

Do you have a backup solution in place? By the sounds of things, you don't, or you wouldn't need new drives.

 

If not, I would advise that you invest in one now; then you can use the backup to move the data across once you rebuild the array. The solution you go for is really up to you. A single 10TB drive would cover the whole capacity of the array with 1TB to spare and should be cheaper than buying four more 3TB drives.

I would ideally want to go with something offsite, probably CrashPlan, in the next few months. It's not data that is mission-critical for me, but it would take a ton of time and be a PITA to re-rip (I have 10 months invested in ripping 8TB worth of movies and TV shows, lol). But I think that's what I'll do then. Thanks, mate.


This is one of the reasons I've abandoned RAID in favor of Windows Storage Spaces; it's caused me a lot fewer headaches :)

Personally, I mirror all my data to a NAS with 3x 3TB drives every night, with robocopy running as a daily scheduled task. If you don't mind investing in the extra drives, grabbing a cheap NAS may be a good option, depending on your preferences.
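For anyone wanting to replicate that, here is a minimal sketch of such a nightly mirror job, written in Python for readability; the paths are hypothetical, and /MIR, /R, /W, and /LOG are standard robocopy switches:

```python
import subprocess

# Hypothetical paths; adjust for your own array and NAS share.
SOURCE = r"D:\Media"
DEST = r"\\NAS\Backup\Media"
LOG = r"C:\Logs\nightly-mirror.log"

# /MIR mirrors the whole tree (including deletions), /R:2 /W:5 keep
# retries short, and /LOG writes a report you can check in the morning.
result = subprocess.run(
    ["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5", f"/LOG:{LOG}"]
)

# Robocopy exit codes 0-7 are success variants; 8 and up indicate failures.
if result.returncode >= 8:
    raise SystemExit(f"Mirror failed, robocopy exit code {result.returncode}")
```

Registered as a daily task in Task Scheduler, that gets you the same nightly mirror.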

 


If it were Intel chipset to Intel chipset you would probably have been fine, but AMD to Intel, not so much. Once you have a backup of the data, I'd give it a try anyway; cross-platform RAID migration can work, but only if just the very basic features have been used. I've done it between different LSI cards with different OEM firmware before, and from Adaptec to LSI, but I've also had it fail.


2 minutes ago, leadeater said:

If it were Intel chipset to Intel chipset you would probably have been fine, but AMD to Intel, not so much. Once you have a backup of the data, I'd give it a try anyway; cross-platform RAID migration can work, but only if just the very basic features have been used. I've done it between different LSI cards with different OEM firmware before, and from Adaptec to LSI, but I've also had it fail.

AFAIK, even going from Intel to Intel, there's no way to actually migrate an onboard RAID setup. If it were dedicated hardware or software RAID, it would be different, but onboard is generally kind of crap if you want to move anything, or really do anything other than have some redundancy.


11 minutes ago, Oshino Shinobu said:

AFAIK, even going from Intel to Intel, there's no way to actually migrate an onboard RAID setup. If it were dedicated hardware or software RAID, it would be different, but onboard is generally kind of crap if you want to move anything, or really do anything other than have some redundancy.

It should still work; every disk in a RAID array gets a header written to it with the full array configuration, which is used to health-check, rebuild, or migrate the array. I have zero trust in onboard RAID though, so my onboard RAID use has been extremely limited: about a week's worth, years ago, when I was running 4x WD VelociRaptors. Doing large transfers caused the system to become unstable due to the chipset load; I chucked in a spare hardware card and had no more problems :P.

 

P.S. I'm using "should" in the sense that I have no idea beyond the fact that it technically should; the likelihood is a different story.
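For the curious, one way to actually look at that on-disk metadata is from a Linux live USB with mdadm, which understands Intel's IMSM format as well as native md superblocks (whether it can parse AMD's onboard format is another question). A sketch, with hypothetical device names:

```python
import subprocess

# Hypothetical device names; check lsblk on your own system first.
for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]:
    print(f"--- {dev} ---")
    # --examine dumps the RAID superblock/metadata stored on the member disk
    subprocess.run(["mdadm", "--examine", dev], check=False)
```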

 


13 hours ago, Cylon Muffins said:

This is one of the reasons I've abandoned RAID in favor of Windows Storage Spaces; it's caused me a lot fewer headaches :)

Personally, I mirror all my data to a NAS with 3x 3TB drives every night, with robocopy running as a daily scheduled task. If you don't mind investing in the extra drives, grabbing a cheap NAS may be a good option, depending on your preferences.

 

Windows Storage Spaces is just software RAID, you know that, right? I agree that hardware RAID is a thing of the past, however; everything should move to software :P

Intel i9-9900K (5GHz) | Asus ROG Maximus XI Formula | Corsair Vengeance 16GB DDR4-4133MHz | ASUS ROG Strix 2080 Ti | EVGA Supernova G2 1050W 80+ Gold | Samsung 950 Pro M.2 (512GB) + (1TB) | Full EK custom water loop | IN-WIN S-Frame (No. 263/500)


23 minutes ago, Altecice said:

Windows Storage Spaces is just software RAID, you know that, right? I agree that hardware RAID is a thing of the past, however; everything should move to software :P

And what if your Windows install becomes corrupted and it borks the array?

Quote and/or tag people using @ otherwise they don't get notified of your response!

 

The HUMBLE Computer:

AMD Ryzen 7 3700X • Noctua NH-U12A • ASUS STRIX X570-F • Corsair Vengeance LPX 32GB (2x16GB) DDR4 3200MHz CL16 • GIGABYTE Nvidia GTX1080 G1 • FRACTAL DESIGN Define C w/ blue Meshify C front • Corsair RM750x (2018) • OS: Kingston KC2000 1TB GAMES: Intel 660p 1TB DATA: Seagate Desktop 2TB • Acer Predator X34P 34" 3440x1440p 120 Hz IPS curved Ultrawide • Corsair STRAFE RGB Cherry MX Brown • Logitech G502 HERO / Logitech MX Master 3

 

Notebook:  HP Spectre x360 13" late 2018

Core i7 8550U • 16GB DDR3 RAM • 512GB NVMe SSD • 13" 1920x1080p 120 Hz IPS touchscreen • dual Thunderbolt 3


17 minutes ago, vojta.pokorny said:

And what if your Windows install becomes corrupted and it borks the array?

That can be said of anything: hardware RAID, Linux/BSD, Windows. Windows does have ReFS now, so you aren't going to easily kill the entire array, not without something that would equally kill a ZFS array, i.e. multiple disk controller failures.


40 minutes ago, leadeater said:

That can be said of anything: hardware RAID, Linux/BSD, Windows. Windows does have ReFS now, so you aren't going to easily kill the entire array, not without something that would equally kill a ZFS array, i.e. multiple disk controller failures.

Meaning, if the options were software RAID 5 in Windows Storage Spaces, software RAID 5 through Disk Management, or RAID 5 on the Intel onboard controller, which one would you recommend? I'm curious because I will be making some changes to my storage configuration in about six months.


14 hours ago, leadeater said:

It should still work; every disk in a RAID array gets a header written to it with the full array configuration, which is used to health-check, rebuild, or migrate the array. I have zero trust in onboard RAID though, so my onboard RAID use has been extremely limited: about a week's worth, years ago, when I was running 4x WD VelociRaptors. Doing large transfers caused the system to become unstable due to the chipset load; I chucked in a spare hardware card and had no more problems :P.

 

P.S. I'm using "should" in the sense that I have no idea beyond the fact that it technically should; the likelihood is a different story.

 

 

I did have issues with transfers over about 300GB at a time to the array when I was initially ripping my content (I have a separate machine for that). The RAID would appear to the network as offline for just long enough for Windows to fail the transfer and make me restart it. Now that the data is there, I haven't had a single issue.


3 hours ago, vojta.pokorny said:

Meaning, if the options were software RAID 5 in Windows Storage Spaces, software RAID 5 through Disk Management, or RAID 5 on the Intel onboard controller, which one would you recommend? I'm curious because I will be making some changes to my storage configuration in about six months.

 

I would aim for RAID 10 if you can.

Software RAID arrays are easy to migrate from system to system; you are not bogged down by hardware incompatibilities. I have tested onboard "soft RAID" vs. WSS, and you will get marginally better speeds letting Windows look after the array. I haven't had any experience moving a Windows-defined array to another install/system, however.
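For what it's worth, and untested on my end, so treat it as a sketch: after moving the drives, a Storage Spaces pool is supposed to show up on the new install attached read-only, and Get-StoragePool/Set-StoragePool are the documented cmdlets for inspecting and bringing it online. From Python, shelling out to PowerShell:

```python
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command (the attach step needs an admin prompt)."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command])

# Inspect the pools and virtual disks that came over with the drives.
ps("Get-StoragePool -IsPrimordial $false")
ps("Get-VirtualDisk")

# A migrated pool typically attaches read-only; this clears the flag.
ps("Get-StoragePool -IsPrimordial $false | Set-StoragePool -IsReadOnly $false")
```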

 

 

39 minutes ago, MedievalMatt said:

 

I did have issues with transfers over about 300GB at a time to the array when I was initially ripping my content (I have a separate machine for that). The RAID would appear to the network as offline for just long enough for Windows to fail the transfer and make me restart it. Now that the data is there, I haven't had a single issue.

 

This is common when Windows runs out of RAM to cache into. You will find, if you send large amounts of data to your array, that your modified RAM will be your buffer; once that buffer hits around 75% of total memory, it will throttle back to disk access speed. I should add that the modified RAM cache is not shown as "used" memory unless viewed in Task Manager/Resource Monitor.
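If you want to watch that happen, here is a rough sketch using psutil (a third-party library, installed with pip install psutil) to poll memory headroom during a big transfer; the write cache shows up as shrinking available memory rather than as plain "used":

```python
import time

import psutil  # third-party: pip install psutil

# Poll free memory while a large transfer runs; watch "available" shrink
# as the modified-page write cache fills, then throughput drop to disk speed.
for _ in range(12):
    mem = psutil.virtual_memory()
    print(f"available: {mem.available / 2**30:5.1f} GiB, used: {mem.percent:.0f}%")
    time.sleep(5)
```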


4 hours ago, vojta.pokorny said:

Meaning, if the options were software RAID 5 in Windows Storage Spaces, software RAID 5 through Disk Management, or RAID 5 on the Intel onboard controller, which one would you recommend? I'm curious because I will be making some changes to my storage configuration in about six months.

Storage Spaces. I wouldn't even consider onboard RAID 5 an option ;). It's fine for RAID 0 or 1.


1 hour ago, Altecice said:

 

This is common when Windows runs out of RAM to cache into. You will find, if you send large amounts of data to your array, that your modified RAM will be your buffer; once that buffer hits around 75% of total memory, it will throttle back to disk access speed. I should add that the modified RAM cache is not shown as "used" memory unless viewed in Task Manager/Resource Monitor.

This is odd, because the systems I was using both had 16GB of DDR3 and it didn't "fill up" that space. The server had maybe a 20% RAM usage increase, but didn't cache anything beyond that.

 

I chose RAID 5 because I wanted a read performance boost with some redundancy, so I can have more than one uncompressed Blu-ray stream going at a time but also be good to go if a drive fails.
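As a back-of-envelope for why RAID 5 reads faster: data is striped across all the drives, so large sequential reads approach (n - 1) times one drive's throughput. The per-drive figure below is an assumption, not a measurement:

```python
# Rough sequential-read estimate for RAID 5: each stripe holds one parity
# chunk, so effective throughput is about (n - 1) x a single drive's rate.
single_drive_mb_s = 150   # assumed for a 3TB WD Red; not measured
n_drives = 4

print((n_drives - 1) * single_drive_mb_s)  # ~450 MB/s best-case aggregate
```

That is plenty of headroom for several Blu-ray streams, which each need only tens of megabits per second.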


@MedievalMatt How good of an internet connection do you have? If you have good speeds, CrashPlan is a pretty affordable backup option that I would highly recommend.

Main Rig: i7-4790 | GTX 1080 | 32GB RAM

Laptop: 2016 Macbook Pro 15" w/ i7-6820HQ, RX 455, 16GB RAM

Others: Apple iPhone XS, ATH-M50X, Airpods, SE215


I have FiOS, currently 150/150 (internally the network is 100% gigabit). I already did the math: the 8TB I have will take something like 39 days. It will actually take more like four months to back up the whole thing, because I'd run it at night so it doesn't interfere with things during the day (we have the equivalent of five HD video streams going at any given time from 10AM to midnight most days).

 

I LOL'd at that, but it's doable.
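Worked through, for anyone checking the math: 8TB is 6.4e13 bits, which at the full 150 Mbps line rate would go up in about five days; the 39-day figure therefore implies an effective upload rate near 19 Mbps, presumably due to throttling and overhead on the backup service's side:

```python
# Transfer-time arithmetic for the 8TB library (decimal units throughout).
bits = 8e12 * 8                    # 8 TB = 6.4e13 bits

print(bits / 150e6 / 86400)        # ~4.9 days at the full 150 Mbps line rate
print(bits / (39 * 86400) / 1e6)   # a 39-day upload implies ~19 Mbps effective
```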


7 minutes ago, MedievalMatt said:

-snip-

Yeah, I had a RAID 10 array originally on my hardware RAID card, but you can't expand it, and to make a larger array you need somewhere to dump the data. I ended up buying two WD Reds to use as a temp drive. I now have six WD Reds in a RAID 6 that's for backup, since I couldn't think of any other use for the Reds, haha. My main array is a RAID 6 of 15 drives.

 

At least you have 150/150. I have 75/75, and I can't imagine waiting that long to upload that much data (that, and I don't trust the cloud much)... you also have to count the time to download the data back, too.


4 minutes ago, scottyseng said:

Yeah, I had a RAID 10 array originally on my hardware RAID card, but you can't expand it, and to make a larger array you need somewhere to dump the data. I ended up buying two WD Reds to use as a temp drive. I now have six WD Reds in a RAID 6 that's for backup, since I couldn't think of any other use for the Reds, haha. My main array is a RAID 6 of 15 drives.

 

At least you have 150/150. I have 75/75, and I can't imagine waiting that long to upload that much data (that, and I don't trust the cloud much)... you also have to count the time to download the data back, too.

That would be about an eight-month ordeal to get the data up and back... LOL. I'm just gonna buy a couple of 5TB drives and be done with it, I think.

 

Plus, it's all Blu-ray and DVD rips, which are... in a grey area of legality, let's say. I still have the physical media, so I can replace any data lost.


With anything more than a day, I start to get worried about external factors (network outage, power outage, etc.) interfering. I've also had Windows updates restart my PC in the middle of a 300GB file transfer before, on multiple occasions.


On 1/23/2017 at 6:16 PM, Oshino Shinobu said:

This is one of the biggest problems with using onboard RAID: it cannot be transferred across to a new board.

You can totally do that in some cases. I moved a pair of RAID 1 arrays from P67 to Z77 and finally to X79, and one of the RAID 1 arrays was the boot volume; it migrated just fine.

 

...That said, that was all while staying within the Intel family; I have no idea about AMD to Intel.


For Supermicro boards, for example, anything on the C600- or C200-series chipsets with onboard RAID capability can handle system-to-system RAID transfers with data integrity intact. That's prosumer gear, though; consumer-grade onboard RAID sounds a bit janky, IMO.

