
LSI MegaRAID 9240-8i SAS/SATA help

Hi there, I've been looking to get a RAID card (a cheap one), but I just don't know if this card will work with my PC.

So will the LSI MegaRAID 9240-8i SAS/SATA work with a 6700K and an Asus Z170-DELUXE?

Thanks


14 minutes ago, agccga said:

Hi there, I've been looking to get a RAID card (a cheap one), but I just don't know if this card will work with my PC.

So will the LSI MegaRAID 9240-8i SAS/SATA work with a 6700K and an Asus Z170-DELUXE?

Thanks

In short: if you're using a low-end RAID card (one without cache), do yourself a favor and switch to software RAID. If you're using a mid-to-high-end card (with a BBU or NVRAM), then hardware RAID is often (but not always! see below) a good choice.

 

Long answer: when computing power was limited, hardware RAID cards had the significant advantage of offloading parity/syndrome calculation for the RAID schemes that need it (RAID 3/4/5, RAID 6, etc.).

However, with ever-increasing CPU performance, this advantage has basically disappeared: even my laptop's ancient CPU (a Core i5 M 520, Westmere generation) has XOR performance of over 4 GB/s and RAID-6 syndrome performance of over 3 GB/s on a single execution core.
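
If you want to sanity-check that claim on your own hardware, here is a rough, unscientific sketch (my own example - the kernel's hand-tuned SSE/AVX parity kernels will be faster still) that measures single-core XOR throughput with numpy:

import numpy as np
import time

# Two 256 MiB buffers standing in for two data chunks of a stripe.
size = 256 * 1024 * 1024
a = np.random.randint(0, 255, size, dtype=np.uint8)
b = np.random.randint(0, 255, size, dtype=np.uint8)
out = np.empty_like(a)

start = time.perf_counter()
np.bitwise_xor(a, b, out=out)   # RAID-5 style parity of the two chunks
elapsed = time.perf_counter() - start
print(f"single-core XOR throughput: {size / elapsed / 1e9:.2f} GB/s")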

The advantage that hardware RAID maintains today is the presence of a power-loss-protected DRAM cache, in the form of a BBU or NVRAM. This protected cache gives very low latency for random write access (and for reads that hit it) and basically transforms random writes into sequential writes. A RAID controller without such a cache is near useless. Moreover, some low-end RAID controllers not only come without a cache, but forcibly disable the disk's private DRAM cache, leading to slower performance than with no RAID card at all. An example is DELL's PERC H200 and H300 cards: unless newer firmware has changed that, they totally disable the disk's private cache (and it cannot be re-enabled while the disks are connected to the RAID controller). Do yourself a favor and do not, ever, buy such controllers. While even higher-end controllers often disable the disk's private cache, they at least have their own protected cache - making the HDD's (but not the SSD's!) private cache somewhat redundant.
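
To see why that cache matters so much, here is a toy sketch (purely illustrative, not how any real firmware is written) of what a write-back cache does: it acknowledges random writes immediately from protected DRAM, then destages them sorted by LBA so the disks see a mostly sequential pass:

import random

cache = {}                          # lba -> data: the protected DRAM write-back cache
for _ in range(1000):               # random writes are acknowledged as soon as they hit DRAM
    lba = random.randrange(1_000_000)
    cache[lba] = b"x" * 4096

destage_order = sorted(cache)       # later, flush to disk in LBA order (elevator-style)
print(f"{len(cache)} random writes absorbed, destaged as one sorted pass over the disks")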

This is not the end, though. Even capable controllers (the ones with a BBU or NVRAM cache) can give inconsistent results when used with SSDs, basically because SSDs really need a fast private cache for efficient FLASH page programming/erasing. And while some (most?) controllers let you re-enable the disk's private cache (eg: the PERC H700/710/710P let the user re-enable it), if that private cache is not write-protected you risk losing data in case of power loss. The exact behavior really is controller- and firmware-dependent (eg: on a DELL S6/i with 256 MB of WB cache and the disk's cache enabled, I had no losses during multiple, planned power-loss tests), which creates uncertainty and much concern.

Open-source software RAID, on the other hand, is a much more controllable beast - its code is not enclosed inside proprietary firmware, and it has well-defined metadata formats and behaviors. Software RAID makes the (right) assumption that the disk's private DRAM cache is not protected, but at the same time that it is critical for acceptable performance - so it typically does not disable it; rather, it uses ATA FLUSH / FUA commands to be certain that critical data lands on stable storage. As it often runs from the SATA ports attached to the chipset southbridge, its bandwidth is very good and driver support is excellent.
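
To make the FLUSH / FUA point concrete, here is a minimal sketch (my own example, using a hypothetical journal.log file) of how an application asks for durability; the os.fsync() call is what ultimately makes the kernel - and md underneath it - issue a cache flush to the disks, instead of having to disable their write caches:

import os

# Append a commit record and force it onto stable storage.
with open("journal.log", "ab") as f:
    f.write(b"commit record\n")
    f.flush()             # push Python's userspace buffer into the kernel page cache
    os.fsync(f.fileno())  # kernel flushes the device write cache (FLUSH / FUA underneath)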

However, when used with mechanical HDDs, synchronized random write access patterns (eg: databases, virtual machines) will suffer greatly compared to a hardware RAID controller with a WB cache. On the other hand, when used with SSDs, software RAID often excels and gives results even higher than what is achievable with hardware RAID cards.

Also consider that software RAID implementations are not all created equal. Windows software RAID has a bad reputation performance-wise, and even Storage Spaces seems not too different. Linux MD RAID is exceptionally fast and versatile, but the Linux I/O stack is composed of multiple independent pieces that you need to understand carefully to extract maximum performance. ZFS RAIDZ is extremely advanced, but if not configured correctly it will give you very poor IOPS. Moreover, it needs a fast SLOG device for synchronous write handling (the ZIL).

Bottom line:

  1. if your workload is not sensitive to synchronized random writes, you don't need a RAID card
  2. if you need a RAID card, do not buy a RAID controller without a WB cache
  3. if you plan to use SSDs, software RAID is preferred. My first choice is Linux MD RAID. If you need ZFS's advanced features, go with RAIDZ, but think carefully about your vdev setup!
  4. if, even with SSDs, you really need a hardware RAID card, use SSDs with write-protected caches (the Micron M500/550/600 have partial protection - not really sufficient, but better than nothing - while the Intel DC and S series have full power-loss protection, and the same can be said for enterprise Samsung SSDs)
  5. if you need RAID 6 and you will use normal mechanical HDDs, consider buying a fast RAID card with 512 MB (or more) of WB cache. RAID 6 has a high small-write performance penalty (see the sketch after this list), and a properly sized WB cache can at least provide fast intermediate storage for small synchronous writes (eg: a filesystem journal)
  6. if you need RAID 6 with HDDs but you can't / don't want to buy a hardware RAID card, think carefully about your software RAID setup. For example, a possible solution with Linux MD RAID is to use two arrays: a small RAID 10 array for journal writes / DB logs, and a RAID 6 array for raw storage (as a fileserver). On the other hand, software RAID 5/6 with SSDs is very fast, so you probably don't need a RAID card for an all-SSD setup.
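
As a rough illustration of the RAID 6 small-write penalty mentioned in point 5, here is a back-of-the-envelope sketch (my own numbers, not a benchmark): a single small random write costs roughly 2 I/Os on RAID 10, 4 on RAID 5 and 6 on RAID 6, because of the read-modify-write of the data plus parity/syndrome blocks:

# Effective random-write IOPS for an 8-disk array of ~150 IOPS mechanical HDDs.
def effective_write_iops(disk_iops, disks, penalty):
    return disk_iops * disks / penalty

for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(f"{name}: ~{effective_write_iops(150, 8, penalty):.0f} write IOPS")

This gap is exactly what a controller's WB cache (or the small RAID 10 array from point 6) papers over for small synchronous writes.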

 


28 minutes ago, agccga said:

-snip-

It will probably work, but do note that "cheap" and "hardware RAID card" don't usually go in the same sentence, unless it's an entry-level model you shouldn't consider due to the lack of a battery-backed cache, or it's too outdated to be of use (SAS1 / 3Gb/s). Also note that SAS-to-SATA breakout cables aren't exactly cheap to buy.

 

Also, if you are getting one just because you need more ports, going with a software RAID solution and a SAS HBA card would work well for you.

 

I personally own an LSI MegaRAID 9260-8i CV, but that's because back when I bought it, I thought hardware RAID was the proper way to go, and ZFS and Btrfs hadn't hit the mainstream at the time. You also have to consider maintenance costs for the RAID card: usually the batteries only live 2-3 years. Mine died after 2 years, even with my light home use.

 

Finally, don't forget that if you go hardware RAID, you are locked into that manufacturer. For me, if my battery goes out again, I have no real choice but to buy a new RAID card (I was lucky to even find a used battery pack for mine on eBay).

 

19 minutes ago, Bwithnewcast said:

Even capable controllers (the ones with a BBU or NVRAM cache) can give inconsistent results when used with SSDs, basically because SSDs really need a fast private cache for efficient FLASH page programming/erasing.

Don't forget that most hardware RAID cards do not have TRIM support, so SSD arrays tend to slow down over time. Sadly, getting an SSD array to work properly usually involves buying a key to unlock some software feature of the card (I know LSI's CacheCade SSD caching has to be bought separately).


Maybe a bit more info will help:

I'm going to be running Windows 10.

I was looking for a RAID card that would not slow down the CPU/PC/GPU, as I'm using the machine to render etc. as well as a home NAS.

I did buy 3 HP P410 512MB cards, but I couldn't get them working on the board.


4 minutes ago, agccga said:

Maybe a bit more info will help:

I'm going to be running Windows 10.

I was looking for a RAID card that would not slow down the CPU/PC/GPU, as I'm using the machine to render etc. as well as a home NAS.

I did buy 3 HP P410 512MB cards, but I couldn't get them working on the board.

Either you have to invest a lot of money in a real RAID setup, or you can use software RAID, which I have on a PC with a Core 2 Duo and it is not making things worse.



OK, so what RAID card would work on Windows 10 then? And for software RAID I need SFF-8087 SAS ends.


5 hours ago, agccga said:

OK, so what RAID card would work on Windows 10 then? And for software RAID I need SFF-8087 SAS ends.

You need SFF-8087 to 4x SATA breakout cables for any SAS HBA or RAID card, whether you are doing hardware RAID or software RAID.

 

Have you considered using Storage Spaces with a two-way mirror? Unlike its name suggests, it supports many disks, not just two, and for simplicity's sake it works like RAID 10.

 

As for the HP P410s you couldn't get working: that is rather common on consumer desktop motherboards. They just aren't designed to use RAID cards, and server RAID cards have picky firmware and boot hooks. You could try updating their firmware, but you'd actually have to get the card to boot in a system first, which is your problem in the first place.

 

HP cards are typically the least likely to work in a non-HP system; you'll have better luck with IBM/Lenovo/Dell cards than HP.


6 hours ago, agccga said:

So will the LSI MegaRAID 9240-8i SAS/SATA work with a 6700K and an Asus Z170-DELUXE?

Not necessarily - I recall a client having issues trying to run a similar 92xx-series Intel-branded LSI card in a Z77 (or Z87?) Asus board a few years back. Put the thing in a Gigabyte board and it worked without issue.

 

As explained in much greater detail above me, why do you require a RAID controller?


Thank you. I have a 24-bay case and all the bits, but no RAID card. I'm really looking for RAID 5 or 6, and I don't want my PC/CPU/GPU slowing down - that's why I thought hardware RAID.

 

16473240_10211933011310441_4331863636236684000_n.jpg

16508554_10211933011190438_1491271529049100797_n.jpg


But if software RAID is good, then what card could I use? That's where I'm stuck. I have about £100 to £150 to spend on the last bits.


22 minutes ago, Windspeed36 said:

-snip-

I ran an LSI 9260-8i RAID card in both my P67 and Z77 Asus boards without issue. I think you have the best chance of it working if it's an LSI card or made by the board brand itself. However, I do know that some consumer boards will refuse to pick up RAID cards...

 

18 minutes ago, agccga said:

But if software RAID is good, then what card could I use? That's where I'm stuck. I have about £100 to £150 to spend on the last bits.

If you want to use software RAID, just use a SAS HBA card. It'll give you more ports, and it's pretty much just a pass-through SAS card.

I'm very doubtful that your system couldn't manage a RAID configuration and render at the same time.

 

Question though, what ports are on the backplane of that server chassis? Or what model is that server chassis?


I was looking at the LSI 9210 and LSI 9211, but they're more money than the LSI 9240 - unless you know a cheaper one?

 

This is the case I've got:

Don't know if that helps.

 

 

 

s-l1600.jpg

s-l1600 (1).jpg

s-l1600 (2).jpg


I 3D-printed the middle bit so I could put a 240mm rad and some fans on the side, etc.


1 hour ago, agccga said:

I was looking at the LSI 9210 and LSI 9211, but they're more money than the LSI 9240 - unless you know a cheaper one?

Have a look for an IBM M1015, either already flashed to IT mode or you can do that yourself - they're very common on eBay.


17 hours ago, Bwithnewcast said:

-snip-

 

What he said.

 

Stay away from the cards without any cache.

 

Is there a SAS backplane in your server, or do you just direct-attach with SATA cables? If it's the latter, you have a problem.

 


The HDD bays use SFF-8087 connectors, so I would need a RAID card with SFF-8087 ports.


No - there are 6, but I only need to fill 4 or 5; 1 per 4 SATA HDDs.


Why that case? I always find it interesting that they are so popular; I normally stick to the Supermicro stuff for that sort of application.

 

Just get a decent 4-port RAID card - an LSI 9260-4i or newer. You can pick up used CacheCade hardware keys cheap to add an SSD caching layer if you want better performance.

 

You will also need a SAS expander, however. There is an HP one that is really popular and supports 24 drives - plenty on eBay.

https://www.hpe.com/h20195/v2/getpdf.aspx/c04111623.pdf?ver=15

https://hardforum.com/threads/hp-sas-expander-owners-thread.1484614/ 

 

 


6 hours ago, Erkel said:

-snip-

 

 

But would a SAS expander slow things down by limiting PCIe bandwidth?

Like, if I had 4 RAID cards for 24 HDDs, that would be 4 PCIe slots' worth of bandwidth,

whereas if I had 1 RAID card and then 24 HDDs, that would only be 1 PCIe slot's worth?

 

I don't really know a lot about this - that's why I'm here. I'm getting nothing but problems :-/

But I will look for a cheap 9260 and a SAS expander.


2 hours ago, agccga said:

-snip-

Trust me, one HBA card and a SAS expander will easily handle 24 drives. A two-port SAS HBA card has eight lanes of 6Gb/s. It takes quite a large number of hard drives to bog that down.
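
A rough back-of-the-envelope to show why (my own illustrative numbers, assuming SAS-2 line rates and 8b/10b encoding):

# Usable bandwidth of an 8-lane 6Gb/s (SAS-2) wide port shared by 24 HDDs.
lanes = 8
line_rate_gbps = 6.0                                # per-lane line rate
usable_mb_per_lane = line_rate_gbps * 1000 / 10     # 8b/10b: 10 bits on the wire per data byte
total_mb = lanes * usable_mb_per_lane               # ~4800 MB/s across the wide port
drives = 24
print(f"~{total_mb:.0f} MB/s total, ~{total_mb / drives:.0f} MB/s per drive with all {drives} streaming")

That works out to roughly 200 MB/s per drive even with every disk streaming at once, which is about as fast as a typical 7,200 rpm HDD goes anyway.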

 

You don't need an HBA card per eight hard drives... that's simply overkill.

 

I have 15 drives on my RAID card and exceed 1GB/s...

 

9 hours ago, Erkel said:

Why that case? I always find it interesting that they are so popular; I normally stick to the Supermicro stuff for that sort of application.

Yeah, I have to admit, Supermicro spoiled me with the built-in SAS expander for my home NAS.
