
Asus' ROG RAIDR PCIe SSD

andy_composer

When is Asus' ROG RAIDR PCIe SSD coming out? Every site says May 2013.

 

I want one! :D

 

Motherboard ASUS Dual Scalable Xeon WS C621E SAGE EEB Workstation Motherboard

 CPU - Dual Intel 16 Core Xeon Silver 4216 2nd Gen Scalable

 RAM - Samsung Server RAM 192GB 2666 MHz ECC RDIMM DDR4

Storage - 3 x Samsung 970 EVO PLUS 1TB M.2 NVMe SSD   /    6 x Samsung 870 EVO 500GB   /   2 x Samsung 870 EVO 1TB   /   1 x Seagate IronWolf 4TB

OS - Windows 10 for Workstation

Soundcard - RME HDSPe AIO 

Graphics - PNY NVIDIA T400 2GB

Case Fractal Design Define 7 XL Black

 


Yeah, me too...

If you tell a big enough lie and tell it frequently enough it will be believed.

-Adolf Hitler 


I hate my OCZ RevoDrive; this one would be a great replacement. One thing I'd like to know about this drive is whether it has activity LEDs on it. The blue LED indicator on the RevoDrive kills my black/red theme.

Main rig: i7 3770K @ 4.54, Sapphire R9 290, Sabertooth Z77, 16 GB Mushkin Redline 2133, Lian Li PC-P50R, Seasonic 860xp Platinum, Kingston Hyper X 3K 240GB

freeNAS server: AMD Athlon II 170u 20W, 5 x 3TB WD Red in raid-z1 (12 TB)

media centre: AMD A10-5700, crucial M4 (boot), running XBMC, 4 x 3TB WD Red, 3 x 3TB WD Green + 2TB Green in FlexRAID (17 TB)


I hate my OCZ RevoDrive; this one would be a great replacement. One thing I'd like to know about this drive is whether it has activity LEDs on it. The blue LED indicator on the RevoDrive kills my black/red theme.

What's wrong with the RevoDrive? I was actually considering buying one.


 


What's wrong with the RevoDrive? I was actually considering buying one.

 

Well, I guess people have different experiences with them. Personally, I had to RMA the first one and had to repair the RAID once (in 2 years). For my own aesthetic purposes, it has a blue LED activity indicator that is always on, so it ruins any colour scheme that isn't blue. I had a heck of a time doing a fresh install of Windows 7 on it because I had to hunt down the proper drivers. But PCIe SSDs were still foreign territory back then, and they've probably made some strides in the last couple of years. PCIe SSDs are expensive when you could RAID 0 two SSDs for cheaper and get better performance. But I guess instant fast speeds and the lack of cables are the big selling points... And I'm still interested in the Asus RAIDR whenever it comes out.



Well, I guess people have different experiences with them. Personally, I had to RMA the first one and had to repair the RAID once (in 2 years). For my own aesthetic purposes, it has a blue LED activity indicator that is always on, so it ruins any colour scheme that isn't blue. I had a heck of a time doing a fresh install of Windows 7 on it because I had to hunt down the proper drivers. But PCIe SSDs were still foreign territory back then, and they've probably made some strides in the last couple of years. PCIe SSDs are expensive when you could RAID 0 two SSDs for cheaper and get better performance. But I guess instant fast speeds and the lack of cables are the big selling points... And I'm still interested in the Asus RAIDR whenever it comes out.

Thanks for the info :)


 


PCIe SSDs are expensive when you could RAID 0 two SSDs for cheaper and get better performance.

 

You can't though. Striping together 2 SATA 6Gb/s SSDs gives you around 750MBps at most, as that is the bandwidth limit of SATA 3 internally (6 gigaBITS divided by 8 is 0.75 gigaBYTES). It is a small boost over a device type that normally does 550MBps. You do get more consistent performance though, as when one drive is in a dip the other can pick up the slack.

 

PCIe is capable of much higher speeds.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.


You can't though. Striping together 2 SATA 6Gb/s SSDs gives you around 750MBps at most, as that is the bandwidth limit of SATA 3 internally (6 gigaBITS divided by 8 is 0.75 gigaBYTES). It is a small boost over a device type that normally does 550MBps. You do get more consistent performance though, as when one drive is in a dip the other can pick up the slack.

 

PCIe is capable of much higher speeds.

Sorry, but there's a lot of misinformation here. Each device connected to a SATA 3 port gets a dedicated 6Gb/s "pipe", if you will, so two drives in RAID 0 on SATA 3 ports get you an aggregate theoretical bandwidth of 12Gb/s. People have achieved over 1GB/s on a 2-SSD RAID 0 array plenty of times.

 

Also, 6Gb/s is the signaling rate, not actual device bandwidth. The SATA interface uses 8b/10b encoding, meaning you only get 8/10 of the signaling rate as usable bandwidth, or 4.8Gb/s (which, when divided by 8, is 0.6GB/s or 600MB/s, not 0.75GB/s as you stated). If you don't believe me, read here http://en.wikipedia.org/wiki/8b/10b_encoding and note that Serial ATA is listed under technologies that use 8b/10b encoding.

 

So in reality, the maximum bandwidth of two SSDs in RAID 0, both connected to SATA 3 ports, is 9.6Gb/s, 1200MB/s, or 1.2GB/s. Granted, there is still a bit of overhead involved in sending SATA commands, which will cost you a slight amount of usable bandwidth, but it will still be over 1GB/s.
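If you want to sanity-check that arithmetic yourself, here's a minimal sketch in plain Python (the 6Gb/s signaling rate, 8b/10b efficiency and two-port RAID 0 are the figures from this post; command overhead is ignored):

```python
# Sanity check of the SATA 3 numbers above: 8b/10b encoding overhead per port,
# then the theoretical aggregate for two drives striped in RAID 0.

SIGNALING_GBPS = 6.0           # SATA 3 signaling rate per port (gigabits/s)
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b: 8 data bits for every 10 bits on the wire

usable_gbps_per_port = SIGNALING_GBPS * ENCODING_EFFICIENCY   # 4.8 Gb/s
usable_mbps_per_port = usable_gbps_per_port * 1000 / 8        # 600 MB/s

drives = 2                                                     # two SSDs in RAID 0
aggregate_mbps = usable_mbps_per_port * drives                 # 1200 MB/s

print(f"usable per port:  {usable_mbps_per_port:.0f} MB/s")    # -> 600 MB/s
print(f"RAID 0 aggregate: {aggregate_mbps:.0f} MB/s")          # -> 1200 MB/s (~1.2 GB/s)
```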

 

 

 

You do get more consistent performance though, as when one drive is in a dip the other can pick up the slack.

This is a misconception. PCIe only gets you more bandwidth and slightly lower latency, but ultimately it depends on the implementation. Most PCIe SSDs (including the RAIDR) simply use a PCIe SATA RAID controller that then connects to several SSD controllers, which basically amounts to several SATA SSDs in RAID 0 hanging off a hardware RAID controller. This is no different whatsoever from chipset RAID, since each SSD controller on the card is still limited to a SATA 3 connection to the RAID card.

 

The *only* advantage to this type of implementation is that it allows a drive to have more than 2 SSDs in RAID 0, but the more controllers you have on there, the greater the chance that one of them fails and kills your entire drive.
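To put rough numbers on that last point, here's a small sketch; the per-controller failure probability is purely an assumed, illustrative figure, not something from a datasheet:

```python
# Rough odds of losing a striped PCIe SSD as the controller count grows.
# RAID 0 has no redundancy, so one failed controller means the whole array is gone.

def array_failure_probability(p_controller: float, controllers: int) -> float:
    """Chance that at least one of `controllers` fails, assuming independent failures."""
    return 1 - (1 - p_controller) ** controllers

P = 0.03  # assumed per-controller failure probability over some period (illustrative only)
for n in (1, 2, 4, 8):
    print(f"{n} controller(s): {array_failure_probability(P, n):.1%} chance of losing the array")
# 1 -> 3.0%, 2 -> 5.9%, 4 -> 11.5%, 8 -> 21.6%
```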

 

The better alternative is a custom controller that interfaces directly between the flash and the PCIe bus; however, these are few and far between, almost always exorbitantly priced, and tend to be aimed at the enterprise market.

 

 

 

PCIe is capable of much higher speeds.

This is very true. However, the ROG RAIDR, the device in question, does not come even close to those speeds (it has been reviewed by a few sites:

http://us.hardware.info/reviews/4263/asus-raidr-express-240gb-pci-express-ssd-review-is-this-the-future

http://chinese.vr-zone.com/62325/asus-pcie-ssd-rog-raidr-express-240gb-hand-on-review-05072013/ )

 

The RAIDR is honestly an overpriced, underperforming, and undersized device that I see no reason for anyone to buy unless they want to waste money.

 

The key word here is capable. Yes, the PCIe 3.0 x16 bus is capable of roughly 256Gb/s of raw bidirectional bandwidth (well, a bit less once you factor in the 128b/130b encoding that PCIe 3.0 uses), but is there any drive that can leverage that? Nope. And the price of any drive that gets anywhere near that will be well over $10,000.
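For reference, that "capable" figure works out like this (8 GT/s per lane and 128b/130b encoding are straight from the PCIe 3.0 spec; this is just the arithmetic, not what any real drive delivers):

```python
# Raw vs usable bandwidth of a PCIe 3.0 x16 link.

GTPS_PER_LANE = 8.0              # PCIe 3.0 signaling: 8 GT/s per lane
LANES = 16
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b: only ~1.5% overhead

raw_gbps_per_direction = GTPS_PER_LANE * LANES                             # 128 Gb/s
usable_gbps_per_direction = raw_gbps_per_direction * ENCODING_EFFICIENCY   # ~126 Gb/s
usable_gBps_per_direction = usable_gbps_per_direction / 8                  # ~15.75 GB/s

print(f"raw, both directions:  {raw_gbps_per_direction * 2:.0f} Gb/s")     # -> 256 Gb/s
print(f"usable, per direction: {usable_gBps_per_direction:.2f} GB/s")      # -> 15.75 GB/s
```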

Workstation: 3930k @ 4.3GHz under an H100 - 4x8GB ram - infiniband HCA  - xonar essence stx - gtx 680 - sabretooth x79 - corsair C70 Server: i7 3770k (don't ask) - lsi-9260-4i used as an HBA - 6x3TB WD red (raidz2) - crucial m4's (60gb (ZIL, L2ARC), 120gb (OS)) - 4X8GB ram - infiniband HCA - define mini  Goodies: Røde podcaster w/ boom & shock mount - 3x1080p ips panels (NEC monitors for life) - k90 - g9x - sp2500's - HD598's - kvm switch

ZFS tutorial


You can't though. Striping together 2 SATA 6Gb/s SSDs gives you around 750MBps at most, as that is the bandwidth limit of SATA 3 internally (6 gigaBITS divided by 8 is 0.75 gigaBYTES). It is a small boost over a device type that normally does 550MBps. You do get more consistent performance though, as when one drive is in a dip the other can pick up the slack.

 

PCIe is capable of much higher speeds.

 

I don't disagree that the bandwidth on PCIe is definitely greater than RAIDed SATA 3 (6Gb/s). A PCIe 2.0 4x slot can deliver up to 2GB/s. But today's SSDs can get better performance in RAID 0 than today's PCIe SSDs not named RevoDrive 3 X2. You can check out Linus' old performance test of the regular Revo X2, which I own, which got up to 640/350 in CrystalDiskMark. There are other reviews on YouTube where they got up to 500MB/s write speed in CrystalDiskMark with the Revo X2. The price is $300 for the X2 and the spec sheet says it does up to 740/720 read and write. The Asus RAIDR is spec'd at up to 765/775 (just a bit more than the Revo X2), but we'll have to wait for the benchmarks. Linus also made a RAID 0 with 2 plain Intel 510 drives and got 990/400, and others have been getting over 1GB/s with higher-end SSDs. The RevoDrive 3 X2 is spec'd at 1500/1225, so no 2 drives are going to touch that, but it's a $600 card for 240GB.



A PCIe 2.0 4x slot can deliver up to 2GB/sec.

A lot of people aren't aware of this, but PCIe 2.0 uses 8b/10b encoding (read here http://en.wikipedia.org/wiki/8b/10b_encoding), which basically means that for every 10 bits sent across the bus, only 8 of them are actual data. That's why a 2.0 lane delivers 500MB/s of usable bandwidth rather than its 625MB/s raw signaling rate, and why a 2.0 4x slot tops out around 2000MB/s usable, a figure that can still be matched by 4 SATA SSDs in RAID 0. Granted, Z77 only has 2 native SATA 3 ports, so PCIe gets the bandwidth advantage there, but if you're using AMD, a board with a third-party SATA controller, a RAID controller, or a Z87 board, then 2 or 4 SSDs are the way to go if you want that kind of performance.
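The same per-lane arithmetic for PCIe 2.0, for anyone who wants to check it (5 GT/s per lane and 8b/10b are the spec figures; protocol overhead is ignored):

```python
# PCIe 2.0 lane bandwidth after 8b/10b encoding, and what an x4 slot works out to.

GTPS_PER_LANE = 5.0           # PCIe 2.0 signaling: 5 GT/s per lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding

usable_gbps_per_lane = GTPS_PER_LANE * ENCODING_EFFICIENCY        # 4 Gb/s
usable_mbps_per_lane = usable_gbps_per_lane * 1000 / 8            # 500 MB/s

lanes = 4
print(f"per lane: {usable_mbps_per_lane:.0f} MB/s")               # -> 500 MB/s
print(f"x{lanes} slot: {usable_mbps_per_lane * lanes:.0f} MB/s")  # -> 2000 MB/s
```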

 

My point is that PCIe SSDs are really nothing more than a novelty for the standard consumer, as their underwhelming speeds and huge price premiums can be bested by a few SATA drives. And the PCIe drives that really perform tend to cost well into five figures (and can still be beaten by SATA drives off a hardware RAID controller).



A lot of people aren't aware of this, but PCIe 2.0 uses 8b/10b encoding (read here http://en.wikipedia.org/wiki/8b/10b_encoding), which basically means that for every 10 bits sent across the bus, only 8 of them are actual data. That's why a 2.0 lane delivers 500MB/s of usable bandwidth rather than its 625MB/s raw signaling rate, and why a 2.0 4x slot tops out around 2000MB/s usable, a figure that can still be matched by 4 SATA SSDs in RAID 0. Granted, Z77 only has 2 native SATA 3 ports, so PCIe gets the bandwidth advantage there, but if you're using AMD, a board with a third-party SATA controller, a RAID controller, or a Z87 board, then 2 or 4 SSDs are the way to go if you want that kind of performance.

I did not know about the 8b/10b encoding, that's good to know! But I remember Linus with his 8 SSDs in RAID 0 getting about 1.3GB/s read or something like that. They were Vertex 2s, however.

 

edit: Just watched Linus' other demo with 4 Intel 510s, which got up to 1500MB/s read.



I did not know about the 8b/10b encoding, that's good to know! But I remember Linus with his 8 SSDs in RAID 0 getting about 1.3GB/s read or something like that. They were Vertex 2s, however.

Linus has refurbished Corsair SSDs :P which are bloody old and bloody slow, but if you throw enough of them at the problem you can get speeds like that.

 

They were also only of the 120GB flavor, crippling them even more.

 

Just for shits and giggles...

An LSI 9211-4i supports RAID 0 and costs $175: http://www.newegg.com/Product/Product.aspx?Item=N82E16816118114

A Samsung 840 Pro costs $250 and gets sequential read and write speeds over 500MB/s; 4 of them would cost about $1,000: http://www.newegg.com/Product/Product.aspx?Item=N82E16820147193

 

This setup would get you near 2GB/s reads and writes and would cost about $1175

 

A RevoDrive 3 x2 960GB costs about $1600 and has 1500MB/s reads and 1300MB/s writes advertised: http://www.newegg.com/Product/Product.aspx?Item=N82E16820227742

 

I dunno about you, but my money is on the RAID array with this one :P
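Spelling out that comparison in numbers (prices and advertised sequential reads are the ones quoted in this post, nothing else; capacities are roughly comparable at 4x256GB vs 960GB):

```python
# Cost per MB/s of advertised sequential read for the two options discussed above.

options = {
    "4x Samsung 840 Pro + LSI 9211-4i": {
        "cost_usd": 175 + 4 * 250,   # card + four drives = $1175
        "read_mbps": 4 * 500,        # ~500 MB/s per drive, striped
    },
    "RevoDrive 3 X2 960GB": {
        "cost_usd": 1600,
        "read_mbps": 1500,           # advertised sequential read
    },
}

for name, o in options.items():
    print(f"{name}: ${o['cost_usd']}, ~{o['read_mbps']} MB/s, "
          f"${o['cost_usd'] / o['read_mbps']:.2f} per MB/s")
# 4x 840 Pro + LSI:  $1175, ~2000 MB/s, $0.59 per MB/s
# RevoDrive 3 X2:    $1600, ~1500 MB/s, $1.07 per MB/s
```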

 

But, if we're throwing price completely out the window... go have a look at Fusion-io's products... 10TB SSD, anyone? http://www.fusionio.com/products/iodrive-octal/

 

Oh but it costs 100k...  :(



I don't know how the price/performance of it is going to stack up against a dedicated RAID card + standard SSDs, but damn, it looks good!

Intel 3960X (5GHz), Asus P9X79 Pro, KFA2 GTX580 SLI, Asus D2X, 16GB Dominator Platinum (1866Mhz), Corsair 600T, Corsair AX1200

 


I don't know how the price/performance of it is going to stack up against a dedicated RAID card + standard SSDs, but damn, it looks good!

Now that I most definitely can not argue with :P



  • 1 month later...
 
 
 
 

Why is there so much hype for these "PCIe" SSDs?

 

These things are nothing more than a pair of glorified SSDs with a kinda crappy (or, as Marvell puts it, a "cost-effective solution") SATA 6.0Gbps controller on a board connected to a PCIe slot. There's no magic to it; the RAIDR is just that, a couple of SF-2281-based SSDs wired to a Marvell 88SE9230 SATA 6.0Gbps PCIe controller, set to RAID 0.

 

A pair of good, proper SSDs in RAID 0 (for example, 840 Pros) is, in every possible way, better than this:

 - They are faster than this device. A RAID0 of 840 Pros can deliver up to 1GBps/800MBps read/write speeds with sequential, compressible data. (Source)

 - This set-up does not use a Marvell controller. It uses Intel's controllers, which are among the best storage controllers around, and the best for consumer desktop boot drives and SSDs. (Source)

 - The storage controller, being integrated on the PCH, prevents any kind of motherboard incompatibility issues.

 - It is cheaper. They are asking EUR 440 for the 240GB version, it seems. I did not even look for the cheapest eStore, but here in Spain you can get a couple of 128GB Pros (for a total of 256GB when RAID'd) for just EUR 238. And you already have the controller; it's integrated on your board. The RAIDR solution is roughly 85% more expensive than the faster, safer and more stable SSD RAID solution (the maths is sketched out just after this list).

 - It does not occupy a PCIe slot. This seems to be the reason people are all over this, because PCIe is faster than SATA and whatever. But the RAIDR does occupy a slot: you can't use it on a mITX board with a discrete card, or on a mATX board with a multi-GPU setup. As Linus and Slick commented on the July 19th WAN Show (I think, not sure), there is little reason to go with bigger boards, as there is little use for the slots nowadays; graphics, audio and RAID are the only useful things to put there. You can (and, IMHO, should) use USB audio solutions such as NwAvGuy's amazing ODA+ODAC (distributed by JDS Labs, I think), and you should not have to use RAID on a desktop; if you need such a massive amount of storage, you should build a proper server.
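And the price point from the list above, spelled out (EUR prices as quoted in the post):

```python
# RAIDR Express 240GB vs two 840 Pro 128GB drives striped, using the EUR prices above.

raidr_eur = 440      # Asus RAIDR Express 240GB
two_pros_eur = 238   # 2x Samsung 840 Pro 128GB (256GB when striped)

premium = raidr_eur / two_pros_eur - 1
print(f"The RAIDR costs {premium:.0%} more than the two-drive RAID 0")  # -> ~85% more
```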

 

 

Until we get native PCIe SSD controllers (not SATA interfaces), there's no point in buying these. No point at all. So save your money and get a bazillion SATA SSDs if you need >1GBps sequential speeds or >100K IOPS.

 

[/rant]

 

Maximus V Gene | i5-2500k 4.8GHz | 16GiB | GTX470 SLi @ 800MHz| m4 128GB + 240GB 2.5" HDD | Arch Linux + Windows 8


800D + HX850v2 | EK Supreme-HF v2 | 2x EK FC-470GTX (serial) | Black Ice GTX 360 + 3x AP-15 | U2311h | 6Gv2 + G500 | T50-RP


  • 3 weeks later...

Yeah, but when you've used up all your SATA ports like me, PCI Express SSDs (like the Asus one) look very appealing :)


 


Linus has refurbished Corsair SSDs :P which are bloody old and bloody slow, but if you throw enough of them at the problem you can get speeds like that.

 

They were also only of the 120GB flavor, crippling them even more.

 

Just for shits and giggles...

An LSI 9211-4i supports RAID 0 and costs $175: http://www.newegg.com/Product/Product.aspx?Item=N82E16816118114

A Samsung 840 Pro costs $250 and gets sequential read and write speeds over 500MB/s; 4 of them would cost about $1,000: http://www.newegg.com/Product/Product.aspx?Item=N82E16820147193

 

This setup would get you near 2GB/s reads and writes and would cost about $1175

 

A RevoDrive 3 x2 960GB costs about $1600 and has 1500MB/s reads and 1300MB/s writes advertised: http://www.newegg.com/Product/Product.aspx?Item=N82E16820227742

 

I dunno about you, but my money is on the RAID array with this one :P

 

But, if we're throwing price completely out the window... go have a look at Fusion-io's products... 10TB SSD, anyone? http://www.fusionio.com/products/iodrive-octal/

 

Oh but it costs 100k...  :(

 

I was asking about this in another thread and it seems like you guys are on the same topic. My question is: would you be able to get 1.2GB/s with a cheaper RAID card and, instead of 4x 840 Pros, use 4x 64GB SSDs?

 

Which 64GB SSD would you recommend for such a project, and what RAID card under $150 would you recommend? I would like to make a scalable PCIe SSD.

Umbra Aqua - Mac Pro Mod

Phthonos - The Xbox Killer?


OCZ RevoDrive 3 X2: absolute shit product, I'll never buy or touch an OCZ product again :angry:

 

But I've now had the Asus RAIDR PCIe card for 1 week and I love it; it's working perfectly.

 

Asus RAIDR Express PCI Express SSD 240 GB :wub: 

 

OCZ RevoDrive 3 X2: absolute shit product, I'll never buy or touch an OCZ product again :angry:

 

But I've now had the Asus RAIDR PCIe card for 1 week and I love it; it's working perfectly.

 

Asus RAIDR Express PCI Express SSD 240 GB :wub: 

 

 

It seems that it doesn't perform that well against the best SSDs on the market.


  • 1 year later...

Just for shits and giggles...

An LSI 9211-4i supports RAID 0 and costs $175: http://www.newegg.com/Product/Product.aspx?Item=N82E16816118114

A Samsung 840 Pro costs $250 and gets sequential read and write speeds over 500MB/s; 4 of them would cost about $1,000: http://www.newegg.com/Product/Product.aspx?Item=N82E16820147193

 

This setup would get you near 2GB/s reads and writes and would cost about $1175

Really? I don't think you would see even remotely close to 2GB/s with 4 SSDs; probably under 1GB/s.

If you RAID 0 two Samsung 840 Pros, you will see a performance loss compared to a single drive in daily usage (random 4K reads/writes).

You are talking about synthetic benchmarks of sequential reads that do not reflect real-world performance.

But hey, if you build your computer to get high numbers on benchmarks and don't care how it performs in actual usage, then yeah, RAID it up.

 

I do agree, though, that the RAIDR is a waste of money. It won't match the performance of an 840 Pro in most situations.

PCIe SSDs using the SF-2281 = goodbye to all your data at some point (especially with the OCZ RevoDrive 350 and G.Skill Phoenix Blade: 4x SF-2281 = 4x the chance of death).


Really? I don't think you would see even remotely close to 2GB/s with 4 SSDs; probably under 1GB/s.

If you RAID 0 two Samsung 840 Pros, you will see a performance loss compared to a single drive in daily usage (random 4K reads/writes).

You are talking about synthetic benchmarks of sequential reads that do not reflect real-world performance.

But hey, if you build your computer to get high numbers on benchmarks and don't care how it performs in actual usage, then yeah, RAID it up.

 

I do agree, though, that the RAIDR is a waste of money. It won't match the performance of an 840 Pro in most situations.

PCIe SSDs using the SF-2281 = goodbye to all your data at some point (especially with the OCZ RevoDrive 350 and G.Skill Phoenix Blade: 4x SF-2281 = 4x the chance of death).

Bro, you're replying to a 1.5-year-old post :D

+°´°+,¸¸,+°´°~ Glorious PC master gaming race :wub: ~°´°+,¸¸,+°´°+
BigBox: Asus P8Z77-V, 3570k, 8GB Ram, Intel 180GB & Sammy 750GB, HD4000, W7
PiBox: Raspberry Pi, BCM @ 1225MHz ^_^, 256MB RAM, 16GB storage, pIO, Raspbian


I was asking about this in another thread and it seems like you guys are on the same topic. My question is: would you be able to get 1.2GB/s with a cheaper RAID card and, instead of 4x 840 Pros, use 4x 64GB SSDs?

Which 64GB SSD would you recommend for such a project, and what RAID card under $150 would you recommend? I would like to make a scalable PCIe SSD.

 

DO NOT respond to old threads. Please start a new, more relevant one.

Forum Links - Community Standards, Privacy Policy, FAQ, Features Suggestions, Bug and Issues.

Folding/Boinc Info - Check out the Folding and Boinc Section, read the Folding Install thread and the Folding FAQ. Info on Boinc is here. Don't forget to join team 223518. Check out other users Folding Rigs for ideas. Don't forget to follow the @LTTCompute for updates and other random posts about the various teams.

Follow me on Twitter for updates @Whaler_99

 

 

 

 


This topic is now closed to further replies.
