
Is M.2 RAID + SLI impossible? Bottlenecks everywhere.

Hey everyone! I am planning to build a PC next month. I have been following all the tech news for the last year, and when Broadwell-E released I have to say I was kinda disappointed.

Anyway. I have been browsing for a few hours now trying to find possible bottlenecks in my build (6700K, MSI Titanium, dual 1070s).

This build was planned around the concept of having:

  • An SLI GPU configuration.

  • Dual M.2 NVMe drives in RAID 0.

  • One or two SATA ports left over for HDDs.

It seems like there is a huge DMI bottleneck here, because the CPU communicates with the PCH (Z170 chipset) through a 3.93 GB/s link, meaning RAID 0 with the NVMe drives is kinda pointless (the USB and SATA ports also use this link). I have read that you can get about 3 GB/s with the on-board M.2 connectors, while with PCIe adapter cards (the ones that adapt M.2 to PCIe 3.0 x4) speeds go above 5 GB/s.
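The 3.93 GB/s figure checks out on paper. Here is a back-of-the-envelope sketch in Python, assuming DMI 3.0 behaves like a PCIe 3.0 x4 link with 128b/130b encoding and ignoring packet/protocol overhead:

```python
# Rough DMI 3.0 bandwidth ceiling. Assumptions: 4 lanes at 8 GT/s each,
# 128b/130b line encoding, no packet/protocol overhead counted.
LANES = 4
GT_PER_S = 8e9            # transfers per second, per lane
ENCODING = 128 / 130      # usable bits per transferred bit

bytes_per_s = LANES * GT_PER_S * ENCODING / 8
print(f"DMI 3.0 ceiling: {bytes_per_s / 1e9:.2f} GB/s")  # ~3.94 GB/s
```

Two NVMe drives that each read at ~2.5 GB/s would want ~5 GB/s in RAID 0, so behind the chipset the array tops out at the link, not at the drives.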

Please correct me if I am wrong up to this point.

So I thought, well, scratch that, I'll go with two PCIe adapters in RAID 0. Bottleneck number 2: not enough PCIe lanes (16 on the 6700K), and 1070 SLI alone requires 8 + 8 = 16. So I thought, well, let's go X99 then. It seems like M.2 NVMe on X99 is buggy: setting up a RAID is very hard, and booting from a RAID array is impossible??
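To make the shortfall concrete, here is a toy lane-budget tally for the build described above (the device names and x4 adapter assumption are illustrative):

```python
# Hypothetical CPU-lane budget: two GPUs in SLI plus two M.2-to-PCIe
# x4 adapters, all hanging off the CPU's lanes.
CPU_LANES = 16  # Skylake-S (6700K) exposes 16 PCIe 3.0 lanes

devices = {
    "GPU 1 (SLI)": 8,
    "GPU 2 (SLI)": 8,
    "M.2 adapter 1": 4,
    "M.2 adapter 2": 4,
}

needed = sum(devices.values())
shortfall = max(0, needed - CPU_LANES)
print(f"needed {needed}, available {CPU_LANES}, short by {shortfall}")
```

That is 24 lanes wanted against 16 available, so something has to move to the chipset (and back behind the DMI link) on this platform.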

So finally, my question: is it possible to build a system that

1. Has SLI

2. Has an NVMe M.2 RAID 0 array that you can boot Windows from

3. Has no bottlenecks?


SLI will go through the CPU's PCIe slots, whereas storage will go through the chipset's.

It is also possible to do quad-SLI using some of the chipset slots, so having a RAID config with M.2 drives won't do a lot, if anything, to performance.

 

Edit: Forgot the last part. Yes, you can build a system with SLI and NVMe RAID 0 that is bootable (booting from it is only supported on Skylake/X99), and without a lot of bottlenecks.

Theory is when you know everything but nothing works. Practice is when everything works but you don't know why. In this computer, theory and practice are combined: nothing works and I don't know why.

 

At least I can manage some things, like my current OC personal best.


For that many PCIe peripherals, you'd need 24 PCIe lanes, which forces you onto X99. Moreover, RAID for a boot drive is not that great, because the mobo has to assemble the array at each boot, slowing the boot down. I'd advise you to go with only one M.2 drive and back it up to another disk periodically, or put two other HDDs or SSDs in RAID 0 for your data and programs :)

Sorry for Bad English, Baguette here 

STENDHAL: CPU: i5 6600K | MOBO: ASUS MAXIMUS VIII Ranger | GPU: MSI GTX 1070 GAMING X | RAM: 2x8GB Corsair Vengeance LPX 2400MHz | Case: CM HAF XB | Storage: Kingston UV400 240GB SSD + 750GB WD Blue | CPU Cooler: Hyper 212X | PSU: Corsair RM750X


3 minutes ago, TeeTwo said:

SLI will go through the CPU's PCIe slots, whereas storage will go through the chipset's.

It is also possible to do quad-SLI using some of the chipset slots, so having a RAID config with M.2 drives won't do a lot, if anything, to performance.

Sadly the chipset only handles 4 PCIe lanes, putting the max "usable" PCIe lanes at 20 on Z170



1 minute ago, BbsMentos said:

Sadly the chipset only handles 4 PCIe lanes, putting the max "usable" PCIe lanes at 20 on Z170

Ah ok, so that is how it works. Thanks.

In this case, Skylake seems like a no-go for this kind of build.



1 minute ago, TeeTwo said:

Ah ok, so that is how it works. Thanks.

In this case, Skylake seems like a no-go for this kind of build.

Clarification here 



I wouldn't really worry about a bottleneck unless you have the drives hammering the system all the time.

 

I'd also reconsider RAID 0 NVMe unless your application really requires that much bandwidth. NVMe barely improves practical application loading performance over SATA at all (at best I've seen 1-2 seconds), and NVMe does not have any better latency than SATA SSDs.

 

EDIT: As some measure of proof that NVMe does not really improve practical application loading:

(a RAM drive is at least an order of magnitude better than NVMe)


56 minutes ago, M.Yurizaki said:

I wouldn't really worry about a bottleneck unless you have the drives hammering the system all the time.

 

I'd also reconsider RAID 0 NVMe unless your application really requires that much bandwidth. NVMe barely improves practical application loading performance over SATA at all (at best I've seen 1-2 seconds), and NVMe does not have any better latency than SATA SSDs.

 

EDIT: As some measure of proof that NVMe does not really improve practical application loading:

(a RAM drive is at least an order of magnitude better than NVMe)

Yeah, I know about that. Going from a SATA SSD to NVMe doesn't really make much of a difference in apps either. The thing is that in the future, let's say I WILL be copying files to external devices using USB 3.1 at 10 Gbit (if these things become a reality). So connecting to the USB port (which goes through the chipset) while also using an M.2 NVMe RAID will bottleneck the transfer speed to half (I think; I don't really know how these chipsets work at that level, so that's my guess).
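For what it's worth, a single USB 3.1 Gen 2 transfer may not come close to saturating the DMI link. A rough sketch (assuming USB 3.1 Gen 2's 128b/132b encoding and ignoring protocol overhead):

```python
# Payload ceiling of a 10 Gbit/s USB 3.1 Gen 2 port vs. the ~3.93 GB/s
# DMI 3.0 figure quoted earlier. Assumes 128b/132b encoding, no
# protocol overhead counted.
usb_gbs = 10e9 * (128 / 132) / 8 / 1e9   # ~1.21 GB/s
dmi_gbs = 3.93

print(f"USB 3.1 Gen 2 ceiling: {usb_gbs:.2f} GB/s")
print(f"Share of DMI link:     {usb_gbs / dmi_gbs:.0%}")
```

So even with the RAID and the USB port both behind the chipset, one Gen 2 transfer would take roughly a third of the link, not half, before any other traffic is counted.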


1 hour ago, BbsMentos said:

For that many PCIe peripherals, you'd need 24 PCIe lanes, which forces you onto X99. Moreover, RAID for a boot drive is not that great, because the mobo has to assemble the array at each boot, slowing the boot down. I'd advise you to go with only one M.2 drive and back it up to another disk periodically, or put two other HDDs or SSDs in RAID 0 for your data and programs :)

I don't really care about boot speeds. I care about transfer speeds between my system RAID and external devices (USB 3.1 and Thunderbolt). That's the reason I want this kind of build. Also, my main problem with X99 is that it's very hard to make a bootable RAID. Do you have in mind a reasonably priced motherboard that can do that for sure?


6 minutes ago, Senjar said: Do you have in mind a reasonably priced motherboard that can do that for sure?

I'm not really an expert on X99 boards; look for the brand you like the most and check reviews :)



5 hours ago, Senjar said:

Yeah, I know about that. Going from a SATA SSD to NVMe doesn't really make much of a difference in apps either. The thing is that in the future, let's say I WILL be copying files to external devices using USB 3.1 at 10 Gbit (if these things become a reality). So connecting to the USB port (which goes through the chipset) while also using an M.2 NVMe RAID will bottleneck the transfer speed to half (I think; I don't really know how these chipsets work at that level, so that's my guess).

Even if a USB 3.1 device exists, will it even use a drive that can take advantage of that speed? Plan for what you need now and a little more. Planning for the far future usually ends with spending more than you really should have.

 

Also, transfer speeds depend on what files you're transferring. If you're backing up your pictures and music, you're never going to see the benefit, as latency and file-system overhead will kill the transfer rate (I get maybe 40 MB/s over time if I copy my music collection wholesale). That is, the more files you have, the worse your transfer speed will be.
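The many-small-files effect can be sketched with a toy model: treat each file as paying a fixed latency/metadata cost on top of its raw transfer time (all numbers illustrative, not measured):

```python
# Toy model of effective transfer rate with fixed per-file overhead.
def effective_rate(file_size_mb: float, link_rate_mbs: float,
                   per_file_overhead_s: float) -> float:
    """Effective MB/s when each file adds a fixed overhead (seek,
    metadata, latency) before its data moves at the raw link rate."""
    transfer_s = file_size_mb / link_rate_mbs
    return file_size_mb / (transfer_s + per_file_overhead_s)

# 5 MB music files over a 400 MB/s link, 100 ms of overhead per file:
print(f"{effective_rate(5, 400, 0.1):.0f} MB/s")  # ~44 MB/s
```

With overhead like that, the raw link speed barely matters, which lines up with the ~40 MB/s figure above.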


@Senjar

 

This video shows 3x Samsung 950 Pro set up in RAID on a Z170 motherboard. It seems that two are enough to saturate the DMI link.

 

 

 

 


7 hours ago, lee32uk said:

The Z170 provides up to 20 PCIe 3.0 lanes from the chipset.

 

http://www.intel.co.uk/content/www/uk/en/chipsets/performance-chipsets/z170-chipset-diagram.html

That is 20 lanes total; the chipset can't handle more than the CPU



Thanks everyone. I will be going with a Z170 build to keep the cost down and use 2x 256 GB 950s. According to the PCPer video I will be getting about 2.5 GB/s writes and 3 GB/s reads, which is more than enough.

Thanks again, have a nice day ;) 


35 minutes ago, lee32uk said:

You are wrong. You get a total of 36 lanes. 

 

Want more proof?

 

http://www.hardocp.com/article/2015/08/12/intel_z170_chipset_summary/#.V2PQiLsrL8A

So running SLI on Z170 would take 32 lanes? Or do they have to run on the same chip (both on the CPU or the chipset)?



10 minutes ago, BbsMentos said:

So running SLI on Z170 would take 32 lanes? Or do they have to run on the same chip (both on the CPU or the chipset)?

I think that SLI only works on slots that go directly to the CPU (16 lanes). According to the HardOCP article, the chipset has 20 more lanes, so it's 16 + 20 (36 lanes total). The chipset connects to the CPU through DMI 3.0 at 8 GT/s (which is about 4 GB/s, I think). That link is the bottleneck.

 

Edit: You can only do 2-way SLI on Z170.


1 minute ago, Senjar said:

I think that SLI only works on slots that go directly to the CPU (16 lanes). According to the HardOCP article, the chipset has 20 more lanes, so it's 16 + 20 (36 lanes total). The chipset connects to the CPU through DMI 3.0 at 8 GT/s (which is about 4 GB/s, I think). That link is the bottleneck.

 

Yeah, definitely, but still a discovery for me; I didn't think a chipset could handle more PCIe lanes (even at lower speeds) than the CPU



1 hour ago, BbsMentos said:

So running SLI on Z170 would take 32 lanes? Or do they have to run on the same chip (both on the CPU or the chipset)?

No, SLI on Z170 would be x8/x8, so a total of 16 lanes. A single GPU would be x16.

 

The chipset provides additional lanes for things like M.2 SSDs etc.


3 minutes ago, lee32uk said:

No, SLI on Z170 would be x8/x8, so a total of 16 lanes. A single GPU would be x16.

 

The chipset provides additional lanes for things like M.2 SSDs etc.

That's clearer now, thank you very much sir :)


