M.2 Question

F Bomb29

When using a CPU that has 28 PCIe lanes (6800K) with 2 GPUs (1080s), will there be any problem adding an M.2 SSD? If you need it, here is the PCIe bandwidth table from my mobo manual: http://imgur.com/a/oWFj2

 

Thanks :)

S y s t e m  S p e c s

CPU: i7 6800k @ 4.4Ghz | MoBo: MSI X99A Gaming Pro Carbon | RAM: 64gb G.Skill Neo RGB 3600 | 

  GPU:  2x GTX 1080  | Storage: Samsung 850 EVO 500gb + WD Black 1TB |

PSU: Corsair RM1000x | Cooling: Custom Loop | Monitors: Asus ROG Swift PG278Q + BenQ RL2455HM  | 

Keyboard: Razer BlackWidow Chroma | Mouse: Razer DeathAdder Elite |

Case: CaseLabs Merlin SM8 | Extra: NZXT Hue+, Logitech G27, HyperX Cloud II |

| Build Log: https://linustechtips.com/main/topic/671484-project-murder |


Just now, anybodykek said:

I don't think so. Why would PCIe lanes bottleneck M.2 slots?

I'm pretty sure they can't be bottlenecked; I'm just wondering if I have enough PCIe lanes on my CPU to add one.



As stated in the manual, with a 28-lane CPU, two GPUs, and an M.2 SSD, you're looking at running x16, x8, and x4 link widths respectively. There shouldn't be any issue with running the second GPU in x8 mode; as far as I know, no current GPU can saturate even that much bandwidth. If there bizarrely is some kind of issue, you can set the first GPU to x8 in your UEFI setup and all should be good.

 

Just to clarify:

 

  • CPU: 28 lanes
    • GPU1: 16 lanes
    • GPU2: 8 lanes
    • M.2: 4 lanes
      • Total: 28 lanes
  • PCH: Up to 8 lanes connected to CPU via DMI 2.0 x4 (switched)
    • Ethernet
    • Storage
    • Etc

Also, the motherboard should automatically allocate the GPUs to x16/x8 when the M.2 SSD is installed.
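The lane budget in the list above is just addition; a quick sketch (device widths are taken from this thread, the variable names and helper are illustrative, not anything the board firmware exposes):

```python
# Sketch of the 28-lane CPU budget described above (i7-6800K).
CPU_LANES = 28

allocation = {
    "GPU1": 16,  # primary GPU at x16
    "GPU2": 8,   # second GPU drops to x8
    "M.2": 4,    # NVMe SSD at x4
}

used = sum(allocation.values())
print(f"{used}/{CPU_LANES} CPU lanes used")  # prints "28/28 CPU lanes used"
assert used <= CPU_LANES, "over the CPU's lane budget"
```

The PCH's lanes hang off the separate DMI link, so they don't count against this total.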


Just now, Runefox said:

As stated in the manual, with a 28-lane CPU, two GPUs, and an M.2 SSD, you're looking at running x16, x8, and x4 link widths respectively. There shouldn't be any issue with running the second GPU in x8 mode; as far as I know, no current GPU can saturate even that much bandwidth. If there bizarrely is some kind of issue, you can set the first GPU to x8 in your UEFI setup and all should be good.

 

Just to clarify:

 

  • CPU: 28 lanes
    • GPU1: 16 lanes
    • GPU2: 8 lanes
    • M.2: 4 lanes
      • Total: 28 lanes
  • PCH: Up to 8 lanes connected to CPU via DMI 2.0 x4 (switched)
    • Ethernet
    • Storage
    • Etc

Alright, thanks man.



11 minutes ago, anybodykek said:

I don't think so. Why would PCIe lanes bottleneck M.2 slots????

Because M.2 uses PCIe lanes on the CPU.


7 minutes ago, 8-Bit Ninja said:

Because M.2 uses PCIe lanes on the CPU.

 

19 minutes ago, anybodykek said:

I don't think so. Why would PCIe lanes bottleneck M.2 slots????

Certain M.2 devices use PCIe, but some M.2 SSDs and slots are SATA III, and some support both.



I'm going to toot my own horn.

 

But the tl;dr version is that NVMe uses its own PCIe lanes, except on LGA 2011-v3 motherboards.

 

EDIT: Okay so I had a closer look at the system you were wanting to use.

 

An NVMe drive using 4 PCIe lanes will go through the processor's PCIe lanes on X99 boards; otherwise it'll run over 2 lanes. However, NVMe drives keyed for 2 lanes are, in all likelihood, only built for 2 lanes.

 

Either way, in my experience there's no appreciable performance improvement for general use when running an NVMe drive over a SATA SSD.


Even in the case of a PCI-E based NVMe M.2 SSD, the CPU still has enough lanes for it alongside dual GPUs: one at x16, one at x8, while the SSD takes x4. The PCH's DMI link is entirely separate and has its own PCI-E lanes with which to handle the rest of the system's needs. As @M.Yurizaki points out in their linked post, this X99 board uses the CPU's PCI-E lanes for NVMe, which works out beautifully since you can't run x16/x16 on a 28-lane CPU anyway.

 

So for @F Bomb29 and @8-Bit Ninja, there's no worry of a bottleneck anywhere in this scenario. Even if this were a Z170 board, where a 6700K has only 16 lanes and the chipset has 20, there wouldn't be any bottlenecking going on.

 

Now, if you wanted to add multiple NVMe SSDs or another GPU to the mix (for reasons), then you'd start bumping up against the limit. But keeping in mind that GPUs don't saturate more than x8 PCI-E 3.0 anyway, you can free another 8 lanes by running the first GPU at x8 too, leaving room for either 2 more NVMe SSDs or an SSD and a third GPU (for PhysX or coin mining or something?), and that's without touching the PCH's lanes in the X99 scenario.
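The "drop GPU1 to x8 and see what fits" reasoning above can be sketched as a simple budget check. The device names, widths, and helper function are mine, purely for illustration; real boards only offer the slot configurations their PCB wiring allows:

```python
# Hypothetical helper: does a set of devices fit within the CPU's lanes?
def fits(devices: dict[str, int], cpu_lanes: int = 28) -> bool:
    """Return True if the requested link widths sum to at most cpu_lanes."""
    return sum(devices.values()) <= cpu_lanes

# Both GPUs at x8 (no real-world loss on PCIe 3.0) frees 8 lanes:
base = {"GPU1": 8, "GPU2": 8, "M.2 #1": 4}

print(fits({**base, "M.2 #2": 4, "M.2 #3": 4}))  # prints True (28/28 lanes)
print(fits({**base, "GPU3": 8}))                 # prints True (28/28 lanes)
```

Either expansion lands exactly on the 28-lane limit, without borrowing anything from the PCH.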

