
Question about PCIe

I'm a PC nerd, but I barely know how PCIe lanes work.

 

What's the difference between PCIe x16 and x8? How about x4 and x2? And why are some PCIe slots so short while others are long?

 

I want to know the whole PCIe lore.

 

Also, as an example: the X570 chipset can support many PCIe lanes, whereas the B550 can only have one PCIe connection for the GPU and one for the M.2.

Why are there B550 boards with multiple PCIe slots and multiple M.2s, then?

The ROG Strix B550, for example.


There is a limited number of lanes depending on the chipset, and they can be split in different ways. Lanes get used up by SATA ports, USB ports, and other peripherals, which is why you see such wide variation.

 

In my opinion, things have taken a step back over the past decade. It used to be that you'd get multiple x16 slots; now you get just one. SLI is gone completely, and a lot of manufacturers are cutting back on USB ports too.


Think of water pipes, with information as the water and PCIe lanes as the pipes. A 16-inch pipe is going to allow more water flow than an 8-inch pipe, right? And obviously the 16-inch pipe is physically bigger. However, sometimes you'll see only an x2 connection wired to an x16-sized slot, or a shorter slot with one end cut open so cards with x16 connectors can fit, but the maximum speed is still limited by the electrical connection.

 

For B550, think of having only a single 16-inch inlet pipe feeding many 4/8/16-inch outlet pipes with control valves. Each outlet can get the full flow it's rated for on its own, but not if, combined, they ask for more than the inlet can supply.
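To put some numbers on the valve analogy, here's a quick Python sketch of a shared chipset uplink. The 4 GB/s uplink is roughly what a PCIe 3.0 x4 chipset link provides; the device demands and the proportional-sharing behaviour are made up for illustration, not how real arbitration works.

```python
UPLINK_GBPS = 4.0  # roughly a PCIe 3.0 x4 chipset-to-CPU link (assumed figure)

def effective_throughput(demands_gbps):
    """Each device gets what it asks for, unless the combined demand exceeds the
    uplink, in which case everyone is scaled down proportionally."""
    total = sum(demands_gbps)
    if total <= UPLINK_GBPS:
        return demands_gbps
    scale = UPLINK_GBPS / total
    return [round(d * scale, 2) for d in demands_gbps]

print(effective_throughput([3.5]))             # one NVMe drive alone gets its full ~3.5 GB/s
print(effective_throughput([3.5, 3.5, 1.25]))  # two drives + a 10G NIC at once share the 4 GB/s inlet
```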



Think of it as data bandwidth: the more lanes, the more bandwidth, and the more data that can be transferred in parallel at a given time.


They scale linearly: x16 is full bandwidth, x8 is half of that, x4 is a quarter, and x2 is an eighth.

You rarely see x2, since there are usually x1 connections for devices that need little bandwidth, and that keeps the numbers whole without wasting a lane; the CPU can only provide so many lanes.
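To put rough numbers on the scaling, a quick Python sketch, assuming PCIe 4.0's roughly 2 GB/s per lane after encoding overhead (approximate, not exact figures):

```python
# Bandwidth scales linearly with lane count.
PER_LANE_GBPS = 1.97  # approx. PCIe 4.0, GB/s per lane (assumed figure)

for lanes in (16, 8, 4, 2, 1):
    print(f"x{lanes}: ~{lanes * PER_LANE_GBPS:.1f} GB/s")
# x16 ~31.5 GB/s, x8 half of that, x4 a quarter, and so on.
```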


Both the CPU and the chipset provide PCIe lanes. Let's make an example with a fictional CPU and chipset: the CPU provides 20 lanes and the chipset provides 16. The two talk to each other over the DMI; they don't spend their device lanes on each other, but traffic to and from the chipset's lanes reaches the CPU with the DMI acting as a bridge.

So the CPU provides 20 lanes; we'll use 16 for an x16 slot with a GPU. There are 4 left, and all 4 go to an NVMe SSD in the M.2 slot closest to the CPU.

Now you're left with just the chipset's lanes, 16 more. One goes to a USB controller, one to audio, one to front-panel USB, one to networking. That leaves 12 from the chipset and zero from the CPU.

Now you plug two more NVMe SSDs into the lower slots, using 4 and 4 lanes from the chipset. You have 4 left for SATA controllers, or a spare PCIe slot or two.
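If it helps to see the bookkeeping, here's the same fictional 20 + 16 lane budget as a small Python sketch. The device list is just the example above, not any real board:

```python
def allocate(budget, requests):
    """Hand out lanes until the budget runs dry; refuse anything that no longer fits."""
    placed = []
    for name, lanes in requests:
        if lanes <= budget:
            budget -= lanes
            placed.append((name, lanes))
        else:
            print(f"  cannot place {name} (needs x{lanes}, only {budget} left)")
    return budget, placed

# Fictional CPU: 20 lanes
cpu_left, _ = allocate(20, [("GPU slot", 16), ("NVMe #1", 4)])

# Fictional chipset: 16 lanes
chipset_left, _ = allocate(16, [
    ("USB controller", 1), ("audio", 1), ("front-panel USB", 1), ("NIC", 1),
    ("NVMe #2", 4), ("NVMe #3", 4),
])

print("CPU lanes left:", cpu_left)          # 0
print("chipset lanes left:", chipset_left)  # 4 -> SATA or a spare slot or two
```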

 

The lanes can also be configured to be split. Say you use two graphics cards, but your CPU only has 20 lanes: it will bifurcate the top slot's 16 lanes into 8 for that slot and 8 for the next primary slot. You still use all 20 lanes, 16 of which go to the video cards, now split as x8/x8.

 

The physical slot is different from the actual connection. You'll see PCIe slots wired electrically for fewer lanes than their physical size suggests; a bottom x16 slot, for example, is usually wired as x4 to the chipset, not the CPU. It's for wider compatibility with cards that physically need the x16 connector size but not the bandwidth.

This is more common now that PCIe standards are much faster than the cards that use them; cards don't always need all 16 lanes to run at their best speed.

Reference this table:

[Table: approximate bandwidth per lane and per x16 link for PCIe 1.0 through 5.0; each generation doubles the one before it.]

What this shows is that, for example, PCIe 5.0 carries roughly 4 GB/s of data per lane; across 16 lanes, that's about 64 GB/s.

That means a single lane of PCIe 5.0 is as fast as an entire PCIe 1.0 x16 slot using all 16 lanes. Two PCIe 5.0 lanes are as fast as 2.0 x16, and so on. It scales linearly.
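The approximate per-lane figures behind that, if you want to play with the math yourself (rounded GB/s after encoding overhead):

```python
# Per-lane throughput roughly doubles every generation.
PER_LANE_GBPS = {"1.0": 0.25, "2.0": 0.5, "3.0": 0.985, "4.0": 1.97, "5.0": 3.94}

def bandwidth(gen, lanes):
    return PER_LANE_GBPS[gen] * lanes

print(bandwidth("5.0", 1))   # ~3.9 GB/s: one 5.0 lane
print(bandwidth("1.0", 16))  # ~4.0 GB/s: a whole 1.0 x16 slot, about the same
print(bandwidth("2.0", 16))  # ~8.0 GB/s: matched by just two 5.0 lanes
```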

 

This is most common with GPUs, whose actual bandwidth throughput doesn't require all 16 lanes except on some of the highest-end cards, especially at modern PCIe speeds. Something like an RTX 3060 will perform exactly the same on PCIe 3.0 x16 as it would on 4.0 x8; it doesn't need the bandwidth.
This is even more dramatic with older and lower-tier cards. Something that came out in the PCIe 2.0 era wouldn't even need more than one lane of 5.0.

The limitation then becomes the card's own interface controller: cutting any card down to x1 will stunt its performance even if, on paper, the bandwidth is fast enough. Take a 6800 GT from 2005 and put it on a modern motherboard with PCIe 5.0: even though it doesn't need more than one lane's worth of modern bandwidth, run it on one lane and it'll struggle, since the card can't feed enough data through that one lane even if the platform is more than capable of it.

 

But outside of those extremes it's fairly common. A lot of lower-end and even some mid-tier new cards come out as x4 or x8 since they don't need the bandwidth, though they still use x16-length slots physically, since that's the accepted physical standard for video cards.


Lanes are just "cords and cables," essentially pathways so data can travel from A to B. Each generation, 1.0 through 5.0, has a different speed, roughly doubling per generation. So basically, the more lanes and the higher the generation, the faster data can be sent.

So what limits motherboard makers from adding more lanes? Several things, the first being market segmentation and marketing: simply put, they have to make boards with lower-end specs so the higher-end ones look appealing. But why does even the high end have a limit? Because it's also tied to chipset design and architecture limitations, so they must work within the confines of the CPU's and chipset's PCIe allocations. Then you add size and layout constraints on the PCB, and lastly, of course, cost: adding more lanes means more controllers and switches. How much it would cost per motherboard to add more separate lanes, switches, and controllers, I don't know. But it's one of the reasons why, on most motherboards, when you use all the NVMe slots, either a PCIe x4 or x1 slot or some SATA ports get disabled, since they share lanes (see the sketch below). Hope this answers some of your questions.
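To make that lane-sharing point concrete, here's a toy Python version of the sharing rules a board manual spells out. The connector names and pairings are hypothetical, not taken from any specific board:

```python
# Connectors wired to the same lanes: populating one disables the others.
SHARING_RULES = {
    "M2_2": ["SATA_5", "SATA_6"],  # hypothetical: second M.2 borrows two SATA ports' lanes
    "PCIE_X1_2": ["M2_3"],         # hypothetical: bottom x1 slot shares lanes with a third M.2
}

def disabled_ports(populated):
    """Return every port that goes dark given the connectors actually in use."""
    disabled = set()
    for connector in populated:
        disabled.update(SHARING_RULES.get(connector, []))
    return sorted(disabled)

print(disabled_ports(["M2_2"]))               # ['SATA_5', 'SATA_6']
print(disabled_ports(["M2_2", "PCIE_X1_2"]))  # ['M2_3', 'SATA_5', 'SATA_6']
```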


53 minutes ago, aren332 said:

What's the difference between PCIe x16 and x8? How about x4 and x2? And why are some PCIe slots so short while others are long?

The difference between all those slots is how many pins they have, and thus how many lanes, and that's also why some slots are shorter or longer than others.

 

So a PCIe x16 slot is the longest and houses the most pins; PCIe x1 is the shortest, with the fewest pins. However, you can have slots that are physically longer than the link they're wired for: for example, a PCIe x16 slot wired for PCIe x4, or a PCIe x16 slot wired for PCIe x8, the former being more common.

 

This is separate from PCIe generations, which have doubled bandwidth for every generational increase, using various technologies to support that increase.
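A minimal sketch of how the physical and electrical sides combine: the link trains at the smaller of the card's lane count and the slot's wired lane count, as long as the card physically fits (or the slot is open-ended). This ignores generation negotiation and is only illustrative:

```python
def negotiated_width(card_lanes, slot_physical, slot_wired, open_ended=False):
    """Lanes the link actually trains at, or 0 if the card won't seat in the slot."""
    if card_lanes > slot_physical and not open_ended:
        return 0
    return min(card_lanes, slot_wired)

print(negotiated_width(16, 16, 16))                # 16: proper x16 slot
print(negotiated_width(16, 16, 4))                 # 4:  x16-length slot wired as x4
print(negotiated_width(4, 1, 1, open_ended=True))  # 1:  x4 card in an open-ended x1 slot
```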

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch


Another thing I hate is that they keep increasing the bandwidth to drives needlessly. NVMe takes x4 lanes; x1 or x2 would be fine, and those lanes could be used for additional PCIe slots or USB ports. There are motherboards out there with multiple x4 NVMe slots while the second PCIe slot is only x1, which makes it useless for almost anything. They could easily have a single x4 NVMe slot, the rest at x1, and a x4 PCIe slot instead.
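For a rough sense of the numbers in that trade-off, assuming about 2 GB/s per PCIe 4.0 lane (approximate figures only):

```python
GEN4_PER_LANE_GBPS = 1.97  # approx. PCIe 4.0, GB/s per lane (assumed)

for lanes in (4, 2, 1):
    print(f"NVMe x{lanes}: ~{lanes * GEN4_PER_LANE_GBPS:.1f} GB/s")
print("SATA III:  ~0.6 GB/s")
# Whether x1/x2 is enough depends on the drive; plenty of Gen 4 drives can saturate x4.
```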


55 minutes ago, daygeckoart said:

Another thing I hate is that they keep increasing the bandwidth to drives needlessly. NVMe takes x4 lanes; x1 or x2 would be fine, and those lanes could be used for additional PCIe slots or USB ports. There are motherboards out there with multiple x4 NVMe slots while the second PCIe slot is only x1, which makes it useless for almost anything. They could easily have a single x4 NVMe slot, the rest at x1, and a x4 PCIe slot instead.

NVMe slots can be adapted to standard PCIe easily enough. I wish they would give us more lanes on consumer platforms; the lane count hasn't kept up with current system needs. I don't care if they're slots on the board, NVMe, OCuLink, or some other format, just give me more than 20+4 from the CPU without needing to go to Threadripper/Xeon/Epyc.


To add more PCIe lanes from the CPU, you need to add an extra PCIe controller to the die in the CPU package that already carries the PCIe controller and the other peripheral bits (USB, SATA, and so on). That would mean a more expensive silicon die, more pins on the CPU, a bigger socket, and more difficult routing of traces. So AM4 and AM5 are a trade-off between more expensive processors and more expensive motherboards (a pricier socket, and more traces to route to the slots, potentially needing more PCB layers to carry them).

They kept AM4 and AM5 cheaper by reusing the same silicon die with the PCIe, SATA, and USB across all CPUs, and by making that die on 12-14 nm while the main CPU cores are on 7 nm and below.


1 hour ago, mariushm said:

To add more PCIe lanes from the CPU, you need to add an extra PCIe controller to the die in the CPU package that already carries the PCIe controller and the other peripheral bits (USB, SATA, and so on). That would mean a more expensive silicon die, more pins on the CPU, a bigger socket, and more difficult routing of traces. So AM4 and AM5 are a trade-off between more expensive processors and more expensive motherboards (a pricier socket, and more traces to route to the slots, potentially needing more PCB layers to carry them).

They kept AM4 and AM5 cheaper by reusing the same silicon die with the PCIe, SATA, and USB across all CPUs, and by making that die on 12-14 nm while the main CPU cores are on 7 nm and below.

Which they could absolutely do; this is just a way to segment the market. 20 PCIe lanes go back as far as when the northbridge was integrated into the CPU, but back then we were using SATA/PATA drives, which needed much less I/O (a single PCIe lane would have been enough). Now storage is heavily PCIe-based, and you either run a bunch of it through the chipset or move to a workstation/server platform.

 

I'm just salty. I want one PC to do everything. I have no need for a NAS and would love to have all of my storage be flash-based, but with SATA SSDs getting harder and harder to buy, that leaves little option. I know I'm in the minority.

