
AM5 PCIe bottleneck

Ralf

I'm curious what will happen when I use all the M.2, SATA, and PCIe ports on an X670E board. How well will the motherboard and the PCIe 4.0 x4 chipset downlink to the CPU handle it?

 

 



Unless you're using most of your drives connected through the chipset at the same time, you likely won't run into bandwidth issues. Especially with SATA drives, as they max out at 600MB/s, which is not even close to the max ~7GB/s of the link through the chipset. And even if you are using many PCIe drives and devices simultaneously, unless you are transferring large files from drive to drive, the drive's controller is going to be a bottleneck before the PCIe connection to the CPU is.

 

So while in theory it could be a problem, in practice, for the vast majority of use cases, maxing out your PCIe and IO connections won't cause noticeable problems.
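If you want to put rough numbers on that, here's a quick back-of-the-envelope Python sketch. The device list and peak figures are purely illustrative, not taken from any particular board:

# Worst-case simultaneous demand vs. the PCIe 4.0 x4 chipset uplink.
# All figures are illustrative theoretical peaks, not measurements.
UPLINK_GBPS = 7.0  # approx. usable PCIe 4.0 x4 bandwidth in GB/s

devices = {
    "SATA SSD #1": 0.6,              # SATA tops out around 600 MB/s
    "SATA SSD #2": 0.6,
    "Gen4 NVMe on chipset M.2": 7.0, # theoretical sequential peak
    "10G LAN": 1.25,
    "USB 3.2 Gen 2 drive": 1.0,
}

demand = sum(devices.values())
print(f"Worst-case demand: {demand:.2f} GB/s vs uplink {UPLINK_GBPS:.2f} GB/s")
if demand > UPLINK_GBPS:
    print("Oversubscribed on paper, but only if everything peaks at once.")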


1 hour ago, Ralf said:

I'm curious what will happen when I use all the M.2, SATA, and PCIe ports on an X670E board. How well will the motherboard and the PCIe 4.0 x4 chipset downlink to the CPU handle it?

 

 

 

As long as you're not doing large file transfers to them simultaneously, you probably wouldn't notice. A PCIe 5.0/4.0 M.2 drive could likely drop from x4 to x1 without the user noticing, unless you're in that scenario, in my opinion.

 

It's useful to look at the motherboard's block diagram in the manual before purchase to make sure there are no conflicts, especially if you intend to saturate the board's slots. For example, some SATA ports on some boards are shared with M.2 slots or PCIe slots.
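If you want to be systematic about it, you can even jot the manual's sharing rules down and check your planned loadout against them. A toy Python sketch; the slot names and rules here are made up for illustration, not from any real manual:

# Hypothetical lane-sharing rules of the kind a board manual lists.
# Populating the key disables the listed ports.
SHARED = {
    "M2_3": {"SATA_5", "SATA_6"},  # M2_3 steals lanes from SATA 5/6
    "PCIE_X16_2": {"M2_2"},        # second x16 slot shares lanes with M2_2
}

planned = {"M2_1", "M2_3", "SATA_5", "PCIE_X16_2"}  # intended loadout

for slot, disabled in SHARED.items():
    if slot in planned:
        for port in sorted(disabled & planned):
            print(f"Conflict: populating {slot} disables {port}")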



59 minutes ago, tkitch said:

PCIe is faster than God; you don't need to worry about running out of speed.

 

The word bottleneck needs to be banned.  >.<

But how will the 4.0 x4 chipset link handle 2-3 4.0 x4 M.2 drives? Won't even one of them max out the bandwidth?

26 minutes ago, Agall said:

 

As long as you're not doing large file transfers to them simultaneously, you probably wouldn't notice. A PCIe 5.0/4.0 M.2 drive could likely drop from x4 to x1 without the user noticing, unless you're in that scenario, in my opinion.

 

It's useful to look at the motherboard's block diagram in the manual before purchase to make sure there are no conflicts, especially if you intend to saturate the board's slots. For example, some SATA ports on some boards are shared with M.2 slots or PCIe slots.

My main concern is that the two chipsets are daisy-chained, with the LAN on the last chipset and USB on the first, so if I start a backup from the main CPU-attached drive to all the other drives, won't even one Gen4 drive max out the bandwidth?



7 minutes ago, Ralf said:

But how will the 4.0 x4 chipset link handle 2-3 4.0 x4 M.2 drives? Won't even one of them max out the bandwidth?

 

Nope, because at no point in normal use do you actually use that much bandwidth.


37 minutes ago, Ralf said:

But how will the 4.0 x4 chipset link handle 2-3 4.0 x4 M.2 drives? Won't even one of them max out the bandwidth?

My main concern is that the two chipsets are daisy-chained, with the LAN on the last chipset and USB on the first, so if I start a backup from the main CPU-attached drive to all the other drives, won't even one Gen4 drive max out the bandwidth?

Are you going to be using multiple drives for large file transfers simultaneously? And by "large file transfers" I mean that the files themselves are large - over 1GB each - not that the number of files being transferred is large.

 

The reason this matters is that the max sequential speeds for NVMe drives are only ever reached in real-world use in that specific scenario. Transferring a game, a code project, a Blender project, or just about anything else most people actually do on a regular basis involves moving many small files. These small files will be bottlenecked by the controller on the NVMe drive well before they are bottlenecked by the PCIe link. This is why real-world transfer rates are always a fraction of what it says on the box: in the real world, you rarely transfer single, large files. You usually transfer a mix of small, medium, and large files in bulk, and the small ones dominate the transfer time because of filesystem and drive controller overhead.

 

A system backup is a classic example of this sort of file transfer. Most of your files are going to be small, so the PCIe link isn't a concern when it comes to speed, and the traffic over PCIe will be minimal.

 

Unless your regular backups involve nothing but large video files or zip archives or something like that, the PCIe bandwidth is not a big concern.
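If you want to see the small-file penalty for yourself, here's a quick and admittedly crude Python test that writes the same total payload as one big file versus thousands of small ones. Per-file overhead shows up clearly even with OS caching in play:

# Crude demo: same total bytes as one big file vs. many small files.
# Per-file open/close and filesystem overhead dominates the small case.
import os, time, tempfile

TOTAL = 256 * 1024 * 1024        # 256 MB total payload in both cases
big_buf = b"\0" * TOTAL
small_buf = b"\0" * (64 * 1024)  # 64 KB per small file

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    with open(os.path.join(d, "big.bin"), "wb") as f:
        f.write(big_buf)
    t_big = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(TOTAL // len(small_buf)):  # 4096 small files
        with open(os.path.join(d, f"s{i}.bin"), "wb") as f:
            f.write(small_buf)
    t_small = time.perf_counter() - t0

print(f"1 x 256 MB  : {t_big:.2f} s")
print(f"4096 x 64 KB: {t_small:.2f} s (same bytes, more overhead)")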


1 hour ago, Ralf said:

But how will the 4.0 x4 chipset link handle 2-3 4.0 x4 M.2 drives? Won't even one of them max out the bandwidth?

My main concern is that the two chipsets are daisy-chained, with the LAN on the last chipset and USB on the first, so if I start a backup from the main CPU-attached drive to all the other drives, won't even one Gen4 drive max out the bandwidth?

For most scenarios, PCIe bandwidth is overspecced for its requirements. It's why an RTX 4090 running at PCIe 1.1 x16 can still outperform any other card at 4K; the less raw throughput to the CPU a device needs, the less the link matters. The same goes for drives: unless you're constantly reading/writing large files and saturating the bus with sequential transfers, you're likely only using a small fraction of it, which is in general what the south bridge has always been provisioned for. If you are in that scenario, you're better off with an HBA or a PCIe SATA controller running off the north bridge's PCIe connectivity.

 

Really, the north bridge is supposed to handle the most important PCIe devices, as it did even back when it was on the motherboard rather than on the CPU substrate (Ryzen's I/O die or Intel's 'uncore'), with the south bridge handling secondary PCIe devices. Even Intel's 4.0 x8 south bridge (chipset) interlink is unlikely to be a problem, for the same reasons.
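For reference, the commonly quoted usable per-lane figures make both points easy to check in a few lines of Python (numbers rounded, after encoding overhead):

# Approximate usable bandwidth per PCIe lane, by generation, in GB/s
# (after 8b/10b or 128b/130b encoding overhead, rounded).
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def link_gbps(gen: int, lanes: int) -> float:
    return PER_LANE_GBPS[gen] * lanes

print(f"PCIe 4.0 x4 chipset uplink:  {link_gbps(4, 4):.1f} GB/s")   # ~7.9
print(f"PCIe 1.1 x16 (old GPU link): {link_gbps(1, 16):.1f} GB/s")  # ~4.0
print(f"Intel 4.0 x8 chipset link:   {link_gbps(4, 8):.1f} GB/s")   # ~15.8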



2 hours ago, YoungBlade said:

Are you going to be using multiple drives for large file transfers simultaneously? And by "large file transfers" I mean that the files themselves are large - over 1GB each - not that the number of files being transferred is large.

By large I mean zip/video files ranging from a few GB to 150GB.



11 minutes ago, Ralf said:

By large I mean zip/video files ranging from a few GB to 150GB.

In that case, if that's all you're transferring, then yes, you may see the chipset become a bottleneck when transferring across multiple drives simultaneously.

 

That said, even an x2 connection offers 3.5GB/s of bandwidth at PCIe 4.0 speeds. So even if your bandwidth is halved, your 150GB file will transfer in 43 seconds instead of 22 seconds. And note that this loss of bandwidth only happens while you're doing multiple simultaneous transfers; whenever that's the only file you're moving, you'll still get the full speed. That's not exactly a devastating loss in performance when it comes to its impact on your real life.
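Spelling out that arithmetic, assuming ~7GB/s for an uncontended 4.0 x4 link and half that when two transfers share it:

# 150 GB file over a full vs. halved PCIe 4.0 x4 link.
FILE_GB = 150
FULL_LINK_GBPS = 7.0  # approx. usable PCIe 4.0 x4
HALVED_GBPS = 3.5     # e.g. two simultaneous transfers sharing the link

print(f"Full link:   {FILE_GB / FULL_LINK_GBPS:.0f} s")  # ~21 s
print(f"Halved link: {FILE_GB / HALVED_GBPS:.0f} s")     # ~43 s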

 

Now, I understand that time is money, but if 20 seconds of waiting for a file transfer is going to actually harm your livelihood, then why are you even considering a consumer platform in the first place? Go for HEDT or server hardware with dozens of PCIe lanes all connected to the CPU and this concern will be gone. Yes, it costs thousands of dollars more, but if you're actually at the point where 20 seconds of time is worth that much to you, it's time to make the leap from consumer to professional hardware.


42 minutes ago, YoungBlade said:

Now, I understand that time is money, but if 20 seconds of waiting for a file transfer is going to actually harm your livelihood, then why are you even considering a consumer platform in the first place? Go for HEDT or server hardware with dozens of PCIe lanes all connected to the CPU and this concern will be gone. Yes, it costs thousands of dollars more, but if you're actually at the point where 20 seconds of time is worth that much to you, it's time to make the leap from consumer to professional hardware.

I'm more worried about other stuff not working, like in the OP where the bottom x1 slot did not work, or there is this:

 



8 minutes ago, Ralf said:

I'm more worried about other stuff not working, like in the OP where the bottom x1 slot did not work, or there is this:

 

Those are problems with those specific boards, but they aren't an inherent consequence of using up PCIe lanes. It may be that those boards are being overwhelmed by too many PCIe devices, but it is on the motherboard manufacturer to validate that their board works as advertised and as stated in the manual. It has nothing to do with a PCIe bottleneck at the chipset link.


10 hours ago, Ralf said:

By large I mean zip/video files ranging from a few GB to 150GB.

Constantly or occasionally?

