Unannounced AMD 400-Series Chipset Appears In PCI-SIG Integrators List

Just now, NumLock21 said:

Well, Ryzen users are not able to have 2 NVMe SSDs both running at Gen 3 x4.

I've never seen anyone do that personally - and never considered it myself - so no big loss as far as I'm concerned. To address the issue, however, wouldn't they still be capped by the 4-lane connection between the chipset and the CPU? Unless the chipset itself can handle all the communication with them.


1 minute ago, Jito463 said:

I've never seen anyone do that personally - and never considered it myself - so no big loss as far as I'm concerned. To address the issue, however, wouldn't they still be capped by the 4-lane connection between the chipset and the CPU? Unless the chipset itself can handle all the communication with them.

I'm sure someone out there with a Ryzen build is bummed out that they can't run 2 NVMe drives at full speed.


19 minutes ago, Drak3 said:

Neither are users of Zx70 platforms. Two+ drives share a 3.0 x4 uplink. Only on X99/X299/X399 and the server platforms is this not the case.

 

Unless one is to use a riser card, in which case it's a moot point on either platform.

So what you're saying is, when running 2 NVMe SSDs, they run at Gen 3 x2 each?


5 hours ago, NumLock21 said:

Not being able to install additional PCIe devices that have the ability to run at Gen 3. I have an X99 with 40 Gen 3 lanes, so I don't really care that my chipset only has 8 Gen 2 lanes. As for those Intel CPUs with 28 lanes: with SLI using up 16 lanes, there are still 12 left, which is plenty to go around for other Gen 3 devices. Intel Z170 to Z370 have a combined total of 36 to 40 Gen 3 lanes.

But you do realize that Gen 3 devices will very happily run in a Gen 2 slot, and we're talking about devices that don't need the bandwidth that Gen 2 or Gen 3 can actually deliver. If there were an actual impact or negative effect I'd also be a bit disappointed, but since there is no measurable difference to the devices that you put in those slots, I can't think of a single reason why this is actually a bad thing, other than the number saying 2 and not 3 on the spec sheet.

 

Ryzen is doing it right, pushing device connectivity over the CPU lanes rather than relying on a chipset and a single bandwidth bottleneck between that chipset and the CPU. Intel's desktop chipset design is fundamentally flawed for this reason, making all those lanes unfit for high-performance, low-latency devices, which are 90% of the actual PCIe devices you would put in a common gaming computer (GPUs and NVMe). The chipset should only be used for audio, LAN/WLAN, SATA and other stuff like that, which is (or should be) fully integrated into the motherboard or the chipset itself and not an add-in card.

 

 

4 hours ago, Drak3 said:

The chipset's 20 lanes are just 4 PCIe 3.0 lanes being split up. No different from a PLX bridge, because the southbridge is essentially a PLX bridge.

 

4 hours ago, NumLock21 said:

The CPU has 16, the chipset has 20. DMI 3.0 does not use up any of the 20 lanes from the chipset. Devices controlled by the chipset get their 20 lanes from the chipset.

 

As @Drak3 said, those 20 lanes (or any number, for that matter) only have the bandwidth of the DMI interface. You cannot create bandwidth from nothing, so the lanes off the chipset are, in total, restricted to the DMI bandwidth, which is not great. You create/split more lanes out on the chipset for flexibility and the ability to connect more devices; lanes can't be directly shared between devices, which is also why Ryzen uses 8 PCIe 2.0 lanes rather than 4 PCIe 3.0 lanes on the first-generation chipset.
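To put rough numbers on that, here's a minimal sketch in Python; the ~0.985 GB/s per Gen 3 lane and the x4 uplink are the commonly cited approximations, not official figures:

# Rough model of chipset devices sharing one DMI 3.0 uplink.
PCIE3_PER_LANE = 0.985               # GB/s per PCIe 3.0 lane, approx.
DMI3_UPLINK = 4 * PCIE3_PER_LANE     # DMI 3.0 is roughly a PCIe 3.0 x4 link

def effective_bandwidth(active_devices, lanes_each=4):
    """GB/s each active chipset device actually gets."""
    nominal = lanes_each * PCIE3_PER_LANE    # what the link reports
    shared = DMI3_UPLINK / active_devices    # what the uplink allows
    return min(nominal, shared)

for n in (1, 2, 3):
    print(f"{n} active x4 device(s): {effective_bandwidth(n):.2f} GB/s each")
# 1 -> 3.94, 2 -> 1.97, 3 -> 1.31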

 

 

We shouldn't care about the chipset at all; nobody should want any PCIe device that they add to the PC themselves to be connecting through it. CPU lanes are the best and most useful; ask for those. This is, after all, one of the reasons why you went with X99 and not a Z-series chipset.


1 hour ago, NumLock21 said:

I'm sure someone out there with a Ryzen build is bummed out that they can't run 2 NVMe drives at full speed.

You can; most X370 boards have two M.2 slots with full PCIe 3.0 x4 connectivity, you just have to be careful which PCIe slots you populate. Not really an issue, since most people run a single GPU, and other than M.2 that is the only PCIe device they will ever plug in.


10 hours ago, Drak3 said:

By default, routed, by way of the AMD chipset, to a SATA controller. Mainboard manufacturers can reclaim those lanes.

 

Which isn't surprising, seeing how PCIe-reliant EPYC is, and that Ryzen is just its single-die counterpart.

 

Identical functionality. If it walks like a duck, and quacks like a duck, it's certainly not great aunt Jemima.

 

Native lanes are those coming directly from the northbridge, which is integrated on any desktop Intel chip with PCIe 3.0.

Intel's chipset lanes are equivalent to running a PLX bridge, because that is the exact setup Intel uses to get those 20 "PCIe 3.0" lanes.

Not identical. You can't run XFire or SLI off of chipset lanes.


1 hour ago, Bit_Guardian said:

Not identical. You can't run XFire or SLI off of chipset lanes.

Yes, identical. You can't SLI on chipset lanes because the bandwidth is reported as 3.0 x4 max.

Crossfire can be done over those lanes.

 

Identical functionality.


11 hours ago, leadeater said:

But you do realize that Gen 3 devices will very happily run in a Gen 2

Yes, they will be happy to run in Gen 2; it just kind of sucks for Ryzen users that, having waited this long while everything is getting Gen 3, they're still getting the Gen 2 treatment. I also found an interesting article showing a Lenovo system actually capping its DMI 3.0 to Gen 2. The author of the article was able to restore the DMI to full speed through some BIOS modding. This was tested with a Samsung 950 Pro 256GB NVMe SSD.

DMI 3.0 capped at Gen 2

(benchmark screenshot)

 

After BIOS modding, DMI 3.0 running at full speed

(benchmark screenshot)

 

11 hours ago, leadeater said:

which is also why Ryzen uses 8 PCIe 2.0 lanes rather than 4 PCIe 3.0 lanes

Are you talking about the lanes coming out of the chipset, or the communication between the CPU and the chipset?

Ryzen CPU <== PCIe 3.0 x4 ==> X370 ==> 8 PCIe 2.0 lanes

Intel CPU <== PCIe 2.0 x4 (DMI 2.0) ==> X99 ==> 8 PCIe 2.0 lanes

Intel CPU <== PCIe 3.0 x4 (DMI 3.0) ==> Z270 ==> 24 PCIe 3.0 lanes

 

 


1 hour ago, NumLock21 said:

DMI 3.0 capped at Gen 2

(SNIPPED IMAGE)

 

After BIOS modding, DMI 3.0 running at full speed

(SNIPPED IMAGE)

Notice that while the sequential read speeds increased, the write speeds and random (4k) read speeds decreased?  That's not an improvement, in my opinion, and is probably the reason Lenovo made that decision.

 

1 hour ago, NumLock21 said:

Are you talking about the lanes coming out of the chipset, or the communication between the CPU and the chipset?

Ryzen CPU <== PCIe 3.0 x4 ==> X370 ==> 8 PCIe 2.0 lanes

Intel CPU <== PCIe 2.0 x4 (DMI 2.0) ==> X99 ==> 8 PCIe 2.0 lanes

Intel CPU <== PCIe 3.0 x4 (DMI 3.0) ==> Z270 ==> 24 PCIe 3.0 lanes

Except you don't actually get a full 24 PCIe 3.0 lanes, not once you hit the chipset-to-CPU barrier. Those "24 lanes" share an x4 (by four) connection to the CPU. It's fine if only one device (or two) is communicating with the CPU, but if you have several devices vying for that bandwidth at once, then the speed is going to drop significantly.
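A quick back-of-the-envelope sketch of that oversubscription, using the usual ~0.985 GB/s-per-lane Gen 3 approximation:

# Z270's 24 chipset lanes all funnel through a DMI 3.0 (x4) uplink.
PER_LANE = 0.985                      # GB/s per PCIe 3.0 lane, approx.
chipset_lanes = 24
nominal = chipset_lanes * PER_LANE    # ~23.6 GB/s of downstream links
uplink = 4 * PER_LANE                 # ~3.94 GB/s back to the CPU
print(f"nominal {nominal:.1f} GB/s vs uplink {uplink:.2f} GB/s "
      f"-> {chipset_lanes // 4}:1 oversubscribed")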

 

Perhaps a diagram of the Ryzen CPU bus will help (found on Gamers Nexus).

 

(AM4 platform block diagram from Gamers Nexus)


32 minutes ago, Jito463 said:

Except you don't actually get a full 24 PCIe 3.0 lanes, not once you hit the chipset-to-CPU barrier. Those "24 lanes" share an x4 (by four) connection to the CPU. It's fine if only one device (or two) is communicating with the CPU, but if you have several devices vying for that bandwidth at once, then the speed is going to drop significantly.

I understand that no matter how many lanes the chipset has, be it 20, 24, 50, or over 9000, it's still limited by that puny 4-lane-wide bridge connecting the CPU and the chipset. Ignoring that 4-lane bridge, wouldn't more lanes be better?


28 minutes ago, NumLock21 said:

I understand that no matter how many lanes the chipset has, be it 20, 24, 50, or over 9000, it's still limited by that puny 4-lane-wide bridge connecting the CPU and the chipset. Ignoring that 4-lane bridge, wouldn't more lanes be better?

Except if those lanes are going to throttle, what's the point of them?  Besides, if you look at the diagram I posted, those lanes from the chipset aren't used for anything critical.


3 hours ago, NumLock21 said:

I understand that no matter how many lanes the chipset has, be it 20, 24, 50, or over 9000, it's still limited by that puny 4-lane-wide bridge connecting the CPU and the chipset. Ignoring that 4-lane bridge, wouldn't more lanes be better?

Not when the example you're using is NVMe SSDs, because you can only run one off the chipset at full speed; any more than that and they will not be at full speed.

 

1 NVMe = 100% speed

2 NVMe = 50% each

3 NVMe = 33% each

4 NVMe = 25% each

 

So why the heck would you buy more than 1 NVMe drive to connect to the chipset? Stick with one and get a bigger one.

 

If your point is that it's disappointing for Ryzen users that they can't get full speed from multiple NVMe devices off the chipset, then neither can Intel users, so it's meaningless that an Intel chipset has Gen 3 lanes; at this point it's just a number. Ryzen puts its NVMe devices on the CPU lanes, and most people will only use one anyway, so for the most part neither AMD users nor Intel users are disadvantaged by their respective platforms. I do consider AMD's implementation better though, as you get lower latency going to the CPU, meaning better small-block I/O performance, which is the day-to-day task stuff.
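Those percentages in GB/s terms, as a minimal sketch (same ~3.94 GB/s x4 Gen 3 uplink approximation as above):

# Each drive's share of a full x4 Gen 3 link when N drives
# behind the chipset contend for one x4 uplink.
FULL_X4 = 4 * 0.985                   # GB/s, approx. PCIe 3.0 x4

for n in range(1, 5):
    print(f"{n} NVMe: {FULL_X4 / n:.2f} GB/s each ({100 // n}% of x4)")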

 

Also, if you really want to run multiple NVMe devices but want to stick to a lower budget, the 1900X is a much cheaper option than Intel HEDT. Most X299 motherboards only have 2 M.2 slots anyway, so you either have to use a 2.5" U.2 drive, if there is a port, or an add-in-card NVMe drive. Threadripper boards, on the other hand, almost always have 3 M.2 slots and a U.2 port, and you don't have to worry about which CPU you have and which slots you can and can't populate due to artificial PCIe lane limitations.


3 hours ago, leadeater said:

snip

If an NVMe SSD on the chipset was limited to just x1 speed, then I would like that drive to run at PCIe 3.0 x1 rather than PCIe 2.0 x1.

PCIe 2.0 x1 = 500 MB/s

PCIe 3.0 x1 = 984 MB/s

Too bad the Ryzen chipset runs at Gen 2 instead of Gen 3. Even when it's bottlenecked to the extreme, at least Gen 3 will give better performance than Gen 2.
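For reference, those per-lane numbers fall out of the transfer rate and the line encoding; a small sketch using the standard PCIe figures (rounded):

# Usable per-lane PCIe bandwidth from transfer rate and encoding overhead.
def lane_mb_per_s(gt_per_s, payload_bits, total_bits):
    """GT/s * encoding efficiency / 8 bits per byte -> MB/s per lane."""
    return gt_per_s * 1000 * payload_bits / total_bits / 8

print(f"PCIe 2.0 x1: {lane_mb_per_s(5, 8, 10):.0f} MB/s")     # 8b/10b encoding
print(f"PCIe 3.0 x1: {lane_mb_per_s(8, 128, 130):.0f} MB/s")  # 128b/130b encoding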


13 minutes ago, NumLock21 said:

If an NVMe SSD on the chipset was limited to just x1 speed, then I would like that drive to run at PCIe 3.0 x1 rather than PCIe 2.0 x1.

PCIe 2.0 x1 = 500 MB/s

PCIe 3.0 x1 = 984 MB/s

Too bad the Ryzen chipset runs at Gen 2 instead of Gen 3. Even when it's bottlenecked to the extreme, at least Gen 3 will give better performance than Gen 2.

Thing is, that's not how it works. The lanes themselves don't get any slower, nor does the number presented to the device shrink; they will all report the full lane count and speed, but when you actually go to use them the performance will be limited.

 

It's no different to a 24-port switch with a single uplink to the rest of the network. Every device connected to the switch has a 1Gbps connection speed, but there is only 1Gbps of bandwidth to the rest of the network. The difference here is that, unlike with a network switch, all traffic must go up the congested link; there is no direct talking between devices on the chipset.


5 hours ago, NumLock21 said:

If an NVMe SSD on the chipset was limited to just x1 speed, then I would like that drive to run at PCIe 3.0 x1 rather than PCIe 2.0 x1.

PCIe 2.0 x1 = 500 MB/s

PCIe 3.0 x1 = 984 MB/s

Too bad the Ryzen chipset runs at Gen 2 instead of Gen 3. Even when it's bottlenecked to the extreme, at least Gen 3 will give better performance than Gen 2.

If we assume the bandwidth from CPU to PCH is the same (x4 3.0) for both, you're saying you'd rather have 20 gimped 3.0 lanes than 8 full 2.0 lanes, even if the 2.0 lanes end up being faster overall? That's strange, because you seem to have understood that they're both limited by that bandwidth, yet you insist you'd rather have it split 5 ways and called 3.0, without being able to achieve full bandwidth, than have it split two ways and called 2.0.
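To make that concrete, a sketch using the same approximate per-lane rates as earlier in the thread (0.985 GB/s for Gen 3, 0.5 GB/s for Gen 2):

# Two hypothetical chipsets behind the same x4 Gen 3 uplink (~3.94 GB/s):
# 20 "Gen 3" lanes split five ways vs 8 Gen 2 lanes split two ways.
UPLINK = 4 * 0.985                    # GB/s

def per_device(devices, lane_rate, lanes_each=4):
    nominal = lanes_each * lane_rate         # what each device's link reports
    return min(nominal, UPLINK / devices)    # capped by the shared uplink

print(f"five x4 Gen 3 devices: {per_device(5, 0.985):.2f} GB/s each")  # 0.79
print(f"two x4 Gen 2 devices:  {per_device(2, 0.500):.2f} GB/s each")  # 1.97

The two full Gen 2 links come out ahead, which is the point being made.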


10 hours ago, NumLock21 said:

If an NVMe SSD on the chipset was limited to just x1 speed, then I would like that drive to run at PCIe 3.0 x1 rather than PCIe 2.0 x1.

Did you even look at the diagram I posted? The NVMe drives are direct from the CPU, NOT from the chipset. It's only the chipset lanes that are PCIe 2.0.


12 minutes ago, Jito463 said:

Did you even look at the diagram I posted? The NVMe drives are direct from the CPU, NOT from the chipset. It's only the chipset lanes that are PCIe 2.0.

Some boards have the M.2 on the chipset, and yes, I do see that.


11 hours ago, leadeater said:

Thing is, that's not how it works. The lanes themselves don't get any slower, nor does the number presented to the device shrink; they will all report the full lane count and speed, but when you actually go to use them the performance will be limited.

 

It's no different to a 24-port switch with a single uplink to the rest of the network. Every device connected to the switch has a 1Gbps connection speed, but there is only 1Gbps of bandwidth to the rest of the network. The difference here is that, unlike with a network switch, all traffic must go up the congested link; there is no direct talking between devices on the chipset.

Are you sure about that? Because here are the specs for an ASRock B250M Pro4 board.

Quote

 

- 1 x Ultra M.2 Socket (M2_1), supports M Key type 2230/2242/2260/2280 M.2 SATA3 6.0 Gb/s module and M.2 PCI Express module up to Gen3 x4 (32 Gb/s)**
- 1 x Ultra M.2 Socket (M2_2), supports M Key type 2230/2242/2260/2280 M.2 PCI Express module up to Gen3 x4 (32 Gb/s)**
 

*If M2_1 is occupied by a SATA-type M.2 device, SATA3_0 will be disabled.

**If PCIE2 slot or PCI slot is occupied, the PCIe-type M.2 device on M2_1 socket will run at Gen3 x2 (16 Gb/s)

 

And yes, for a network switch, all devices connected to it will get their gigabit connection.

 

 


43 minutes ago, NumLock21 said:

And yes, for a network switch, all devices connected to it will get their gigabit connection.

Yes, but as I said, if there is only a 1Gbps uplink from the switch to the network, any traffic over that link can only run at 1Gbps, just like any device connected to the chipset in a PC. What makes it worse is that ALL device traffic on the chipset must go out that link to the CPU for anything to happen.

 

So it doesn't matter that your network connection reports as 1Gbps; it doesn't mean you actually have 1Gbps of bandwidth available if another device is using the uplink from the switch. It's the same situation with your internet connection: open your network connection status in Windows. It says 1Gbps, correct? But do you actually have 1Gbps internet?

 

Forget the device connection speed; it's not actually relevant to the discussion at hand. If you connected a GPU to an Intel chipset at PCIe 3.0 x16, it would still only be an effective PCIe 3.0 x4, no matter how many times you open GPU-Z and read 3.0 x16 off the detected link speed. Remember, this is only a theoretical example, as there are no x16 slots off an Intel chipset, for this exact reason: it would be pointless.

 

Every NVMe SSD that you connect to an Intel chipset on an M.2 x4 slot will always report all 4 lanes no matter how many are connected; what happens is that they all share an x4 uplink, reducing effective bandwidth. 1 or 10 NVMe SSDs on a chipset is one and the same performance-wise; you would only gain capacity.

 

As for the stuff you quoted, that's not the same situation. That is PCIe switching for lanes that are shared between slots, a design choice by motherboard manufacturers on how they wish to use the lanes off the chipset; it has nothing to do with what we are talking about.

 

If you can explain to me how a single 1Gbps port on a switch can give 23Gbps of bandwidth, or how a PCIe 3.0 x4 link can give 24 GB/s of bandwidth, then you might have a shot at justifying how Intel having PCIe 3.0 is so much better than AMD only having PCIe 2.0.

 

Maybe you just don't understand what we are saying because you are too focused on the link between the device and chipset.


1 hour ago, leadeater said:

snip

I understand exactly what you're talking about. Take the example you gave of a GPU running off the chipset: even though the GPU is rated at x16, it will be limited to just x4 because the chipset's maximum is only x4, but GPU-Z will still say the GPU is at x16.

Sucks that my laptop's MX150 is running at x4. :\

(GPU-Z screenshot showing the MX150 link at x4)

 


8 hours ago, NumLock21 said:

I understand exactly what you're talking about. Take the example you gave of a GPU running off the chipset: even though the GPU is rated at x16, it will be limited to just x4 because the chipset's maximum is only x4, but GPU-Z will still say the GPU is at x16.

Sucks that my laptop's MX150 is running at x4. :\

Anyway, yeah, you're right that it's better to have PCIe 3.0 lanes when you need to switch lanes based on slot population, like that B250 example you gave, as it means you need fewer lanes to a device to sustain the point-to-point link speed, which is a good thing. The difference is that this really only matters on the Intel desktop platform, as it leans more heavily on chipset lanes and lane switching. AMD utilizes the CPU lanes as much as possible and switches those, so you can have one NVMe drive off the CPU and one off the chipset and receive the actual full performance of both. Not sure if there is a motherboard with that configuration, but it's something that could be done.

 

You also have to remember that all M.2 slots, as far as I know, are always off the chipset (PCH), because Intel desktop CPUs only have 16 lanes, and those are connected via switching to two physical x16 slots: the first is electrically x16 until the second is populated, then both are electrically x8. Unless there are motherboards that forgo that second slot altogether, you can't have NVMe SSDs off the CPU on Intel without taking lanes away from the GPU, unless you step over to HEDT.
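A tiny sketch of that switching behavior (the function and slot numbering are made up for illustration):

# Intel desktop CPU-lane switching: 16 lanes shared by two x16 slots.
def electrical_widths(slot2_populated: bool):
    """Return (slot1, slot2) electrical widths for the 16 switched CPU lanes."""
    return (8, 8) if slot2_populated else (16, 0)

print(electrical_widths(False))  # (16, 0): a lone GPU gets the full x16
print(electrical_widths(True))   # (8, 8): populate slot two, both drop to x8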

 

AMD sacrificed the ability to have as many devices in a system as Intel desktop can, but they catered very carefully to the most common configurations, so when you do side-by-side comparisons between platforms, both are equally capable in regard to device connectivity. I haven't really seen a motherboard from either of them that has more slots or capabilities than the other.

 

The best motherboard for this conversation that I have seen is the ASUS CROSSHAIR VI EXTREME, because you can have two NVMe SSDs at full PCIe 3.0 x4 speed directly off the CPU and still have the GPU operating at x16. Now the downside, but not really, is that the second M.2 slot is shared with the second PCIe slot, so if you want dual GPU you can't also have dual NVMe.


56 minutes ago, leadeater said:
snip

 

I'm glad that Ryzen went the route where devices like M.2 get their lanes directly from the CPU; they just should have provided more than 24. I did read somewhere that Ryzen actually had 32 lanes but was limited to 24 because of the chipset, though I have no idea where they got that info from. Also, 4 of Ryzen's CPU lanes are spent on the chipset, which means the user only gets 20; if they had made it a dedicated link, like Intel's DMI, then users would get all 24 lanes. Also, their Zen-based APU, codenamed Raven Ridge, only has 8 lanes.

 

(attached screenshot: amd.JPG)


1 minute ago, NumLock21 said:

I'm glad that Ryzen went the route where devices like M.2 get their lanes directly from the CPU; they just should have provided more than 24. I did read somewhere that Ryzen actually had 32 lanes but was limited to 24 because of the chipset, though I have no idea where they got that info from. Also, 4 of Ryzen's CPU lanes are spent on the chipset, which means the user only gets 20; if they had made it a dedicated link, like Intel's DMI, then users would get all 24 lanes. Also, their Zen-based APU, codenamed Raven Ridge, only has 8 lanes.

Yeah, the die itself has 32, with 24 enabled for Ryzen and 4 of those used for chipset connectivity. Not sure why Ryzen was limited to 24 though; seems a bit dumb to me. I don't think it really matters that PCIe lanes are being used to connect to the chipset: PCIe lanes or DMI both require die area, so just having more PCIe lanes and using those is effectively the same thing anyway. Just pretend there are 4 fewer :P.

 

4 minutes ago, NumLock21 said:

(attached screenshot: amd.JPG)

What's the source for this? It doesn't make much sense without the context, because EPYC has 128 lanes, not 16, and doesn't even have a chipset at all, so I'm a little confused as to what it's showing.


1 minute ago, leadeater said:

Yeah, the die itself has 32, with 24 enabled for Ryzen and 4 of those used for chipset connectivity. Not sure why Ryzen was limited to 24 though; seems a bit dumb to me. I don't think it really matters that PCIe lanes are being used to connect to the chipset: PCIe lanes or DMI both require die area, so just having more PCIe lanes and using those is effectively the same thing anyway. Just pretend there are 4 fewer :P.

 

What's the source for this? It doesn't make much sense without the context, because EPYC has 128 lanes, not 16, and doesn't even have a chipset at all, so I'm a little confused as to what it's showing.

Here it is

https://pcisig.com/developers/integrators-list?field_il_comp_product_type_value=All&combine[]=switches_bridges&combine[]=root_complex&combine[]=systems_motherboard&keys=AMD


8 hours ago, NumLock21 said:

So this only tells us that AMD will still use an x4 connection to the chipset. I just want my 24 lanes (in case they want to keep x4 for the M.2) so that I can get x16/x8 CrossFire.

edit: I know that 32-4 != 30 :/

Link to comment
Share on other sites

Link to post
Share on other sites
