Blame Intel if you feel eGPUs suck, because the U-series CPU itself is the biggest bottleneck!

Maybe it's just me, but considering this is an ultra-low-voltage CPU meant for lightweight laptops and ultrabooks, and given the market for those, I don't understand why this is an issue. I get that maybe you want more upgradeability options. But at the same time, I'd argue you should buy the computer designed for what you want to do rather than force something not designed for that task into submission.

 

2 hours ago, Defconed said:

12 lanes (in fact 16 HSIO lanes) through a 4 GB/s bottleneck (shared with USB, Ethernet, storage, and more).
I wonder what was in Intel's mind when they designed this.

Because they're not expecting the average user to saturate every single I/O port hanging off the PCH at once. And not every I/O port is going to be interacting with the CPU or memory directly. I would imagine that if you're doing a storage-to-Ethernet transfer, you're not putting the data from storage into RAM and then from RAM into the Ethernet port. It'd be much more efficient to just have the storage controller transfer data directly to the Ethernet controller.

 

But anyway, you don't build highways expecting every car in the city to drive on them all the time.

 

22 minutes ago, porina said:

Quick question: where exactly is the Thunderbolt controller connected? To the chipset?

Likely the chipset, since Intel hasn't integrated anything extra into the CPU die since the PCH was introduced.

 

EDIT: Then again, if this is a laptop without a dGPU, you could wire the TB controller directly to the CPU's PCIe lanes. But going through the chipset is the safer assumption.


22 minutes ago, porina said:

Bleh, finding the various core turbo speeds of mobile processors isn't proving to be easy, and that's without taking into account that mobile parts are more likely to hit thermal limits. My gut feeling is still that the CPU would be limiting if you pair it with a higher-end GPU; with a lower-end GPU, it would be GPU-limited. The bandwidth is probably not as significant in either case.


Well, it's not that bad with new-gen CPUs at the moment ;) Intel moved from mainly dual-cores to quad-cores, and AMD offers quad-cores as well. There are even six-core parts available, so that's also interesting.

My 4C/8T R5 2500U scores 143 cb in the single-threaded test and 670 cb in the multi-threaded one, which is quite decent, and it should be able to handle even higher-end GPUs. And that's a mid-range part in a fairly budget-oriented gaming laptop; it has a desktop RX 560 in it. Bandwidth might actually be a limiting factor here, especially if you have a higher-end AMD or Intel chip inside.



22 minutes ago, porina said:

Quick question: where exactly is the Thunderbolt controller connected? To the chipset?

It depends.
On U-series, everything (DIMMs excluded, of course) comes out of the PCH (chipset).
On S-series (desktop/HQ) it varies: some notebooks route it through the PCH, a few connect it directly to the CPU.
The problem is that on U-series, everything can be grabbing that juicy 4 GB/s of throughput away from your GPU (external or internal; integrated doesn't count), making the bottleneck even worse.
As the link above shows, a plain PCIe Gen 3 x4 connection doesn't hurt performance that much.
So I suspect the actual available bandwidth is far less than that, given that NVMe storage also needs significant bandwidth.
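For reference, that ~4 GB/s ceiling can be sanity-checked with a quick back-of-envelope calculation, assuming DMI 3.0 is electrically equivalent to a PCIe Gen 3 x4 link (8 GT/s per lane, 128b/130b encoding); these constants are assumptions for illustration, not measurements from this thread:

```python
# Back-of-envelope link bandwidth, assuming DMI 3.0 behaves like
# a PCIe Gen 3 x4 link: 8 GT/s per lane, 128b/130b line encoding.
GT_PER_S = 8.0          # Gen 3 transfer rate per lane
ENCODING = 128 / 130    # 128b/130b encoding overhead

def link_bandwidth_gbs(lanes: int) -> float:
    """Raw one-direction bandwidth in GB/s for a Gen 3 link."""
    return lanes * GT_PER_S * ENCODING / 8  # bits to bytes

for lanes in (1, 2, 4, 16):
    print(f"x{lanes}: {link_bandwidth_gbs(lanes):.2f} GB/s")
# x4 works out to ~3.94 GB/s, i.e. the "4 GB/s" DMI ceiling that
# everything behind the PCH (NVMe, USB, Ethernet, TB3) shares.
```

This is the theoretical line rate before protocol overhead, so real shared throughput would be somewhat lower still.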


5 minutes ago, M.Yurizaki said:

 

Maybe it's just me, but considering this is an ultra-low-voltage CPU meant for lightweight laptops and ultrabooks, and given the market for those, I don't understand why this is an issue. I get that maybe you want more upgradeability options. But at the same time, I'd argue you should buy the computer designed for what you want to do rather than force something not designed for that task into submission.

 

To be honest, I wouldn't even consider an eGPU myself; I think it's both expensive and ineffective.
I opened this thread more for the technical discussion, given that LTT recently covered thin-and-lights with eGPUs.
Someone will eventually choose this option, and TB3 itself advertises eGPU support as a selling point.
If they're advertising it, why can't I dig into it, right?
So even if it seems a bit skeptical, I think discussing why such a performance loss exists is worthwhile.
Ideology and practical use cases aside, I'm just confused about why they route PCIe solely through the PCH.
I mean, even desktop Ryzen provides an additional x4 PCIe link for NVMe storage. Why would Intel go the other way?
I just don't see the logic in this, and I'm hoping for an answer/discussion.


8 minutes ago, Morgan MLGman said:

Well, it's not that bad with new-gen CPUs at the moment ;) Intel moved from mainly dual-cores to quad-cores, and AMD offers quad-cores as well. There are even six-core parts available, so that's also interesting.

My 4C/8T R5 2500U scores 143 cb in the single-threaded test and 670 cb in the multi-threaded one, which is quite decent, and it should be able to handle even higher-end GPUs. And that's a mid-range part in a fairly budget-oriented gaming laptop; it has a desktop RX 560 in it. Bandwidth might actually be a limiting factor here, especially if you have a higher-end AMD or Intel chip inside.

As the TechPowerUp post I linked above shows, PCIe x8 sacrifices almost no performance (1%), and even x4 didn't hurt by more than something like 10%.
That's also why I'm posting this thread: I doubt whether the actual bandwidth is even x4, or just x2.
Because the performance loss on an eGPU, compared to the desktop alternative (recent LTT video), is almost 50%.
I'm not sure how much of that difference comes from the CPU.
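To put the "is it even x4" doubt in numbers, here's a rough sketch of the link ceilings involved. The ~22 Gb/s of usable PCIe bandwidth inside a TB3 tunnel is a commonly cited figure, not something measured in this thread:

```python
# Approximate one-direction bandwidth ceilings relevant to an eGPU.
# Gen 3 lanes run at 8 GT/s with 128b/130b encoding; TB3 reserves
# part of its 40 Gb/s link, commonly leaving ~22 Gb/s for PCIe data.
GEN3_LANE_GBS = 8 * (128 / 130) / 8   # ~0.985 GB/s per Gen 3 lane

links = {
    "PCIe 3.0 x16 (desktop slot)": 16 * GEN3_LANE_GBS,
    "PCIe 3.0 x4 (DMI / M.2)":      4 * GEN3_LANE_GBS,
    "TB3 PCIe tunnel (~22 Gb/s)":  22 / 8,
    "PCIe 3.0 x2":                  2 * GEN3_LANE_GBS,
}
for name, gbs in links.items():
    print(f"{name:29s} {gbs:5.2f} GB/s")
# The TB3 tunnel lands between x2 and x4, and that's before any
# sharing with other PCH traffic; consistent with eGPU losses
# exceeding the ~10% seen in plain x4 scaling tests.
```

So even in the best case, an eGPU never sees a full x4's worth of bandwidth, which supports the suspicion above.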


14 minutes ago, Defconed said:

To be honest, I wouldn't even consider an eGPU myself; I think it's both expensive and ineffective.
I opened this thread more for the technical discussion, given that LTT recently covered thin-and-lights with eGPUs.
Someone will eventually choose this option, and TB3 itself advertises eGPU support as a selling point.

The biggest issue I see with going with a U-series CPU is that it leads to everyone's favorite B-word here: bottleneck (of the CPU kind).

 

Assuming PCIe 3.0 x4 is available without any other device sharing the bus, even when it's connected through the PCH, there wouldn't be noticeable GPU performance degradation versus 3.0 x16. What you do end up with, though, is a lower-performing CPU.

Quote

If they're advertising it, why can't I dig into it, right?

I don't know of any company advertising eGPU options, except Dell for their Alienware laptops, and Razer. But they don't use U-series CPUs on those advertised laptops either (well, I don't know if Razer advertises eGPU functionality on their 13).

 

Quote

So even if it seems a bit skeptical, I think discussing why such a performance loss exists is worthwhile.

Ideology and practical use cases aside, I'm just confused about why they route PCIe solely through the PCH.
I mean, even desktop Ryzen provides an additional x4 PCIe link for NVMe storage. Why would Intel go the other way?
I just don't see the logic in this, and I'm hoping for an answer/discussion.

Intel's CPUs typically provide 16, 28, or 40 lanes, depending on which one you're getting. So technically not all of the PCIe lanes come from the PCH, just the ones for peripherals. The PCH setup has also been around since 2009 or so, and unless there's a dire need to change it, I don't think Intel will move away from it.

 

As far as the NVMe allocation on Ryzen goes, I'm skeptical about NVMe's practical benefit in the consumer market anyway. NVMe doesn't improve real-world performance over SATA SSDs nearly as much as SATA SSDs did over HDDs. On top of that, the worry about NVMe drives sharing DMI bandwidth only matters if every I/O device is hammering the PCH and wanting access to the CPU at once. I mean, the Ryzen setup is nice, but it's only really useful if the NVMe drive is hammering data to and from CPU/RAM. Between, say, an NVMe drive and another storage device on Intel's systems, I wouldn't expect a real performance loss, assuming the PCH can keep up with the bandwidth for both.

 

Not to mention storage performance depends heavily on the operation you're doing, and most operations tend to be small.

 

11 minutes ago, Defconed said:

As the TechPowerUp post I linked above shows, PCIe x8 sacrifices almost no performance (1%), and even x4 didn't hurt by more than something like 10%.
That's also why I'm posting this thread: I doubt whether the actual bandwidth is even x4, or just x2.
Because the performance loss on an eGPU, compared to the desktop alternative (recent LTT video), is almost 50%.
I'm not sure how much of that difference comes from the CPU.

TechPowerUp's test on the GTX 1080 Ti (I think? It was the last one they did) also included PCIe x4 routed through the PCH and reported little performance impact compared to using 4 lanes from the CPU itself.


2 minutes ago, M.Yurizaki said:

I don't know of any company advertising eGPU options,

Apple does, and they single-handedly got everyone calling it "eGPU".
 

 

4 minutes ago, M.Yurizaki said:

Intel's CPUs typically provide 16, 28, or 40 lanes, depending on which one you're getting. So technically all of the PCIe lanes are not coming from the PCH. Just the ones for peripherals. The PCH system also has been around since 2009ish, and unless there's a dire need to update this, I don't think Intel will move away from it.

 

Again, I'm talking about the "everything through the PCH" design on U-series; I'm not complaining about desktop/HQ.
It's always nicer to have extra I/O, especially when it doesn't cost you extra money; that's why I mentioned Ryzen's extra lanes.
The same logic applies to the PCH-hammering point. I do ask for a bit too much sometimes, I admit XD
Like, I'm OK with the PCH running all the storage, even NVMe drives.
But adding a GPU through the PCH? Ehhh, probably not.
Which is the point of this thread: why is Intel doing it?
 

 

8 minutes ago, M.Yurizaki said:

and reported little performance impact compared to using 4 lanes from the CPU itself.

That's also what I'm curious about: I wonder how much is the CPU and how much comes from bandwidth.
Does the GPU actually get that much bandwidth? And what about the latency? I can't tell.
(Latency does hurt performance.)


15 minutes ago, Defconed said:

Apple does, and they single-handedly got everyone calling it "eGPU".

From what I gather, this is a recent thing and the term eGPU has been around for a while.

 

15 minutes ago, Defconed said:

Again, I'm talking about the "everything through the PCH" design on U-series; I'm not complaining about desktop/HQ.
It's always nicer to have extra I/O, especially when it doesn't cost you extra money; that's why I mentioned Ryzen's extra lanes.

That extra I/O will cost extra, especially high-performance I/O. When you're expecting to sell hundreds of thousands of units, every penny counts.

15 minutes ago, Defconed said:

The same logic applies to the PCH-hammering point. I do ask for a bit too much sometimes, I admit XD
Like, I'm OK with the PCH running all the storage, even NVMe drives.
But adding a GPU through the PCH? Ehhh, probably not.
Which is the point of this thread: why is Intel doing it?

The question seems to imply Intel is advertising using eGPUs, which I don't think they are, even for the gaming-oriented NUCs (for which they asked AMD for a GPU anyway, so...).

 

15 minutes ago, Defconed said:

That's also what I'm curious about: I wonder how much is the CPU and how much comes from bandwidth.
Does the GPU actually get that much bandwidth? And what about the latency? I can't tell.
(Latency does hurt performance.)

PCIe is PCIe is PCIe. PCIe lanes from the PCH perform the same as lanes from the CPU. Maybe not identically all things considered, but a PCH PCIe lane isn't different from a CPU PCIe lane in the sense of having less bandwidth or something (assuming the same PCIe protocol). And considering TechPowerUp's article on PCIe scaling, going through the PCH seems to have little impact.


7 hours ago, M.Yurizaki said:

From what I gather, this is a recent thing and the term eGPU has been around for a while.

 

I don't personally live in an English-speaking environment.
From my observation, people used to call it by its full name until Apple showed off an official eGPU solution, and then everyone started calling it "eGPU" all at once.
I'd call that a significant influence; sorry I forgot to mention this.

 

7 hours ago, M.Yurizaki said:

every penny counts.

According to Intel ARK, the 8250U sells at $297 while the 8400H sells at $250, and the same pattern holds even for the 6th- and 7th-gen dual-core U series. I don't see this as a cost-cutting measure; it doesn't make much sense.
It's easier to swallow if you also take the position that "this saves power".
But I don't know whether it saves any meaningful amount of power either; I'd be glad to see someone toss some numbers on the table.

7 hours ago, M.Yurizaki said:

The question seems to imply Intel is advertising using eGPUs,

https://www.intel.ca/content/www/ca/en/io/thunderbolt/thunderbolt-3-usb-c-that-does-it-all-infographic.html?wapkw=thunderbolt&_ga=2.248076935.1037104860.1539652728-519787568.1529565926
[attached: screenshot of Intel's Thunderbolt 3 infographic]

Intel makes the CPUs, Intel advertises TB3 with eGPU support, and Intel helps build thin-and-lights with TB3.
You can't say Intel doesn't have the big picture in mind; they surely do, they'd just rather not talk about it.

Which leads to another question: do they state anywhere on the U-series product pages that this "PCIe" is different? (I didn't see it.)

7 hours ago, M.Yurizaki said:

PCIe is PCIe is PCIe. PCIe lanes from the PCH perform the same as lanes from the CPU.


TechPowerUp was testing all of that through a desktop PCH only.
I've had people tell me that even the latency induced by PLX chips can hurt performance (>= 10%).

So I wonder whether HSIO multiplexing and Thunderbolt 3 make the latency/overall performance worse.

This is where I'm short on data.

As you can see, LTT observed considerably more performance loss than TechPowerUp did, and I'm interested in whether it's more CPU-induced or link-induced.
Performance doesn't run away for no reason, right? Something must be happening.

