
Intel's new CEO Bob Swan rips off the band-aid to reveal some of the chip giant’s issues Thursday

poochyena
54 minutes ago, Feranoks said:

It's doubtful that Intel or AMD would want too much reliance on the CPU for GPU performance. I think it would be a problem for both if they basically locked out support for other brands.

 

Right now there is no issue if you want to use an Intel CPU and an AMD GPU. Even if Intel becomes big in the GPU market, adding features (things that make a significant impact in general use) that only work with their own CPUs would be problematic for the GPU market as a whole.

At a certain point, if the CPU and GPU become bound together in such a way, you're just a step away from calling it integrated graphics, and graphics cards become obsolete.

Yep, the first thing that would happen if you made the CPU and GPU reliant on each other's specifics is that benchmark results would be skewed across several more dimensions, making things look bad for everyone.

 

Companies need to be seen as the clear winner legitimately, because they know that when that doesn't happen, people see through the excuses of a second-rate company making up for a poorly performing product.

Grammar and spelling are not indicative of intelligence/knowledge.  Not having the same opinion does not always mean a lack of understanding.


2 hours ago, Feranoks said:

It's doubtful that Intel or AMD would want too much reliance on the CPU for GPU performance. I think it would be a problem for both if they basically locked out support for other brands.

 

Right now there is no issue if you want to use an Intel CPU and an AMD GPU. Even if Intel becomes big in the GPU market, adding features (things that make a significant impact in general use) that only work with their own CPUs would be problematic for the GPU market as a whole.

At a certain point, if the CPU and GPU become bound together in such a way, you're just a step away from calling it integrated graphics, and graphics cards become obsolete.

Infinity Fabric-style links for the CPU-GPU connection are coming, and fast. The candidates are IF, CCIX, and Gen-Z, and AMD is on board with all of them, though we haven't seen any official word on whether Epyc 2 / Ryzen 3000 will support any of them. Xilinx did hint at Epyc supporting CCIX. Note that the current Vega + IF setup doesn't run over PCIe but through the link on top of the card; the GPU might be able to support it via PCIe, but they might be waiting for Epyc's release to talk about that.

Now, much faster links like these would change gaming quite a bit, though AMD is probably too small to push it alone (it would probably mean quite a few changes to game engines). That's why I believe the main ones will end up being CCIX and Gen-Z, as those aren't locked to a single vendor.


1 hour ago, cj09beira said:

Now, much faster links like these would change gaming quite a bit

In what way? Current GPUs aren't significantly affected by existing PCIe bandwidth. What would you use that bandwidth for?

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 minutes ago, porina said:

In what way? Current GPUs aren't significantly affected by existing PCIe bandwidth. What would you use that bandwidth for?

The problem is that currently the bandwidth and latency are so bad that the link can't be used for anything meaningful. With the much better connection the other protocols offer, games could, for example, stream much more data in real time to and from the GPU. That would allow games to use more complex assets, streamed to the GPU as needed, instead of having all assets loaded at once and then kept there. There are probably much cooler things that could be done, but I'm no dev. (Rough sketch of the idea below.)
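A minimal sketch of that streaming idea in Python (purely illustrative: the class, asset names, and sizes are all made up, and least-recently-used eviction stands in for whatever a real engine would actually do with a fixed vram budget):

```python
from collections import OrderedDict

class VramCache:
    """Toy model of on-demand asset streaming: keep only the assets the
    current frame needs resident, evicting the least recently used ones,
    instead of preloading everything for the level."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.resident = OrderedDict()  # asset name -> size in MB
        self.streamed_mb = 0           # total traffic over the CPU-GPU link

    def request(self, asset, size_mb):
        if asset in self.resident:
            self.resident.move_to_end(asset)  # hit: no upload needed
            return
        # Miss: evict LRU assets until the new one fits, then "upload" it.
        while sum(self.resident.values()) + size_mb > self.capacity_mb:
            self.resident.popitem(last=False)
        self.resident[asset] = size_mb
        self.streamed_mb += size_mb

cache = VramCache(capacity_mb=4096)  # pretend 4GB card
for frame in range(3):
    for asset, size in [("rock_8k", 64), ("tree_8k", 48), ("hero_8k", 96)]:
        cache.request(asset, size)
print(cache.streamed_mb, "MB uploaded")  # 208: later frames are all hits
```

The point being that only misses cost bus traffic; a faster link just makes each miss cheaper, so you could afford asset sets that don't all fit in vram at once.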


If AMD slips up now, it'll be one of the biggest flops ever in the CPU industry.

 

Intel is not going to die right away, but it's looking like it will take a significant hit.


If my answer got you to your solution, make sure to 'Mark Resolved'!
( / . _ . / )



7 minutes ago, cj09beira said:

The problem is that currently the bandwidth and latency are so bad that the link can't be used for anything meaningful. With the much better connection the other protocols offer, games could, for example, stream much more data in real time to and from the GPU. That would allow games to use more complex assets, streamed to the GPU as needed, instead of having all assets loaded at once and then kept there. There are probably much cooler things that could be done, but I'm no dev.

PCIe 3.0 x16 is 15.75GB/s (and 4.0 would double that). Dual channel DDR4-2666 is about 42.7GB/s. 2080 Ti onboard ram bandwidth is 616GB/s. (Quick math at the end of this post.)

 

Even if the GPU had much faster connectivity externally, what else is there to talk to at those high speeds? If you're not keeping the data on the card, it either has to sit in system ram and steal a chunk of that, or on storage, whose bandwidth is far, far lower.

 

The only use I can think of, which isn't for mainstream gamers, is to allow multiple cards to share vram between them. Two cards would get you effectively double the vram. I think nvlink on the consumer cards is 50GB/s, from memory. The pro cards have a higher-spec version, I believe, but I never looked at it in detail.
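Quick math behind those figures (a back-of-the-envelope sketch; the transfer rates are the usual spec values, and these are peak theoretical numbers, not sustained throughput):

```python
# Peak theoretical bandwidths, in GB/s.

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, 16 lanes.
pcie3_x16 = 8 * (128 / 130) / 8 * 16     # ~15.75 GB/s
pcie4_x16 = pcie3_x16 * 2                # PCIe 4.0 doubles the transfer rate

# Dual channel DDR4-2666: 2666 MT/s * 8 bytes per channel * 2 channels.
ddr4_2666_dual = 2666 * 8 * 2 / 1000     # ~42.7 GB/s

# RTX 2080 Ti: 14 Gb/s GDDR6 on a 352-bit bus.
gddr6_2080ti = 14 * 352 / 8              # 616 GB/s

print(f"PCIe 3.0 x16:   {pcie3_x16:6.2f} GB/s")
print(f"PCIe 4.0 x16:   {pcie4_x16:6.2f} GB/s")
print(f"DDR4-2666 dual: {ddr4_2666_dual:6.1f} GB/s")
print(f"2080 Ti GDDR6:  {gddr6_2080ti:6.1f} GB/s")
```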



7 hours ago, Feranoks said:

Right now there is no issue if you want to use an Intel CPU and an AMD GPU.

In the future it depends, as more capable interfaces than PCIe are on the way.

It is rumored that AMD's Rome might support Gen-Z and/or CCIX.

Xilinx has already hinted that AMD might support CCIX in Rome:

https://www.servethehome.com/xilinx-alveo-u280-launched-possibly-with-amd-epyc-ccix-support/

 

That is a free and open standard. If both use it, there is no problem.

It's also possible that Intel does something based on the official standard and extends it with some proprietary stuff while retaining compatibility with the free/open standard.

 

7 hours ago, Feranoks said:

At a certain point, if the CPU and GPU become bound together in such a way, you're just a step away from calling it integrated graphics, and graphics cards become obsolete.

No, I'm saying that there are some very potent STANDARD interfaces coming that are way better than PCIe and are already being worked on by various other companies, like Xilinx, that have to be supported.

 

We all know what nVidia thinks about free and open standards.

They even went on stage, put a fence around a free and open standard that was being worked on, and claimed it for themselves. After a couple of years they at least opened up a bit to the free/open standard that is part of the connection specification...

"Hell is full of good meanings, but Heaven is full of good works"


4 hours ago, porina said:

In what way? Current GPUs aren't significantly affected by existing PCIe bandwidth. What would you use that bandwidth for?

The new interfaces are, for example, cache coherent and possibly offer lower latency, plus higher bandwidth. IIRC up to 25GT/s per lane/link, so comfortably above PCIe 3.0's 8GT/s and even PCIe 4.0's 16GT/s. (Rough per-lane math at the end of this post.)

 

 

The advantage would be that something like HBCC gets a bit more performant, and streaming textures from main memory gets less shitty, so that when the vram is full, GPU performance doesn't totally tank or the game crash.

 

It's a bit of a chicken-and-egg problem, and one rooted in the disadvantages of PCIe (IIRC, rather high latency)...
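Rough per-lane math for that comparison (a sketch; it assumes the 25GT/s mode keeps PCIe-style 128b/130b encoding, which may not hold exactly for every protocol):

```python
# Per-lane peak bandwidth in GB/s, assuming 128b/130b encoding at every rate.
def lane_gb_per_s(transfer_rate_gt):
    return transfer_rate_gt * (128 / 130) / 8  # GT/s -> GB/s after encoding

for name, rate in [("PCIe 3.0", 8), ("PCIe 4.0", 16), ("25GT/s link", 25)]:
    per_lane = lane_gb_per_s(rate)
    print(f"{name}: {per_lane:.2f} GB/s per lane, {per_lane * 16:.1f} GB/s at x16")
```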



24 minutes ago, Stefan Payne said:

The advantage would be that something like HBCC gets a bit more performant, and streaming textures from main memory gets less shitty, so that when the vram is full, GPU performance doesn't totally tank or the game crash.

If you start putting more assets in system ram, then you also need more system ram to make a difference. You're just shifting the problem from one place to another.

 

27 minutes ago, Stefan Payne said:

It's a bit of a chicken-and-egg problem, and one rooted in the disadvantages of PCIe (IIRC, rather high latency)...

I'm not sure latency is a big problem in this area. If the assets are predictable they could be pre-loaded to some extent. Latency would matter more if there was much more random small data access going on.

 

I suppose I can see an argument for bandwidth if, for example, you bought a 4GB card and wanted to run 8GB of effective vram by using system ram to supplement it. The entire 8GB would need to be accessible every frame, so the faster you can swap in the other 4GB, the better (rough numbers in the sketch below). The problem then becomes that you need that 4GB spare in system ram, and a budget builder who picked the 4GB card might not have it. Also, even if you make the bus interface much faster, you're going to really start eating into ram bandwidth if you do that. Dual channel 3200 works out to around 50GB/s, and the CPU and other tasks need their cut of that bandwidth too. Again, a budget builder might opt for slower ram, so that could be the limit.

 

For now I think the best strategy remains to keep the data within the limits of the video card so you don't have to worry about all this mess. Get a card with more vram if you need it. There are reasons for having high bandwidth interconnects, I just really don't think gaming is one of them.
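A rough sketch of that frame-budget math (the link speeds are the figures quoted in this thread; treating a full 4GB swap per frame as the worst case, with a 60 fps target chosen purely for illustration):

```python
# How long does swapping the "other" 4GB over the bus take, versus a
# 60 fps frame budget?
frame_budget_ms = 1000 / 60          # ~16.7 ms per frame at 60 fps
swap_gb = 4.0                        # the half of the 8GB not on the card

links_gbps = {
    "PCIe 3.0 x16": 15.75,
    "PCIe 4.0 x16": 31.5,
    "nvlink (consumer)": 50.0,
}
for name, bw in links_gbps.items():
    ms = swap_gb / bw * 1000
    print(f"{name}: {ms:6.1f} ms for 4GB = {ms / frame_budget_ms:4.1f} frame budgets")
```

Even at nvlink-class speeds, a full 4GB swap costs nearly five frames' worth of time, so the spare data could only ever be replaced incrementally over many frames, not per frame.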


