
NVIDIA Fires Shots at AMD’s 7nm Tech - Claims "Can Create Most Energy-efficient GPU in the World Anytime"

3 minutes ago, Stefan Payne said:

Is there anyone in the world that nVidia hasn't pissed off yet??


I mean, they already pissed off Sony and Microsoft. Those are two big companies that might not touch them any time soon (in consoles), or ever again...

maybe themselves, and even then...


9 hours ago, GoodBytes said:

Yes, it is easy to create the most efficient GPU or anything of all time... the problem is that it also needs to be good. If it's a potato consuming 0.5W that has trouble playing Minesweeper... then, well, no one cares.

Hey. My Raspberry Pie is offended by that comment!


Picometer dies incoming 

Phone 1 (Daily Driver): Samsung Galaxy Z Fold2 5G

Phone 2 (Work): Samsung Galaxy S21 Ultra 5G 256gb

Laptop 1 (Production): 16" MBP2019, i7, 5500M, 32GB DDR4, 2TB SSD

Laptop 2 (Gaming): Toshiba Qosmio X875, i7 3630QM, GTX 670M, 16GB DDR3


43 minutes ago, cj09beira said:

Considering how much cooperation it took to get 7nm parts, my guess is that the differences are quite small, as they would have had to change quite a bit to get it to work. Vega 20 was the first 7nm part made by AMD, though, so there is a good chance that had they done it later on it would be better, due to the experience they gained from making it.

Well it really depends on what the goal is for moving to 7nm and shrinking the node for the original arch design. Are you shrinking to improve power efficiency, to reduce die area (cost), or to create a higher-performance product (which comes in two possible ways)? To increase performance you either use the better node to increase clocks (Vega 20) or you increase the size of the GPU, basically the transistor count, within the same die area. Vega 20 is the former, increasing clocks.

 

Increasing clocks only works so well; it doesn't solve resource utilization problems and can even make them more pronounced. More pronounced doesn't mean worse, just that it may make an issue that was once hidden by other factors more evident.

 

Node sizes don't necessarily have a big impact on architecture design either. There are still some fundamentally important factors, but unless you're making a die akin to Volta, where dies are made as big as technologically possible, a smaller node doesn't directly translate to a different architecture design. Vega, as an example, could likely be improved in many ways that would result in a larger die; those improvements probably weren't deemed to justify the increased cost of a larger die.

 

Who knows; working within a limit does influence design, and if that limit is removed or shifted, choices made previously might turn out significantly different under the new conditions. Vega 20 to me is just a highly overclocked Vega with some firmware and silicon bugs fixed. I'm reluctant to use it as a metric for how good 7nm products can be, but I'm also reluctant to expect much more than Vega 20 within the same die area and power budget, i.e. if a 300mm² Vega 20 @ 300W is 10 TFLOPs, is a 300mm² Navi @ 300W 10, 12 or 15 TFLOPs?
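(A rough way to see the clocks-vs-width trade-off being described here: peak FP32 throughput is just shader count × 2 ops per clock (FMA) × clock speed. The sketch below uses approximate public shader counts and boost clocks for Vega 10 and Vega 20; treat the exact figures as ballpark assumptions, not specs.)

```python
# Rough illustration only: peak FP32 TFLOPs = shaders * 2 (FMA ops per clock) * clock.
# Shader counts and boost clocks are approximate public figures, not exact specs.

def peak_fp32_tflops(shaders, clock_ghz):
    """Theoretical peak FP32 throughput in TFLOPs (2 ops per shader per clock via FMA)."""
    return shaders * 2 * clock_ghz / 1000.0

vega_64  = peak_fp32_tflops(4096, 1.55)  # Vega 10: more shaders, lower clock  -> ~12.7 TFLOPs
radeon_7 = peak_fp32_tflops(3840, 1.75)  # Vega 20: fewer shaders, higher clock -> ~13.4 TFLOPs

print(f"Vega 64    ~{vega_64:.1f} TFLOPs")
print(f"Radeon VII ~{radeon_7:.1f} TFLOPs")
```

Which lines up with the post above: Vega 20's extra throughput comes almost entirely from clocks rather than from a wider GPU, so it inherits the same utilization behaviour.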


9 hours ago, leadeater said:

AMD is guilty of "Poor Volta" though, I think they learned from that mistake. What was most amusing about that to me was that Volta was never a gaming GPU lol.

Having had a discussion about this recently: Raja & Crew were technically correct about the Vega vs Volta comparisons. The problem is that in the gaming space it was Vega vs Pascal, and they simply lacked the proper balance in the GPU to compete within the same power envelope.

 

Vega bests Volta in most things, also being massively more effective per area and per watt. There's a reason Nvidia didn't roll out more than the GV100. However, that means AMD more or less spent money mocking a product that never launched to anyone that really cares about marketing. That's the real facepalming part about it.

 

Ampere, Nvidia's 7nm compute GPU, will go up against Vega 20. It'll be interesting to see what they've added.


I'd have more confidence in AMD if they would release an entire graphics card line at once, instead of one or two versions of the same model at a time.

 

Get your shit together AMD.

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


18 minutes ago, leadeater said:

Well it really depends on what the goal is for moving to 7nm and shrinking the node for the original arch design. Are you shrinking to improve power efficiency, to reduce die area (cost), or to create a higher-performance product (which comes in two possible ways)? To increase performance you either use the better node to increase clocks (Vega 20) or you increase the size of the GPU, basically the transistor count, within the same die area. Vega 20 is the former, increasing clocks.

 

Increasing clocks only works so well; it doesn't solve resource utilization problems and can even make them more pronounced. More pronounced doesn't mean worse, just that it may make an issue that was once hidden by other factors more evident.

 

Node sizes don't necessarily have a big impact on architecture design either. There are still some fundamentally important factors, but unless you're making a die akin to Volta, where dies are made as big as technologically possible, a smaller node doesn't directly translate to a different architecture design. Vega, as an example, could likely be improved in many ways that would result in a larger die; those improvements probably weren't deemed to justify the increased cost of a larger die.

 

Who knows; working within a limit does influence design, and if that limit is removed or shifted, choices made previously might turn out significantly different under the new conditions. Vega 20 to me is just a highly overclocked Vega with some firmware and silicon bugs fixed. I'm reluctant to use it as a metric for how good 7nm products can be, but I'm also reluctant to expect much more than Vega 20 within the same die area and power budget, i.e. if a 300mm² Vega 20 @ 300W is 10 TFLOPs, is a 300mm² Navi @ 300W 10, 12 or 15 TFLOPs?

Vega 20 has the PCIe-over-IF bridge tech and very beefy FP units. There's room for more TFLOPs on the node, but that comes from design improvements. In the case of gaming, you can save a lot of die space by cutting it down to a gaming GPU. We'll see what happens. We can reasonably expect a ~230mm² gaming 7nm dGPU to come in between the Vega 56 and 64 just from the node improvements and the stripped-out compute.
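(A back-of-the-envelope check on that ~230mm² figure, not a real projection: the die size, scaling factor and compute savings below are all assumptions chosen only to show how such an estimate could be arrived at.)

```python
# Back-of-the-envelope die-area sketch; every input here is an assumption, not a spec.

vega10_area_mm2 = 495            # roughly Vega 10's 14nm die size
assumed_node_scaling = 0.55      # assumed 14nm -> 7nm area factor for the same logic
assumed_compute_savings = 0.15   # assumed fraction freed by stripping compute-heavy blocks

estimate = vega10_area_mm2 * assumed_node_scaling * (1 - assumed_compute_savings)
print(f"~{estimate:.0f} mm^2")   # ~231 mm^2, i.e. in the neighbourhood the post describes
```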

 

The question is what Big Navi will do. Big Pascal got canceled and Vega was used for both tasks. AMD tried to cover both by making a compute design and adding a new geometry pathway (NGG Fast Path) that was supposed to handle the gaming side, but it appears that was busted at the silicon level. We'll end up seeing interesting aspects of how AMD addresses it, because the Xbox One X exists and that GPU solves a lot of the issues with GCN. If they can roll out Big Navi as a multi-role die with enough gaming balance (like Nvidia does), they can still fill out their compute and professional lines while covering high-end gaming.


7 hours ago, ravenshrike said:

Now do compute. You know, the area in which power efficiency actually matters to the end consumer.

This argument only works when you compare Nvidia's gaming cards vs AMD's, not when you compare their actual compute products, and even then only in some compute workloads and synthetics.

 

Like, AMD is not great at Geekbench:

[attached image: Geekbench GPU compute benchmark chart]

 

And power consumption does matter in the gaming space as well; mobile computing is a thing. Not wanting a 400W space heater for a graphics card is a thing. Paying your electric bill and caring about climate change is a thing.
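(For what the electric-bill point works out to in practice, a minimal sketch; the wattage gap, gaming hours and price per kWh are all assumptions to adjust for your own situation.)

```python
# Minimal running-cost sketch: yearly cost of an extra ~120W of GPU draw.
# All inputs are assumptions, not measurements of any particular card.

extra_watts = 120        # assumed power gap between two cards under gaming load
hours_per_day = 3        # assumed daily gaming time
price_per_kwh = 0.25     # assumed electricity price per kWh

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"{extra_kwh_per_year:.0f} kWh/year, about {extra_kwh_per_year * price_per_kwh:.2f} per year")
```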


1 hour ago, Taf the Ghost said:

Having had a discussion about this recently: Raja & Crew were technically correct about the Vega vs Volta comparisons. The problem is that in the gaming space it was Vega vs Pascal, and they simply lacked the proper balance in the GPU to compete within the same power envelope.

 

Vega bests Volta in most things, also being massively more effective per area and per watt. There's a reason Nvidia didn't roll out more than the GV100. However, that means AMD more or less spent money mocking a product that never launched to anyone that really cares about marketing. That's the real facepalming part about it.

 

Ampere, Nvidia's 7nm compute GPU, will go up against Vega 20. It'll be interesting to see what they've added.

I think that was mostly due to them expecting the NGG to be working well, which would have put Vega on top, but it did not.

About the Xbox One X, what does it have that you consider special? It's the perfect config in terms of compute and graphics performance, but it can't be scaled up without more shader engines or faster blocks.

18 minutes ago, Chett_Manly said:

This argument only works when you compare Nvidia's gaming cards vs AMD's, not when you compare their actual compute products, and even then only in some compute workloads and synthetics.

 

Like, AMD is not great at Geekbench:

[attached image: Geekbench GPU compute benchmark chart]

 

And power consumption does matter in the gaming space as well; mobile computing is a thing. Not wanting a 400W space heater for a graphics card is a thing. Paying your electric bill and caring about climate change is a thing.

Especially with the RTX cards having tensor cores, it will depend on what kind of compute we are talking about: Vega 20 will be miles faster in FP64, in FP32 it should be competitive, and in FP16 the tensor cores should mean Nvidia is ahead. The challenge of comparing compute performance is that AMD is still suffering quite a bit from a lack of optimization; support for ROCm is growing very well, but it's still missing from some of the most used benches.
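(To put rough numbers on the FP64/FP32/FP16 split, a small sketch applying each card's FP64 rate ratio to its FP32 peak. The shader counts, clocks and ratios are approximate public figures, and the plain 2× FP16 path shown for the RTX card deliberately ignores the much faster tensor-core path the post mentions.)

```python
# Ballpark peak throughput per precision. Shader counts, clocks and FP64 ratios are
# approximate public figures; the FP16 number for the RTX card is the plain shader
# path only - its tensor cores push FP16 matrix throughput far higher.

def peaks(shaders, clock_ghz, fp64_ratio):
    fp32 = shaders * 2 * clock_ghz / 1000    # TFLOPs, 2 ops per clock via FMA
    return {"fp64": fp32 * fp64_ratio, "fp32": fp32, "fp16": fp32 * 2}

cards = {
    "Radeon VII (Vega 20)": peaks(3840, 1.75, fp64_ratio=1 / 4),   # FP64 capped at 1:4 on the consumer card
    "RTX 2080 Ti (Turing)": peaks(4352, 1.55, fp64_ratio=1 / 32),  # consumer Turing runs FP64 at 1:32
}

for name, p in cards.items():
    print(name, {k: round(v, 1) for k, v in p.items()})
```

Which is roughly the post's point: a big FP64 gap in Vega 20's favour, FP32 close to a wash, and FP16 depending on whether the tensor-core path applies to the workload.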


1 hour ago, Jack_of_all_Trades said:

I agree then; I would love nvidia to get off its shitty high horse, and for that we need as much competition as possible.

Naa, they were always on the high horse.

 

And for whatever reason they were supported by the media. That is how they gained popularity back in the day, as they hyped useless features like "32-bit rendering", which was not viable back in the days of the Voodoo 3 and TNT2. The performance penalty was just too big and the framerates were on the low end anyway.

 

The high horse was, for example, old interviews about the implementation of settings for the TV encoder - which they themselves specified in the reference designs they did - where they said it was a matter for the board manufacturers...

Yeah, if you bring out a new driver every other week or so and have to rewrite big amounts of it, that's a very viable idea. Thus only the ELSA driver implemented that - but they were 3-6 months behind.

So for normal people that meant investing in an external tool, the "TV Tool", to allow setting the size of the picture on the TV screen...

 


Anyway, the bottom line was:
nVidia never was the "nice guy"; they were always the way they are right now. Nothing has changed.

Well, they might have become bolder and worse...

But they wouldn't have gotten there without support from the game magazines and all the other online publications of the mid/late 90s (the Riva TNT was released in 1998; the first viable 3D accelerator, the Voodoo Graphics, was released in 1996 - everything before that was useless junk, called a "3D decelerator", or didn't support filtering).

"Hell is full of good meanings, but Heaven is full of good works"


1 hour ago, Chett_Manly said:

And power consumption does matter in the gaming space as well; mobile computing is a thing. Not wanting a 400W space heater for a graphics card is a thing. Paying your electric bill and caring about climate change is a thing.

a) Could you stop the 400W space heater FUD? Because that is just not true. 400W is the whole system, not just the graphics card.

b) Nobody really cares about the power consumption (well, only a handful of people); for the rest it's just some pseudo-argument to use against "the other side".

Wanna bet that Navi will be very competitive in power consumption?
And what would you say if Navi performed the same as a 2070 but consumed half the power? Is power consumption still important? Or would you then argue about the superior driver quality from nVidia or other nonsense?

"Hell is full of good meanings, but Heaven is full of good works"


Looks like these corporate pigs have been scamming us all along if they're going to brag that they have features not released to the public yet.

CPU i7 4960x Ivy Bridge Extreme | 64GB Quad DDR-3 RAM | MBD Asus x79-Deluxe | RTX 2080 ti FE 11GB |
Thermaltake 850w PWS | ASUS ROG 27" IPS 1440p | | Win 7 pro x64 |


2 hours ago, Stefan Payne said:

a) Could you stop the 400W space heater FUD? Because that is just not true. 400W is the whole system, not just the graphics card.

b) Nobody really cares about the power consumption (well, only a handful of people); for the rest it's just some pseudo-argument to use against "the other side".

Wanna bet that Navi will be very competitive in power consumption?
And what would you say if Navi performed the same as a 2070 but consumed half the power? Is power consumption still important? Or would you then argue about the superior driver quality from nVidia or other nonsense?

About 300W is still from the GPU alone, and the Radeon VII's heat and noise was a valid complaint most reviewers had, despite the card having a triple-fan cooler.

No, plenty of people pay for their own power, and companies care about power consumption; it isn't just something to use "against the other side". There would be more small form factor cards from AIBs if AMD GPUs consumed less power, and more laptop OEMs would be using Radeon GPUs.

 


37 minutes ago, Blademaster91 said:

About 300W is still from the GPU alone, and the Radeon VII's heat and noise was a valid complaint most reviewers had, despite the card having a triple-fan cooler.

No, plenty of people pay for their own power, and companies care about power consumption; it isn't just something to use "against the other side". There would be more small form factor cards from AIBs if AMD GPUs consumed less power, and more laptop OEMs would be using Radeon GPUs.

 

Point is it's not 300W extra, at most it's like 50W extra, which is not much. P.S. AIB cards from Nvidia can use the same amount of power.


7 hours ago, Chett_Manly said:

And power consumption does matter in the gaming space as well; mobile computing is a thing. Not wanting a 400W space heater for a graphics card is a thing. Paying your electric bill and caring about climate change is a thing.

Geekbench is not an appropriate benchmark for compute. Look at SPECviewperf and other specific tests using actual compute algorithms on known test sample data sets.


8 hours ago, Taf the Ghost said:

Vega bests Volta in most things, also being massively more effective per area and per watt.

It doesn't though, and on top of that Volta has full FP64 capability, which Vega did not; that was only added later in Vega 20. Volta has higher raw FP32 and FP16 with nearly double the FP64, plus Volta had nearly twice the memory bandwidth until Vega 20. Volta also obviously supports CUDA, which is an unquestionable advantage until the OpenCL development community and resources get large enough.

 

With the additional Tensor capabilities there is actually nothing compelling or better about Vega other than price.


1 minute ago, leadeater said:

With the additional Tensor capabilities there is actually nothing compelling or better about Vega other than price.

OSX compatibility.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


3 minutes ago, leadeater said:

It doesn't though, and on top of that Volta has full FP64 capability, which Vega did not; that was only added later in Vega 20. Volta has higher raw FP32 and FP16 with nearly double the FP64, plus Volta had nearly twice the memory bandwidth until Vega 20. Volta also obviously supports CUDA, which is an unquestionable advantage until the OpenCL development community and resources get large enough.

 

With the additional Tensor capabilities there is actually nothing compelling or better about Vega other than price.

Huh, I clearly wasn't awake when I made that comment originally. There's too many Vegas and I got them confused together. My bad.


19 hours ago, BiG StroOnZ said:

 

Source: https://www.eteknix.com/nvidia-fire-shots-at-amds-7nm-graphics-technology/

 

Intel's entrance into the dGPU space can't come soon enough (2020). It's one thing smacking around AMD with your product stack's performance alone. It's another thing being entirely egotistical about it and getting too comfortable or content. Either way, I'm wondering if this overconfidence is a bit of foreshadowing of what to expect from another future architecture from NVIDIA, like the supposed Ampere. 

Jensen needs to put up or shut up.

 

If he says that NVIDIA can create the most energy efficient GPU in the world "at any time"? Well then fucking do it.

 

Also, no shit it's easier for them, their R&D budget is massive compared to AMD's.

 

Now, I have no preference in terms of tech - I've bought both AMD and NVIDIA GPUs. I want to see AMD compete with and beat NVIDIA on a level playing field. I hope AMD comes out of left field with Navi and really just hits a home run.

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 


6 minutes ago, Drak3 said:

OSX compatibility.

Only very recently supported; also, its official name is just macOS now, not Mac OS X.

 

That's not even relevant really either; if you're comparing to a Volta card then macOS isn't an option for that demographic anyway. Not unless you want to deploy a bunch of Docker containers, which means you may as well have just used Linux in the first place.


Just now, leadeater said:

Only very recently supported; also, its official name is just macOS now, not Mac OS X.

Not "very recently." Supported since High Sierra.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


19 minutes ago, Drak3 said:

Not "very recently." Supported since High Sierra.

Well that's where it gets questionable, because there were significant problems until 10.13.2, with issues even after that. You could only get it in the iMac Pro (as far as I know) until 10.13.4, when official eGPU support for Vega was added. To this day there are still fan controller and power table issues for Vega under macOS, but at least some very nice people have created utilities to fix that.


CEO/PR shit talks competition while talking up their own product.   Pretty sure that's par for the course, what am I missing?

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


8 minutes ago, mr moose said:

CEO/PR shit talks competition while talking up their own product.   Pretty sure that's par for the course, what am I missing?

Nothing, but the topic was posted here on LTT, which is significantly pro-AMD, so now we will have the obligatory several pages of people bashing NVIDIA to even out the score ?

CPU: i7 6950X  |  Motherboard: Asus Rampage V ed. 10  |  RAM: 32 GB Corsair Dominator Platinum Special Edition 3200 MHz (CL14)  |  GPUs: 2x Asus GTX 1080ti SLI 

Storage: Samsung 960 EVO 1 TB M.2 NVME  |  PSU: In Win SIV 1065W 

Cooling: Custom LC 2 x 360mm EK Radiators | EK D5 Pump | EK 250 Reservoir | EK RVE10 Monoblock | EK GPU Blocks & Backplates | Alphacool Fittings & Connectors | Alphacool Glass Tubing

Case: In Win Tou 2.0  |  Display: Alienware AW3418DW  |  Sound: Woo Audio WA8 Eclipse + Focal Utopia Headphones


52 minutes ago, Lathlaer said:

Nothing, but the topic was posted here on LTT, which is significantly pro-AMD, so now we will have the obligatory several pages of people bashing NVIDIA to even out the score ?

I'm not bashing on Nvidia, I just think they are an evil company purposefully withholding the best they can offer and charging more than the products are worth... hmm, I guess that does sound like I'm bashing Nvidia ?

 

Seriously though, I don't think the forum is pro-AMD; I think it's more anti-Intel and anti-Nvidia than that. Though that hate is a tad bipolar when you see comments hailing Intel's entry into the GPU market lol. We should all keep in mind Intel sells $2k+ CPUs that aren't for servers; Intel is sure to be the shining knight that comes in to save you... for a price.


This topic is now closed to further replies.

