AMD: Future GPUs will significantly boost performance in 4K resolutions

I know it's not power consumption but Thermal Design Power. However, Nvidia's actual power usage is usually below it (at least for Maxwell) and AMD's is way above it. Same with CPUs. Haswells consume less than their TDP (an i5-4690K system draws about the chip's TDP at the wall), while AMD CPUs can exceed their TDP two- to threefold in actual power consumption.

There are different ways of measuring TDP and power consumption... unless AMD gives Intel their CPUs pre-release and has Intel give them a TDP measurement, or vice versa, the two are never going to be off by the same margin... Same with Nvidia... It's not even fair to compare measured power consumption against the on-the-box TDP; it's not a scientific comparison, not apples to apples at all. If you do, and believe that's a fair way to compare, you've got no scientific knowledge whatsoever.

5820K @ 4GHz / 16GB (4x4) DDR4 / MSI X99 SLI+ / Corsair H105 / R9 Fury X / Corsair RM1000i / 128GB SM951 / 512GB 850 Evo / 1+2TB Seagate Barracudas


There are different ways of measuring TDP and power consumption... unless AMD gives Intel their CPUs pre-release and has Intel give them a TDP measurement, or vice versa, the two are never going to be off by the same margin... Same with Nvidia... It's not even fair to compare measured power consumption against the on-the-box TDP; it's not a scientific comparison, not apples to apples at all. If you do, and believe that's a fair way to compare, you've got no scientific knowledge whatsoever.

I think of TDP as the amount of power the GPU dissipates as heat, and therefore a rough proxy for its efficiency. From my understanding, the R9 3xx series, whilst having the same TDP rating as the R9 2xx series, will perform a lot better (this is still hypothetical, as they haven't been released and benchmarked), which would show that ATI has improved the efficiency of its GPUs. (I call the graphics division ATI to distance it from the failure that is the AMD FX CPU; even though it was bought by AMD, it is ATI that is doing the R&D.)

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


 

Make that "impatient computer illiterates" and we have a deal.

 

 

 

http://www.prisguide.no/produkt/amd-radeon-r9-390x-225129

 

Prisguide have never ever listed a bogus graphics card.

 

I'm not sure; a while back there was an AMD employee who referred to the 380X as the "King of the Hill", which would mean the highest of the high.

 

 

It's gonna be more than a 45% performance gap over the 290X. If all the rumors are correct, we're looking at probably closer to a 60% gap. It's nearly a 1.5x increase in stream processors; it's a giant increase in VRAM bandwidth (and let's not forget AMD started using delta color compression, like Nvidia, with the 285); an updated architecture (and we should know from Nvidia just how much an architecture change can matter); and a vastly improved reference cooler to stop overheating and help overclocks. Honestly, this thing could destroy everything currently on the market if the rumors are true and accurate.
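Back-of-envelope on those numbers (a minimal sketch; the 290X specs are its real shipping specs, but everything for the new card was rumor at the time, so treat those values as assumptions):

# Naive scaling estimate: rumored next-gen flagship vs. the R9 290X.
# 290X specs are real; the rumored card's specs are exactly that - rumors.
r9_290x = {"stream_processors": 2816, "bandwidth_gbs": 320}
rumored = {"stream_processors": 4096, "bandwidth_gbs": 640}  # rumored HBM doubling

sp_gain = rumored["stream_processors"] / r9_290x["stream_processors"]
bw_gain = rumored["bandwidth_gbs"] / r9_290x["bandwidth_gbs"]
print(f"Stream processors: {sp_gain:.2f}x")  # ~1.45x
print(f"Memory bandwidth:  {bw_gain:.2f}x")  # 2.00x
# Performance won't scale linearly with either figure; architecture,
# clocks, and cooling (hence sustained boost) all shift the result.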

 

That's the TDP they're rated for, not power consumption. Linus has done videos on this before; there's even one on Techquickie all about this... go look it up...

 

I'm just trying to give realistic numbers, based on stream processors alone. I mean, everyone would love to see a 60% increase in performance from generation to generation, but that hasn't happened recently, if ever.


There are different ways of measuring TDP and power consumption... unless AMD gives Intel their CPUs pre-release and has Intel give them a TDP measurement, or vice versa, the two are never going to be off by the same margin... Same with Nvidia... It's not even fair to compare measured power consumption against the on-the-box TDP; it's not a scientific comparison, not apples to apples at all. If you do, and believe that's a fair way to compare, you've got no scientific knowledge whatsoever.

 

I know what TDP is, and I'm not saying they should be identical, because there is no standardized benchmark. I was stating the simple fact that TDP and power consumption roughly match for Intel, whereas for AMD they don't match in the slightest. Meaning TDP for AMD isn't a good estimator of actual power consumption, which makes it an utterly useless spec.
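To put that comparison in concrete terms, here's a minimal sketch of what I mean. The rated TDPs are the real box numbers, but the "measured" wattages are placeholders I made up purely for illustration, not test results:

# Rated TDP vs. measured draw. The TDPs are real rated figures;
# the measured values are hypothetical placeholders, not actual data.
rated_tdp = {"i5-4690K": 88, "FX-6300": 95, "GTX 980": 165, "R9 290X": 290}
measured_watts = {"i5-4690K": 80, "FX-6300": 125, "GTX 980": 160, "R9 290X": 300}

for chip, tdp in rated_tdp.items():
    print(f"{chip}: measured/TDP = {measured_watts[chip] / tdp:.2f}")
# A ratio near 1.0 means TDP is a usable estimator; far above 1.0 means it isn't.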

 

Wow, glad we aren't being incredibly dense about something trivial, so I'm not forced to defend myself for countless posts. At least the person who claimed to have superior scientific knowledge knows how to spell; otherwise he'd look pretty stupid.


100 percent BS, and I can tell you this from personal experience OC'ing the RAM on my R9 290. Bandwidth matters. Bandwidth at high resolutions/downsampling REALLY matters. Hell, it even matters when you keep the stock clock on the card, like this guy did.

 

The performance gains in that video are pretty marginal. But that could also have something to do with the R9 290 already having pretty good bandwidth.


The performance gains in that video are pretty marginal. But that could also have something to do with the R9 290 already having pretty good bandwidth.

 

More to do with the core clock, which he kept at his card's stock 1000MHz. You see the same thing on an Nvidia card: when you up the clock on the GPU core, you want higher bandwidth to feed it.

 

The gains would be bigger if you increased both. That is why you see card makers increase both on aftermarket cards.
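For reference, peak memory bandwidth is just bus width times effective transfer rate, which is why a memory OC moves it directly. A quick sketch (the overclocked figure is a hypothetical example, not my card's actual OC):

# Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective Gbps per pin.
def peak_bandwidth_gbs(bus_width_bits, effective_gbps):
    return bus_width_bits / 8 * effective_gbps

print(peak_bandwidth_gbs(512, 5.0))  # R9 290 stock: 512-bit at 5.0 Gbps = 320.0 GB/s
print(peak_bandwidth_gbs(512, 5.6))  # hypothetical memory OC to 5.6 Gbps = 358.4 GB/s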

 

That video was also at 1080p. You would see bigger gains at higher resolutions, whether internal like downsampling or native. My R9 290 absolutely slaughters my sister's GTX 970 in Dolphin Emulator when it comes to downsampling. Why? Bandwidth. At 1080p, the cards are damn near identical. At 1440p or with SSAA/downsampling, mine is ahead by quite a bit in non-GameWorks games, and it's ahead even in GameWorks titles when I turn down tessellation in Catalyst (the same reason the 750 Ti/9xx do so well while the 780/780 Ti look bad compared to the new-gen cards in a GameWorks game). 4K, downsampled or native like Shadow of Mordor? My card kills hers. They are not even close: one can do a locked 30 perfectly, and the other isn't close and has severe drops under 30.
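The pixel math alone shows why resolution, native or downsampled, leans on bandwidth so hard:

# Pixels per frame at each resolution, relative to 1080p.
base = 1920 * 1080
for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    print(f"{name}: {w * h:,} px = {w * h / base:.2f}x 1080p")
# 4K pushes 4x the pixels of 1080p, and downsampling from 4K to a 1080p
# panel costs roughly the same framebuffer traffic as rendering 4K natively.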

 

She wanted a mini-ITX build though, so she got a 970.

 

AMD is saying the bandwidth will specifically help 4K. It will. The 290/290X are already kick-ass 4K cards, and these will supposedly double that bandwidth. They will absolutely annihilate 4K benchmarks if this is true.

CPU:24/7-4770k @ 4.5ghz/4.0 cache @ 1.22V override, 1.776 VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 evo push/pull. Ram: Gskill Ares 1600 CL9 @ 2133 1.56v 10-12-10-31-T1 150 TRFC. Case: HAF 912 stock fans (no LED crap). HD: Seagate Barracuda 1 TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-x R9 290. PSU:CX600M OS: Win 7 64 bit/Mac OS X Mavericks, dual boot Hackintosh.


I already acknowledged that... it doesn't change the fact that they label their TDP with a much wider margin than Intel/Nvidia. I'm fairly convinced that during moderate use, the FX-6300 doesn't stay within its 95W TDP.

 

I'm digressing; I was just mentioning their wider margin. That's all.

 

 

It's all based on averages.

 

A 6300 may reach 100+W under load, but at idle it could be at 80W.
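A toy duty-cycle average shows how a rating based on averages can sit below the peak. The peak and idle wattages are the ones from this post; the 40% load split is entirely made up for illustration:

# Toy duty-cycle average. The 100+W peak and 80W idle come from the post
# above; the 40% load fraction is a made-up assumption for illustration.
peak_watts, idle_watts = 110, 80
load_fraction = 0.4
average = load_fraction * peak_watts + (1 - load_fraction) * idle_watts
print(f"Average draw: {average:.0f}W vs. peak {peak_watts}W")  # 92W vs. 110W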

4K // R5 3600 // RTX2080Ti


Put it this way: my GTX 970, with its extremely abnormal boost speeds (it was hitting way over 1400MHz with all of my overclocking programs disabled; I think it's related to its stability issues), was not able to sustain a stable 60FPS in Minecraft at 1080p with shaders installed, view distance at far, and every single option maxed out (OptiFine mod installed).

lol minecraft is totally cpu bottlenecked man

 


Really? I figured AMD would release a product with worse performance than its previous products, as is industry standard in all businesses and industries.....

 

 

unnecessary /s

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


lol minecraft is totally cpu bottlenecked man

 

It didn't max out any of the cores, so it wasn't CPU-bottlenecked in my case. Maybe if I were running it on my Core 2 Duo or Xeon X5450, but not on my i5. And note, I play it with shaders; GPU-bound scenarios are possible if you use them.
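A crude way to settle this kind of argument is to log utilization while playing and see which unit is pinned. A sketch of the heuristic, fed with hypothetical sample numbers (in practice you'd take them from a monitoring tool's log):

# Crude bottleneck heuristic. The utilization samples below are hypothetical;
# in practice you'd read them from a monitoring/logging tool while playing.
def bottleneck(gpu_util, per_core_cpu_util):
    if gpu_util >= 95:
        return "GPU-bound"
    if max(per_core_cpu_util) >= 95:
        return "CPU-bound (one thread pegged)"
    return "neither pegged: engine limit, frame cap, or I/O"

print(bottleneck(98, [60, 45, 40, 35]))  # shaders case: GPU-bound
print(bottleneck(70, [99, 30, 25, 20]))  # vanilla case: main thread pegged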

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I think of TDP as the amount of power the GPU dissipates as heat, and therefore a rough proxy for its efficiency. From my understanding, the R9 3xx series, whilst having the same TDP rating as the R9 2xx series, will perform a lot better (this is still hypothetical, as they haven't been released and benchmarked), which would show that ATI has improved the efficiency of its GPUs. (I call the graphics division ATI to distance it from the failure that is the AMD FX CPU; even though it was bought by AMD, it is ATI that is doing the R&D.)

If performance jumps and power consumption is identical, power efficiency goes up; that's the way it works. I'd love it if, in the future, ATI could somehow just keep making the most powerful cards.

 

I'm just trying to give realistic numbers, based on stream processors alone. I mean, everyone would love to see a 60% increase in performance from generation to generation, but that hasn't happened recently, if ever.

I was about to say, it used to be normal for graphics performance to double with every new gen... Also, it's not that unexpected; think about the 780 Ti vs. the 980. If we go by core counts alone, the 780 Ti should be stronger than the 980 by A LOT, yet a single gen of architecture difference makes up for it: the 980 has 2048 CUDA cores vs. the 780 Ti's 2880.
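To make the per-core point concrete (a sketch assuming rough performance parity between the two cards, which is what reviews at the time showed):

# If the 980 and 780 Ti land at roughly the same performance, per-core
# throughput scales inversely with core count. Parity is the assumption here.
cores_780ti, cores_980 = 2880, 2048
print(f"Maxwell does ~{cores_780ti / cores_980:.2f}x the work per CUDA core")  # ~1.41x
# Clocks matter too: the 980 also boosts higher than the 780 Ti, so not
# all of that per-core gain is purely architectural.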

 

I know what TDP is, and I'm not saying they should be identical, because there is no standardized benchmark. I was stating the simple fact that TDP and power consumption roughly match for Intel, whereas for AMD they don't match in the slightest. Meaning TDP for AMD isn't a good estimator of actual power consumption, which makes it an utterly useless spec.

 

Wow, glad we aren't being incredibly dense about something trivial, so I'm not forced to defend myself for countless posts. At least the person who claimed to have superior scientific knowledge knows how to spell; otherwise he'd look pretty stupid.

Typos, sorry. And I never said what you were stating was wrong; I simply stated why. I'd be ashamed if you had to post what, 3 or 4 posts? If only the two of us could count that high. I mean, I can, but it takes two to tango, and with my luck you'll say you posted X posts and I'd say otherwise; after all, there is no standardized benchmark for this sort of thing, ergo one of us must be dumber than the other by quite a hefty margin... Or, of course, there is the possibility we are arguing over a useless thought, sort of like TDP: just something that should be forgotten about, but people argue over it for no reason.

5820K @ 4GHz / 16GB (4x4) DDR4 / MSI X99 SLI+ / Corsair H105 / R9 Fury X / Corsair RM1000i / 128GB SM951 / 512GB 850 Evo / 1+2TB Seagate Barracudas


More to do with the core clock, which he kept at his card's stock 1000MHz. You see the same thing on an Nvidia card: when you up the clock on the GPU core, you want higher bandwidth to feed it.

 

You weren't around for the 600-series Kepler cards? I remember because I owned a 670, and that card scaled immensely on memory OC alone.

 

 

Typos, sorry. And I never said what you were stating was wrong; I simply stated why. I'd be ashamed if you had to post what, 3 or 4 posts? If only the two of us could count that high. I mean, I can, but it takes two to tango, and with my luck you'll say you posted X posts and I'd say otherwise; after all, there is no standardized benchmark for this sort of thing, ergo one of us must be dumber than the other by quite a hefty margin... Or, of course, there is the possibility we are arguing over a useless thought, sort of like TDP: just something that should be forgotten about, but people argue over it for no reason.

 



It's good. We honestly need better 4K cards.

For those of us, however, who only just went to 1080p (I can't be the only one), 4K is still a long way off; perhaps I'll move when there is a card that is as good at 4K as a GTX 970 is at 1080p. (I bought a 1080p monitor specifically for my GTX 970; I only buy the screen resolution that will allow me to run everything maxed out, the way my GTX 650 Ti OC 2GB does with my old 1024x768 screen.)

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Spoiler alert: the Titanic sinks.

and you just spoiled the film for me

