Polaris “Ellesmere” Has Around 100W TDP + specs - Rumour

Paragon_X
13 hours ago, Enderman said:

Do you know what the P stands for in GPU and CPU?

go find out

 

hint: it's the same in both

 

and look at the difference between a 6950 and 7950, or 6970 and 7970, it is not a 40% difference

or look at the GTX 600 series vs 700 series, also not a 40% difference

 

I'm not talking about performance density. I'm talking about performance.

Sure, you can have a processor with half the die area and twice the performance per unit area; the total performance stays the same, it is not an increase.

28,200+ posts, and that is the best argument you've got.

 

Newsflash: you know what the P in FP stands for?

It's the same letter, but that doesn't mean the product is the same.

 

Also, I notice that your reading comprehension has deteriorated over the past few years. I said 20-40%, which means my statement stands.

But do not take my word for it, take AnandTech's word for it.

6970 vs 7970 (not Ghz)

http://anandtech.com/bench/product/1061?vs=1032

My god, is that a 40%+ increase?! Oh wait, yes it is.

 

Now, I am using my tablet, so bringing up the correct Nvidia cards is a bit hard, because Nvidia has a habit of using different architectures and nodes within a generation, and I simply don't remember which is which.

 

BUT, I do know this.

45nm Westmere/Nehalem vs 32nm Sandy Bridge

http://anandtech.com/bench/product/157?vs=287

 

The 970 is the closest i7 that isn't way overkill in terms of price comparison to the 2600K. It is still a 6c/12t part, which is why it pulls ahead in multi-threaded tasks.

Still, it's a 10-30% boost in performance depending on which test they are running. And sure, not everything improves with each generation, but that goes for all industries, not just chip-makers. And CPUs are hard to speed up by a lot just by shrinking them, as designers generally focus on other things than raw performance (just look at TDP values; they have dropped by a HUGE amount since Intel's 45nm products came out).

 

Again, you are just treading water in the hope of scoring a moot point. You have yet to bring any solid evidence to the table other than your own misinformed opinions and snide remarks, neither of which is very effective at proving anything.

 

Given the time you have been on these forums, one would assume you would have learned something. However, that certainly does not seem to be the case.

 

 

 


19 hours ago, VagabondWraith said:

Noticed Vega only has 64 ROPs again? Not good if true.

 

Power consumption would be through the roof if they had 128 ROPs, not to mention adding insult to injury with loads more transistors.

But I think it's a die size problem.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


19 hours ago, ivan134 said:

R9 Fury performance for only 100W? Sounds too good to be true.

According to TSMC: "TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology"

 

Since TSMC 16nm and Samsung/GlobalFoundries 14nm are pretty similar, it would be possible for a 28nm, 300W card to be around 100W on 14nm. Don't forget this is not a simple die shrink; since we are actually skipping 20nm entirely, it's more like a double die shrink.
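As a rough sanity check, the arithmetic behind that claim can be sketched like this (purely illustrative: `scaled_power` is a made-up helper, and applying TSMC's headline "70 percent less power" figure flat across a whole card is a big simplification that real designs never match):

```python
# Sketch of the scaling claim above: TSMC quotes "70 percent less power"
# for 16FF+ vs 28HPM at the same performance. Applying that flat figure
# to a 300 W 28 nm card is naive, but it shows where the ~100 W rumour
# could plausibly come from.

def scaled_power(power_28nm_w: float, power_reduction: float = 0.70) -> float:
    """Apply a flat claimed power reduction to a 28 nm TDP."""
    return power_28nm_w * (1.0 - power_reduction)

if __name__ == "__main__":
    # a 300 W 28 nm card lands at 90 W, in the ballpark of the rumoured ~100 W
    print(f"{scaled_power(300.0):.0f} W")
```

In practice clocks, voltage targets and added units eat into that margin, which is why the rumoured figure is ~100 W rather than 90 W.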

 

AMD Ryzen R7 1700 (3.8GHz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SSD + 7TB HDD, Phanteks Enthoo Pro case


17 hours ago, Enderman said:

Do you know what the P stands for in GPU and CPU?

go find out

 

hint: it's the same in both

 

and look at the difference between a 6950 and 7950, or 6970 and 7970, it is not a 40% difference

or look at the GTX 600 series vs 700 series, also not a 40% difference

This guy has to be trolling...

 

Firstly, there was no die shrink between the 600 and 700 series Nvidia cards; that shrink happened a generation earlier. The 580 was 40nm, and the 680 was 28nm.

And if you look at the benchmarks below, you'll see a performance increase of ~35%.

 

And if you look at the benchmarks for the 6970 vs 7970 you'll see a performance increase of.... can you guess?

Spoiler: ~35%

 

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/7

 

http://www.hardocp.com/article/2012/01/30/amd_radeon_hd_7950_video_card_review/8#.VxDjmrVf2Hs


6 hours ago, Prysin said:

Also, i notice that your reading comprehension has deteriorated over the past few years. Seeing as i said 20-40%, which means, i am right with my statement.

But do not take my word for it, take Anandtechs word for it.

6970 vs 7970 (not Ghz)

http://anandtech.com/bench/product/1061?vs=1032

My god, is that a 40%+ increase?! Oh wait, yes it is.

The problem is that you're not properly comparing them.

The 7000 series has DX11, which the 6000 series doesn't support

If you compare the 6970 and 7970, or the 6950 and 7950, both on DX9, they are almost the same

[image: crysis-2-2560.png]

and this is why the improvement is not the GPU being 40% better (maybe more like 4%); it's the API being much better

 

It's just that the newer 7000 GPUs support a newer API

 

So yeah, please do some research next time before insulting people to make yourself feel better

 

3 hours ago, -BirdiE- said:

This guy has to be trolling...

 

Firstly, there was no die shrink between the 600 and 700 series Nvidia cards... The 580 was 40nm, and the 680 was 28nm.

And if you look at the benchmarks below, you'll see a performance increase of ~35%.

 

And if you look at the benchmarks for the 6970 vs 7970 you'll see a performance increase of.... can you guess?

Spoiler: ~35%

 

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/7

 

http://www.hardocp.com/article/2012/01/30/amd_radeon_hd_7950_video_card_review/8#.VxDjmrVf2Hs

The difference is not 35% GPU performance; that is due to DX11, as you can see in the benchmark above

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project

Spoiler

Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


2 minutes ago, Enderman said:

The problem is that you're not properly comparing them.

The 7000 series has DX11, which the 6000 series doesn't support

If you compare the 6970 and 7970, or the 6950 and 7950, both on DX9, they are almost the same

[image: crysis-2-2560.png]

and this is why the improvement is not the GPU being 40% better (maybe more like 4%); it's the API being much better

 

It's just that the newer 7000 GPUs support a newer API

 

So yeah, please do some research next time before insulting people to make yourself feel better

 

The difference is not 35% GPU performance; that is due to DX11, as you can see in the benchmark above

https://en.wikipedia.org/wiki/Radeon_HD_7000_Series

 

Rendering support
Direct3D: Direct3D 11.x, Direct3D 12.0 feature level 11_1 or 12_0 (GCN only)
OpenCL: OpenCL 1.2 (2.0 for Radeon HD 7790)
OpenGL: OpenGL 4.5

 

https://en.wikipedia.org/wiki/Radeon_HD_6000_Series

 

Rendering support
Direct3D: Direct3D 11
Shader Model: 5.0
OpenCL: OpenCL 1.2
OpenGL: OpenGL 4.4

 

https://en.wikipedia.org/wiki/Radeon_HD_5000_Series

Rendering support
Direct3D: Direct3D 11
Shader Model: 5.0
OpenCL: OpenCL 1.2
OpenGL: OpenGL 4.4

 

https://en.wikipedia.org/wiki/Radeon_HD_4000_series

Rendering support
Direct3D: Direct3D 10.1
Shader Model: 4.1
OpenCL: OpenCL 1.1 (Win8/Win7/Vista SP2: 1.1, XP SP3: 1.0)
OpenGL: OpenGL 3.3

 

Are we done? Yes, we are done.

 

strawman.


2 minutes ago, Prysin said:

And as you can see in the benchmark, it sucks at DX11 and is much better at DX9

Which proves that it is not a GPU improvement but mainly an API improvement, so you are still wrong :)


 


7 minutes ago, Enderman said:

And as you can see in the benchmark, it sucks at DX11 and is much better at DX9

Which proves that it is not a GPU improvement but mainly an API improvement, so you are still wrong :)

Do you know how atrocious AMD's DX9 drivers are?

Good god, you have no idea, do you?

 

Let me give you a hint. The last optimization they did for DX9, prior to Crimson, was when DX9 was still the dominant API.

After that they stopped optimizing and went 100% with DX10 and DX11. As a result, the 7000 series, which came out after DX11 adoption had reached "critical mass", has had nearly NO DX9 optimization done to it. Hell, prior to the Crimson drivers, my R9 295X2 was shitting itself in Skyrim and GW2 (both DX9), yet when playing a DX11 title it can beat a Titan X.

 

I know, because I've owned some HD 7950s, and they suck fucking balls in every DX9 game, yet once you apply a newer version of DX, they rock on.

 

But hey, keep at it, strawman. It's not like it's hard to disprove you anyway.


1 minute ago, Prysin said:

 

It's funny 'cause you keep opening your mouth and no proof comes out... :P

Are you going to claim all Tom's Hardware benchmarks are fake? Or that they used flawed cards? What's your next excuse? It's fun to see what your brain comes up with, such as "40% improvement" lol


 


1 hour ago, Enderman said:

The difference is not 35% GPU performance; that is due to DX11, as you can see in the benchmark above

[images: DX9-1.png, DX9-2.png, DX9-3.png, DX9-4.png]

 

Care to try again?


3 minutes ago, -BirdiE- said:

Care to try again?

clearly you have trouble scrolling down and reading the rest of the topic before commenting.

 

the reason the 7000 series is better is not because the GPU is 40% more powerful, it is because of better DX11 support

the smaller process did not magically make the GPU performance increase that much; it was the API advancements

all the benchmarks you posted are on DX11 which is why the 7970 is much better than the 6970


 


3 minutes ago, Enderman said:

clearly you have trouble scrolling down and reading the rest of the topic before commenting.

 

the reason the 7000 series is better is not because the GPU is 40% more powerful, it is because of better DX11 support

the smaller process did not magically make the GPU performance increase that much; it was the API advancements

all the benchmarks you posted are on DX11 which is why the 7970 is much better than the 6970

Clearly you don't realize those are all DX9 benchmarks.


Looking forward to Vega for sure, and this efficiency looks great for the next cards. Can't wait to upgrade :)

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lancool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX Speed Silver) | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


http://anandtech.com/bench/product/1061?vs=1032

 

Crysis Warhead is built on CryEngine 2, like the original Crysis, and is DX9...

 

Let's go back here and look at something that isn't DX11, or DX at all.

 

Sony Vegas 12. It uses OpenCL 1.2, which is supported by BOTH GPUs equally.

7970 -> 24 seconds

6970 -> 34 seconds
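Those render times work out to roughly the same ~40% gap as the specs below. A quick check (lower time is better, so the speedup comes from the time ratio; the variable names are just for illustration):

```python
# Convert the quoted Sony Vegas render times into a relative speedup.
# 6970 renders in 34 s, 7970 in 24 s (figures from the post above).
t_6970, t_7970 = 34.0, 24.0
speedup_pct = (t_6970 / t_7970 - 1.0) * 100.0
print(f"7970 is {speedup_pct:.1f}% faster")  # 41.7% faster in this OpenCL workload
```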

 

https://www.techpowerup.com/gpudb/296/radeon-hd-7970

Shading Units: 2048
TMUs: 128
ROPs: 32
Compute Units: 32
Pixel Rate: 29.60 GPixel/s
Texture Rate: 118.4 GTexel/s
Floating-point performance: 3,789 GFLOPS

 

 

https://www.techpowerup.com/gpudb/258/radeon-hd-6970

Shading Units: 1536
TMUs: 96
ROPs: 32
Compute Units: 24
Pixel Rate: 28.16 GPixel/s
Texture Rate: 84.5 GTexel/s
Floating-point performance: 2,703.4 GFLOPS

 

Find anything peculiar here... like the fact that the GPixel/s doesn't really improve much? Well, that's due to the equal number of ROPs. But with more TMUs, 33% more to be precise, the texture rate of the 7970 is 40.1% higher than the 6970's...

The compute performance, AKA GFLOPS, is 40.2% higher on the 7970...

The number of shaders is... 33.33% higher on the 7970 vs the 6970.

 

I think we are starting to see a pattern here...
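The percentage gaps in that pattern can be recomputed straight from the TechPowerUp numbers listed above (the dicts below only restate figures already in the post; this is illustrative arithmetic, not new data):

```python
# Recompute the 7970-vs-6970 deltas from the TechPowerUp spec listings above.
hd7970 = {"shaders": 2048, "tmus": 128, "rops": 32,
          "gtexel_s": 118.4, "gflops": 3789.0}
hd6970 = {"shaders": 1536, "tmus": 96, "rops": 32,
          "gtexel_s": 84.5, "gflops": 2703.4}

def pct_increase(new: float, old: float) -> float:
    """Percentage increase of `new` over `old`."""
    return (new / old - 1.0) * 100.0

for key in hd7970:
    print(f"{key}: +{pct_increase(hd7970[key], hd6970[key]):.1f}%")
# Shaders and TMUs land at +33.3%, texture rate and GFLOPS at roughly +40%,
# and ROPs at +0.0% -- matching the pattern described in the post.
```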

 

Let's see, what was it? Oh yes...

On 14/04/2016 at 4:35 PM, Prysin said:

Every shrink, for the last three times, has given a flat increase in performance of 20-40%.

 

I'll accept your apology.

strawman.

 


1 hour ago, Prysin said:

 

Crysis Warhead is built on CryEngine 2, like the original Crysis, and is DX9...

 

While I fully support your argument, Crysis and Crysis Warhead (when set to Very High or Enthusiast settings) use DX10. 

LTT Unigine SUPERPOSITION scoreboard: https://docs.google.com/spreadsheets/d/1jvq_--P35FbqY8Iv_jn3YZ_7iP1I_hR0_vk7DjKsZgI/edit#gid=0

Intel i7 8700k || ASUS Z370-I ITX || AMD Radeon VII || 16GB 4266mhz DDR4 || Silverstone 800W SFX-L || 512GB 950 PRO M.2 + 3.5TB of storage SSD's

SCHIIT Lyr 3 Multibit || HiFiMAN HE-1000 V2 || MrSpeakers Ether C


Just now, Masada02 said:

While I fully support your argument, Crysis and Crysis Warhead (when set to Very High or Enthusiast settings) use DX10. 

Nope. Warhead allows the use of DX9 with Enthusiast settings. I already checked. Source below:

 

https://en.wikipedia.org/wiki/Crysis_Warhead


EA announced that the game's minimum requirements are nearly identical to the minimum requirements of the original Crysis (except for the HDD capacity, which is now 15 GB). However, the "Gamer" and "Enthusiast" ("High" and "Very High", respectively, in the original Crysis) configurations require less powerful machines than before (as IGN confirmed in their review), allowing a user to run the "Enthusiast" settings in DirectX 9 mode (and on Windows XP).[10]

Like Crysis, Warhead uses Microsoft Direct3D for graphics rendering.

 


1 minute ago, Prysin said:

Nope. Warhead allows the use of DX9 with Enthusiast settings. I already checked. Source below

 

https://en.wikipedia.org/wiki/Crysis_Warhead

 

Ok then, haha. The original Crysis was DX10 at Very High though. At least that's what I remember; those settings were greyed out for me for the longest time until I actually got a decent comp.



1 hour ago, Masada02 said:

remember; having those settings greyed out for me for the longest time until I actually got a decent comp.

An X99 SLI beast is a decent computer. Could use Broadwell-EP though :D

 


On 4/14/2016 at 10:38 AM, Stefan1024 said:

If Polaris 10 has about the performance of a GTX980 but with good DX12 support and 8 GByte and only 110 watts I will buy two instantly.

But it's still a rumour...

I take it they will be passively cooled.


26 minutes ago, awesomeness10120 said:

X99 SLI  beast is a decent computer. Could use Broadwell EP though :D

 

I didn't have this back in 2008 when Crysis came out though :D 



32 minutes ago, Dan Castellaneta said:

 

 

On topic: I really wonder how true this is gonna be. At least on Polaris 10.

What makes you say Polaris 10 is bullshit? The only questionable part is the TDP really.

