Polaris “Ellesmere” Has Around 100W TDP + specs - Rumour

Paragon_X
26 minutes ago, Notional said:

AMD hasn't announced any info on Vega yet, so we don't know. However, as we move to async compute in DX12 and Vulkan, ROPs simply won't be as important as before.

Vega and Polaris are both GCN 4, so they are actually both the tick and the tock. Vega comes later, so yields can improve and HBM2 will actually be available.

I just hope Vega is a monster. If nVidia doesn't have Async Compute again, it might be a good time for me to switch to Team Red the next round.

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM

4 minutes ago, VagabondWraith said:

I just hope Vega is a monster. If nVidia doesn't have Async Compute again, it might be a good time for me to switch to Team Red the next round.

Me too. We need less polarization in the market, so we get better competition. I also want some beastly cards to move things along, as long as they are affordable.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro

36 minutes ago, Tomsen said:

AMD is in a financial crisis, and they're releasing products for the market with the most volume. How silly of them; they should instead be investing in the much lower-volume high-end GPU market (volume is an order of magnitude lower).

Pretty sure the high-end desktop GPU market is larger than the mobile discrete GPU market. The vast majority of laptops, especially the long-battery-life ones, are iGPU-only.

My Build:

Spoiler

CPU: i7 4770k GPU: GTX 780 Direct CUII Motherboard: Asus Maximus VI Hero SSD: 840 EVO 250GB HDD: 2xSeagate 2 TB PSU: EVGA Supernova G2 650W

7 minutes ago, Centurius said:

Pretty sure the high-end desktop GPU market is larger than the mobile discrete GPU market. The vast majority of laptops, especially the long-battery-life ones, are iGPU-only.

Which is why I would love to see some kickass Zen APUs. It would mean even standard laptops could game on modern titles (albeit at medium/low settings). iGPUs are definitely NVidia's bane: they are slowly losing market share to them, and with Intel's $1.5B license deal with NVidia for their iGPUs ending, Intel is now in talks with AMD instead.

Focusing on cars and AI is a good direction for NVidia, but it's also a necessity.

42 minutes ago, Centurius said:

Pretty sure the high-end desktop GPU market is larger than the mobile discrete GPU market. The vast majority of laptops, especially the long-battery-life ones, are iGPU-only.

Not really. Nearly every laptop has an nVidia GTX 950M or lower, mid-range models get a 960M, and "gaming" laptops go up from there. Only thin ultrabooks ship with an Intel-only or AMD-only solution. As for AMD APUs and similar designs, think consoles and similar devices, including the VR and AR devices AMD is pushing hard. nVidia is pushing hard with ARM, professional compute, and cars, because being stuck in "gaming only" is bad for the company; that market is nice, but small. You can check:

Only 7.5 million installs for GTX 970/R9 290X and higher GPUs.

42 minutes ago, Centurius said:

Pretty sure the high-end desktop GPU market is larger than the mobile discrete GPU market. The vast majority of laptops, especially the long-battery-life ones, are iGPU-only.

Now, I wasn't limiting it to the mobile discrete GPU market, but more to low-end in general (anything from laptops to low-end desktops).

Also, plenty of laptops feature both an iGP and a dGPU. There are far more mobile SKUs than high-end SKUs.

 

30 minutes ago, Notional said:

iGPUs are definitely NVidia's bane: they are slowly losing market share to them, and with Intel's $1.5B license deal with NVidia for their iGPUs ending, Intel is now in talks with AMD instead.

Focusing on cars and AI is a good direction for NVidia, but it's also a necessity.

And Nvidia is desperate to find a market to offset the losses from iGPs invading their territory. The problem is that high-end desktop GPUs alone are not very profitable (very low volume) and need to be supplemented by a higher-profit market (like low-end GPUs) or HPC.

I also think the $1.5B in Intel's and Nvidia's cross-license agreement wasn't about graphics IP so much as Intel settling some court cases (they were under heavy fire at the time).

 

 

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO

2 hours ago, Enderman said:

the extremely low 100W TDP is a big hint that "huge performance gain" is not what they are going for

It will probably represent a performance gain at the price point at which they launch it.

But if we're looking for something that leaves the Fury X and 980 Ti in the dust, we'll have to wait for December, when they launch Vega GPUs with HBM2. Those should slot in above Polaris at a higher price point.

2 hours ago, Prysin said:

Actually, you are the one who hasn't been around for the past decade, then.

Each of the last three shrinks has given a flat 20-40% performance increase.

 

It has actually been a lot more than that. Look at Fermi vs Kepler: at launch, the full high-end 28 nm GK110 (GTX 780 Ti) gave 82% better framerates at 1920x1080 than the full high-end 40 nm GF110 (GTX 580). The full midrange 28 nm GK104 (GTX 680) smashed the full midrange 40 nm GF114 (GTX 560 Ti) by 69% at 1920x1200. Even comparing midrange GK104 (GTX 680) to high-end GF110 (GTX 580), the 680 beat the 580 by 23% at 1920x1200 at launch. (The source is TechPowerUp for all results.)
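The percentage gains above are plain relative-framerate ratios. A quick sketch of the arithmetic; the framerate pairs below are illustrative numbers chosen to reproduce the quoted gains, not TechPowerUp's raw data:

```python
def pct_gain(new_fps: float, old_fps: float) -> float:
    """Percent framerate gain of the newer card over the older one."""
    return (new_fps / old_fps - 1.0) * 100.0

# Illustrative framerate pairs (not real benchmark numbers):
print(round(pct_gain(91.0, 50.0)))   # 82  -> the GK110 vs GF110 gap
print(round(pct_gain(84.5, 50.0)))   # 69  -> the GK104 vs GF114 gap
print(round(pct_gain(61.5, 50.0)))   # 23  -> the GTX 680 vs GTX 580 gap
```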

BTW, it's ridiculous that Nvidia can now sell x60 Ti-class cards at the x80 tier, so you're paying $550 for what used to be the $250 performance level.

Damn, that's not much at all. With a decent cooler I imagine you could get a decent OC out of it.

But with the process being new, I doubt it will be a good overclocker.

If you want my attention, quote meh! D: or just stick an @samcool55 in your post :3

Spying on everyone to fight against terrorism is like shooting a mosquito with a cannon

4 hours ago, Prysin said:

Actually, you are the one who hasn't been around for the past decade, then.

Each of the last three shrinks has given a flat 20-40% performance increase.

 

lol you think the 6700k is 40% better than the 4790k?

xD good one

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project

Spoiler

Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 

16 minutes ago, Enderman said:

lol you think the 6700k is 40% better than the 4790k?

xD good one

Nice, nice... go with CPU die shrinks. Go with the one thing not being discussed in this topic.

 

You should change your name to "strawman".

5 minutes ago, Enderman said:

lol you think the 6700k is 40% better than the 4790k?

xD good one

Haswell was a 177mm^2 chip, Skylake is a 133mm^2 chip. No, that is not a 33% difference. Try 77% difference in area.

 

Yet the Skylake chip is very comparable in terms of performance.
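On the die-area arithmetic: the percentage you get depends on which die you take as the baseline, which may be where the disagreement comes from. A quick sketch using the sizes quoted above (raw area only; a density comparison would give a larger figure):

```python
haswell_mm2, skylake_mm2 = 177.0, 133.0

# Haswell relative to Skylake: how much bigger the older die is.
bigger = (haswell_mm2 / skylake_mm2 - 1.0) * 100.0   # ~33%

# Skylake relative to Haswell: how much smaller the newer die is.
smaller = (1.0 - skylake_mm2 / haswell_mm2) * 100.0  # ~25%

print(f"Haswell is {bigger:.0f}% larger; Skylake is {smaller:.0f}% smaller")
```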

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum

All those cores and all that memory... I seriously doubt it will be less than 150W.

Since they are aiming for 2.5x perf/W, this would mean Fiji-like performance, which would be an almost instant death in the discrete graphics card market. I hope it's not true; I'm really waiting for Polaris 10 to replace my "artificially aging" GTX 670 (thanks, nvidia, for crippling performance on older cards to make Maxwell look better).

I just want a 4K-capable card.
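The perf/W reasoning can be sanity-checked with rough numbers. Assuming the Fury X's ~275W board power (my assumption for illustration, not a figure from this thread) and AMD's claimed 2.5x perf/W improvement:

```python
fury_x_power_w = 275.0    # assumed Fury X board power
perf_per_watt_gain = 2.5  # AMD's claimed Polaris improvement

# Power needed to match Fiji-level performance at 2.5x perf/W:
polaris_power_w = fury_x_power_w / perf_per_watt_gain
print(polaris_power_w)  # 110.0 -- close to the rumoured 100W TDP
```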

On a mote of dust, suspended in a sunbeam

1 minute ago, Agost said:

All those cores and all that memory... I seriously doubt it will be less than 150W. 

Since they are aiming for 2.5x perf/W, this would mean Fiji-like performance, which would be an almost instant death in the discrete graphics card market. I hope it's not true; I'm really waiting for Polaris 10 to replace my "artificially aging" GTX 670 (thanks, nvidia, for crippling performance on older cards to make Maxwell look better).

I just want a 4K capable card.

Seeing as AMD has hyped up the performance-per-watt charts, I suspect they have intentionally down-clocked the cards for the express purpose of reducing TDP. The guys who visit forums like this won't care and will probably just OC it, TDP be damned.

1 minute ago, MMKing said:

Seeing as AMD has hyped up the performance-per-watt charts, I suspect they have intentionally down-clocked the cards for the express purpose of reducing TDP. The guys who visit forums like this won't care and will probably just OC it, TDP be damned.


If they designed everything to be extremely power-efficient, chances are those cards won't overclock very well.

1 minute ago, Agost said:


If they designed everything to be extremely power efficient, chances are those cards won't overclock very well

I'm not claiming miracles, but they will likely OC better than what AMD has released in the past, as AMD has a record of pushing clock speeds on both their GPUs and CPUs.

11 minutes ago, Prysin said:

Nice, nice... go with CPU die shrinks. Go with the one thing not being discussed in this topic.

 

you should change your name to "strawman".

Do you know what the P stands for in GPU and CPU?

Go find out.

Hint: it's the same in both.

And look at the difference between a 6950 and 7950, or a 6970 and 7970; it is not a 40% difference.

Or look at the GTX 600 series vs the 700 series; also not a 40% difference.

 

11 minutes ago, MMKing said:

Haswell was a 177mm^2 chip, Skylake is a 133mm^2 chip. No, that is not a 33% difference. Try 77% difference in area.

 

Yet the Skylake chip is very comparable in terms of performance.

I'm not talking about performance density. I'm talking about performance.

Sure, you can have a processor with half the die area and twice the performance per area; it will still not be a performance increase.


If it's really only a 100W TDP and they still want to talk about how VR needs more performance, I'm gonna laugh...

That would be ridiculous. I mean, I like low power consumption and heat output, but at the same time I'm fine with double that TDP... just give me performance, please...

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1

44 minutes ago, MMKing said:

Seeing as AMD has hyped up the performance to power charts, i suspect they have intentionally clocked down the cards for the express purpose of reducing TDP. The guys who visit forums like these won't care, and will probably just OC it, TDP be damned. 

Well, that's how Maxwell is tuned: underclock it from the factory to print a low TDP on the spec sheet, then use GPU Boost to burn through the TDP when not all cores are loaded to show off high clocks. And people will love you, because they can OC the hell out of it (and still think it's efficient, even though it no longer is).

It's an intelligent strategy; I can't blame Nvidia for it. But personally I don't like the PR crap.

Mineral oil and 40 kg aluminium heat sinks are a perfect combination: 73 cores and a Titan X, Twenty Thousand Leagues Under the Oil

7 hours ago, Enderman said:

the extremely low 100W TDP is a big hint that "huge performance gain" is not what they are going for

 

Maybe you haven't been around for the past decade? Have you not yet realized that a die shrink and new architecture =/= huge performance?

If it can run Hitman in DX12 at 1440p at around 45-55 fps, that puts it at Fury level while using 175 watts less.
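The "175 watts less" figure actually lines up with AMD's perf/W claim. A rough check, again assuming the Fury X's ~275W board power (my assumption, not a number from the thread):

```python
fury_power_w = 275.0               # assumed Fury X board power
polaris_power_w = 275.0 - 175.0    # "175 watts less" at similar performance

# Equal performance at lower power implies this perf/W improvement:
ratio = fury_power_w / polaris_power_w
print(round(ratio, 2))  # 2.75 -- in the ballpark of AMD's claimed 2.5x
```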

6 hours ago, ivan134 said:

R9 Fury performance for only 100W? Sounds too good to be true.

The R9 Nano is only 175W TDP at 28nm, and if they shrink it to 14nm it can easily get a lot more efficient.

I don't believe those TDP numbers at all for dedicated GPUs.

Connection200mbps / 12mbps 5Ghz wifi

My baby: CPU - i7-4790, MB - Z97-A, RAM - Corsair Veng. LP 16gb, GPU - MSI GTX 1060, PSU - CXM 600, Storage - Evo 840 120gb, MX100 256gb, WD Blue 1TB, Cooler - Hyper Evo 212, Case - Corsair Carbide 200R, Monitor - Benq  XL2430T 144Hz, Mouse - FinalMouse, Keyboard -K70 RGB, OS - Win 10, Audio - DT990 Pro, Phone - iPhone SE
