PowerColor Releases Ad for Upcoming AMD GPU

nunya bus

Like, for example, lowering quality settings?! Then why would you buy a top-of-the-line video card, for presumably $850, when there's the Titan X for just $150 more?!? And it doesn't have limited VRAM to work with.

Do we know the official pricing yet?

 

Uh, no.

 

EDIT: Don't forget about the increased bandwidth HBM is going to bring to the table. High bandwidth is extremely important at 4K.


What he's implying is that a game that uses 6GB on the TITAN X can still perform better running on the 4GB Fiji. Part of being a programmer is taking advantage of the resources that you have at your disposal.

yes, but not entirely accurate

you see, if you have to differentiate resource allocation, some users will get shafted with lower quality settings - and that should not even be a point of discussion when talking about top-of-the-line graphics cards
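For anyone curious, the "take advantage of the resources at your disposal" point in the quoted post boils down to budget-aware streaming: keep the textures you used most recently resident, and evict the least recently used ones when a 4GB budget fills up. Here's a toy sketch; the budget, names, sizes, and eviction policy are all assumptions, not anything from a real engine.

```python
from collections import OrderedDict

# Toy model of budget-aware texture streaming: keep recently used textures
# resident and evict the least recently used when the budget fills. The 4 GB
# budget, names, and sizes are all made up; no real engine is this simple.
class TextureCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()          # name -> size, oldest first

    def request(self, name, size):
        if name in self.resident:
            self.resident.move_to_end(name)    # already in VRAM: mark as fresh
            return
        while self.used + size > self.budget and self.resident:
            _, freed = self.resident.popitem(last=False)  # evict the LRU texture
            self.used -= freed
        self.resident[name] = size
        self.used += size

GiB = 1024 ** 3
cache = TextureCache(budget_bytes=4 * GiB)     # Fiji-sized budget
cache.request("terrain_albedo", 512 * 2**20)   # hypothetical 512 MiB texture
cache.request("city_block_07", GiB)            # hypothetical 1 GiB texture
print(f"resident: {cache.used / GiB:.2f} GiB of 4.00 GiB")
```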


EDIT: Don't forget about the increased bandwidth HBM is going to bring to the table. High bandwidth is extremely important at 4K.

no

before you get limited by the HBM memory bandwidth, you get limited by driver overhead, CPU usage, and other "stuff" - textures don't just materialize in the VRAM. And then processing comes into play, and that's done at the speed of the GPU itself, not at the memory's bandwidth (yes, the VRAM bandwidth plays a role, but it hasn't hindered the Titan X)

 

HBM is not a magic item that will boost video card performance to new levels, not if the GPU wasn't designed from the ground up to take complete advantage of it - and Fiji is just the first step

 

 

the alleged benchmarks show Fiji trading blows with the GDDR5-equipped Titan X - what that means is that Fiji by itself is not very efficient at taking full advantage of HBM's bandwidth; the only thing HBM does for Fiji is bring it on par with, or one slight step above, where the Titan X is right now
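To put rough numbers on the "other limits come first" argument, here's a back-of-envelope calculation of colour-write traffic at 4K/60 against each card's bandwidth. The 336 GB/s Titan X figure is its published spec; the 512 GB/s Fiji figure is the commonly rumoured HBM number; the overdraw factor is a pure assumption, and real frames also move texture and depth data, so treat this as an order-of-magnitude check only.

```python
# Back-of-envelope check: how much colour-write traffic does 4K at 60 FPS
# actually generate, compared to the memory bandwidth on offer?
width, height = 3840, 2160
bytes_per_pixel = 4                    # 32-bit colour target
overdraw = 4                           # assumed average writes per pixel
fps = 60

traffic_gbs = width * height * bytes_per_pixel * overdraw * fps / 1e9

for name, bw_gbs in [("Titan X (GDDR5)", 336), ("Fiji (rumoured HBM)", 512)]:
    share = traffic_gbs / bw_gbs * 100
    print(f"{name}: ~{traffic_gbs:.1f} GB/s of colour writes, "
          f"{share:.0f}% of {bw_gbs} GB/s")
```

Either way the raw frame traffic is a small slice of the available bandwidth, which is consistent with the claim that driver overhead and GPU compute hit their limits first.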


no

before you get limited by the HBM memory bandwidth, you get limited by driver overhead, CPU usage, and other "stuff" - textures don't just materialize in the VRAM. And then processing comes into play, and that's done at the speed of the GPU itself, not at the memory's bandwidth (yes, the VRAM bandwidth plays a role, but it hasn't hindered the Titan X)

 

HBM is not a magic item that will boost video card performance to new levels, not if the GPU wasn't designed from the ground up to take complete advantage of it - and Fiji is just the first step

 

 

the alleged benchmarks show Fiji trading blows with the GDDR5-equipped Titan X - what that means is that Fiji by itself is not very efficient at taking full advantage of HBM's bandwidth; the only thing HBM does for Fiji is bring it on par with, or one slight step above, where the Titan X is right now

Again, I wouldn't try to make performance predictions before the card is released or EVEN BEEPIN OFFICIALLY ANNOUNCED...


I remember you all shouting that it doesn't matter, because even when it exceeded 3.5GB there isn't a difference

Well, it depends. I don't know that much about it, so I'm not going to go into it, but I think @zappian knows why.

Also please don't include me in a group that I was not in.

- snip-


I just want a single card that can run every game available at 1440p ultra at a smooth 60 fps all the time... not interested in 4K at the moment.

Titan X


Titan X

And it can manage 4K 60 FPS as well...

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Yeah, so you can bottleneck it with that 8350 rofl.

IT'S CALLED DX12, DUMBSHIT. Won't be a bottleneck for long... DX12 is right around the corner.

• FX-8320  GTX 970  M5A97 R2  Corsair H100I GT  500W PSU  RIPJAWS 8GB DDR3  SAMSUNG 840 EVO 120GB  1TB HDD 

 

IT'S CALLED DX12, DUMBSHIT. Won't be a bottleneck for long... DX12 is right around the corner.

Chill.

 

DX12 is right around the corner but it will not be mainstream for a while.

 

Look how many DX9 games were still coming out well into DX11's lifetime.

 

Don't go around talking to people like that if you want to last very long...


Titan X

 

You know what? You're right. I'm sorry, I worded it badly; I meant an affordable card like a 980 or 290X.

●CPU: i7-4790K w/H100i ●Mobo: MSI Z97 MPower ●RAM: Corsair 16GB Dominator ●GPU: EVGA ACX SC 780 3GB(X2) ●SSD: 850 Pro 256GB ●Case: 450D ●PSU: AX 860i ●Monitor: Asus PB278Q 1440p


IT'S CALLED DX12, DUMBSHIT. Won't be a bottleneck for long... DX12 is right around the corner.

An FX-8350 would still bottleneck in current games, as they aren't DX12. And people forget that devs have to actually target DX12 to get the reduced overhead.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


You know what? You're right. I'm sorry, I worded it badly; I meant an affordable card like a 980 or 290X.

The R9 295X2 stomps both the R9 290X and GTX 980, and it currently costs about what a GTX 980 does.


OK, the link is dead and so is the Facebook reference, at least on my end.

FX-8120 | ASUS Crosshair V Formula | G.Skill Sniper 8GB DDR3-1866 CL9 | Club3D Radeon R9 290 RoyalAce |Thermaltake Chaser MkIII | 128GB M4 Crucial + 2TB HDD storage | Cooler Master 850M | Scythe Mugen 3 | Corsair Strafe RGB | Logitech G500


Guys... 4GB is fine for 4K; sure, you can always use more. But the real thing for VRAM is how quickly data can be loaded, used, and unloaded when a texture is no longer needed. I don't know why we have come back to the "more GB means it is a better card" mentality of the early/mid 2000s (hmmm, do I get a 6800GT with 128MB of VRAM or this 7300GT with 256MB? Answer: the 6800GT is the better card). I would much rather have 4GB @ ~600GB/s than 8GB @ ~300GB/s.
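To make that trade-off concrete, here's the same comparison as arithmetic: how long a full sweep of VRAM takes, and how much data each configuration can move within one 60 FPS frame. Both configurations are the hypothetical figures from the post above, not real card specs.

```python
# Rough arithmetic for the capacity-versus-bandwidth trade-off:
# the smaller, faster pool can be refilled several times per frame,
# while the bigger, slower one takes longer than a frame to sweep.
configs = [("4 GB @ ~600 GB/s", 4, 600),
           ("8 GB @ ~300 GB/s", 8, 300)]

for label, size_gb, bw_gbs in configs:
    sweep_ms = size_gb / bw_gbs * 1000     # time to touch all of VRAM once
    per_frame_gb = bw_gbs / 60             # data movable per 60 FPS frame
    print(f"{label}: full sweep in {sweep_ms:.1f} ms, "
          f"~{per_frame_gb:.1f} GB movable per frame")
```

At a 16.7 ms frame budget, the 4GB/600GB/s pool can be cycled in well under a frame, while the 8GB/300GB/s pool cannot, which is the point being made.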

 

There are maybe one or two games that need more than 4GB @ 4K, and all information points to an 8GB version of the 390 being released later. Please don't make a big deal of something that clearly isn't a big deal.


The ad is vague at best. It seems like it's aimed at 4K, but it puts it in a negative light, which I don't understand. Odd.

All publicity is good publicity. That's why.


And think about it: the 290 is for maxed-out 1080p gaming and dual-screen setups. It was tested at 1080p and ran most things at around 100 FPS a good amount of the time. It does higher resolutions, but you have to start turning things off.

Skip to this generation and, as I said, the ASUS ROG and Acer Predator fast 1440p IPS monitors have just been released, which means this card is a 1440p all-maxed-out gaming card. Next year HBM v2 comes, with 8GB; we will see this card do 4K at around 40-50 FPS (respectable), and Nvidia will release Pascal with HBM v2. That will be a direct comparison.

The 290 gets 40-50 FPS @ 4K now. I know this because I have 2x 290s and I normally game in windowed mode (no Crossfire) @ 4K. We are only just now getting more than one game where this is not the case. Also, go look at the launch reviews @ 4K: most of them show the 290X at around 40-50 FPS in most games.

 

Also, the 300 cards are competing with the 900 cards, not the next gen or the gen after that. Every damn time it comes to which card series is competing against which, people seem to say it's competing with next-gen nVidia, which is always 12+ months away. No, the 390X's competition is the 980 Ti or the Titan Black/X, NOT Pascal.


4GB is even cutting it close for Dying Light or The Witcher 3 at 1080p; if they're marketing it for 4K they are a special level of idiot.

 

I feel sorry for AMD, having drawn the short straw for board partners on the GPU side. It seems a generation won't pass without one or more of them having a massive cock-up.

 

Radeon 7xxx series: XFX coil whine from hell, Sapphire 7850 huge RMA rate

R7/R9 2xx series: ASUS VRAM corruption right out of the box on 50% of samples of the "280X TOP" SKU, and MSI Gaming fan bearings leaking fluid

R9 3xx series: PowerColor be all like "I got this AMD, send your marketing team on holiday - A HERP DE DERP DA DOO"

4GB of VRAM is fine for The Witcher 3. I can run at a buttery-smooth 80 FPS @ 4K @ ultra with dual 290s. I don't do this, as the screen flickers due to there being no Crossfire profile currently.


The R9 295X2 stomps both the R9 290X and GTX 980, and it currently costs about what a GTX 980 does.

 

Yep, but no :(, I want a single card. :)

●CPU: i7-4790K w/H100i ●Mobo: MSI Z97 MPower ●RAM: Corsair 16GB Dominator ●GPU: EVGA ACX SC 780 3GB(X2) ●SSD: 850 Pro 256GB ●Case: 450D ●PSU: AX 860i ●Monitor: Asus PB278Q 1440p


Yep, but no :(, I want a single card. :)

You'll have to opt for a GTX 980 or R9 290X, as there's nothing stronger than them at that price point. The 980 Ti will be up there with the TITAN X, as will Fiji. So personally, if you're not looking to go multi-GPU in the future and just want the best performance you can get right now for your dollar, the R9 295X2 is the ticket. It's considerably faster than the GTX 980 while still being a single card (dual Hawaii). It even runs cooler than the GTX 980 but pulls a hell of a lot more power (not a big issue, as it's a dual-GPU card, so 2x the power is expected). That's where I'd spend my money if I were looking for an upgrade and only planned on running a single card.


 ITS CALLED DX12 DUMBSHIT. Wont be a bottleneck for long... dx12 is right around the corner

DX12 doesn't make bad CPUs good. Stop dreaming.

i7 6700K - ASUS Maximus VIII Ranger - Corsair H110i GT CPU Cooler - EVGA GTX 980 Ti ACX2.0+ SC+ - 16GB Corsair Vengeance LPX 3000MHz - Samsung 850 EVO 500GB - AX760i - Corsair 450D - XB270HU G-Sync Monitor

i7 3770K - H110 Corsair CPU Cooler - ASUS P8Z77 V-PRO - GTX 980 Reference - 16GB HyperX Beast 1600MHz - Intel 240GB SSD - HX750i - Corsair 750D - XB270HU G-Sync Monitor

DX12 doesn't make bad CPUs good. Stop dreaming.

It removes CPU overhead, so CPUs are not used as much in games.


It removes CPU overhead, so CPUs are not used as much in games.

And that's only if devs actually target DX12 to remove the overhead. And BTW, CPUs that are bad for games, such as the old FX-8350, will still keep on bottlenecking graphics cards. Also, all CPUs would see some sort of benefit, so either way DX12 isn't actually going to have that big an impact.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


And that's only if devs actually target DX12 to remove the overhead. And BTW, CPUs that are bad for games, such as the old FX-8350, will still keep on bottlenecking graphics cards. Also, all CPUs would see some sort of benefit, so either way DX12 isn't actually going to have that big an impact.

DirectX 12 should do a much better job of keeping graphics cards fed. Keep in mind single-thread performance will no longer be as mission-critical. With up to six rendering threads all communicating directly with the GPU, chips like the FX-6300 should do a much better job than they currently do when it comes to feeding the GPU. It may end up being enough for the FX-6300 to feed both the R9 290X and GTX 980 without a hassle. The need to pipe resources back to the main thread in order to get pushed through the graphics stack is what currently cripples AMD chips. The main thread is where most of the game logic happens as well, so having other threads that can talk directly to the GPU may help solve that problem.
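Here's a toy timing model of that submission argument: every draw funnelled through one thread (DX11-style) versus six threads each recording their own command list (DX12-style). The draw count, per-draw cost, and submit cost are invented for illustration; this models the idea, not the real API.

```python
# Toy model: single-threaded versus parallel command recording.
# All costs are assumptions, chosen only to show the shape of the effect.
draw_calls = 12_000
cost_per_draw_us = 8                 # assumed CPU cost to record one draw
threads = 6                          # e.g. an FX-6300's six cores
submit_ms = 0.5                      # assumed cost to queue the finished lists

single_thread_ms = draw_calls * cost_per_draw_us / 1000
parallel_ms = single_thread_ms / threads + submit_ms

print(f"one thread records everything: {single_thread_ms:.1f} ms of CPU per frame")
print(f"{threads} threads in parallel: {parallel_ms:.1f} ms of CPU per frame")
```

The point is that when recording parallelises across cores, per-core speed stops being the whole story, which is exactly why a six-core FX chip stands to gain the most.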


It removes CPU overhead, so CPUs are not used as much in games.

It actually reduces CPU overhead; it doesn't remove it. It doesn't mean CPUs aren't used at all. GPUs are not going to completely take over from CPUs, and CPUs will still be required for games. Good CPUs will still matter, and there will still be bottlenecks.
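A minimal sketch of why reduced overhead isn't a cure-all: CPU and GPU work overlap, so a frame takes roughly as long as the slower side, and cutting CPU time below the GPU's render time buys nothing further. All millisecond figures here are made up.

```python
# The slower side gates the frame, so lower driver overhead only helps
# while the CPU is the bottleneck. Figures are illustrative assumptions.
def frame_ms(cpu_ms, gpu_ms):
    return max(cpu_ms, gpu_ms)       # CPU and GPU work overlap

gpu_ms = 14.0                        # assumed GPU render time per frame
for label, cpu_ms in [("heavy driver overhead", 22.0),
                      ("reduced overhead", 9.0)]:
    ms = frame_ms(cpu_ms, gpu_ms)
    print(f"{label}: CPU {cpu_ms} ms vs GPU {gpu_ms} ms -> {1000 / ms:.0f} FPS")
```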

i7 6700K - ASUS Maximus VIII Ranger - Corsair H110i GT CPU Cooler - EVGA GTX 980 Ti ACX2.0+ SC+ - 16GB Corsair Vengeance LPX 3000MHz - Samsung 850 EVO 500GB - AX760i - Corsair 450D - XB270HU G-Sync Monitor

i7 3770K - H110 Corsair CPU Cooler - ASUS P8Z77 V-PRO - GTX 980 Reference - 16GB HyperX Beast 1600MHz - Intel 240GB SSD - HX750i - Corsair 750D - XB270HU G-Sync Monitor
