
R9 390X Fiji XT - Extreme Memory Bandwidth & SPs

Paragon_X

Have a citation for that? I know HMC was designed for that, but HBM 1.0 to my knowledge was designed for a dual-channel (2x) 512-bit interface to bring it to market faster.

All the information you need here.

 

I doubt they consider it GCN 1.2; AMD calls it GCN 3.0, and that's what people should know it as.

And a 4096-bit interface?

It's technically called GCN 1.2, as that's the real revision of the architecture, though people generally call it GCN 3.0 since that's the logical iteration.

 

Should I wait for more info? I'm planning to build my PC with a 970 in it. Does this mean that isn't a wise choice? Or should I wait until Christmas and see what's up with the news and rumors?

The R9 390X will be one beast of a card, though it will come at a cost. You can likely buy two GTX 970s for what a single R9 390X will cost.


All the information you need here.

 

It's technically called GCN 1.2, as that's the real revision of the architecture, though people generally call it GCN 3.0 since that's the logical iteration.

 

The R9 390X will be one beast of a card, though it will come at a cost. You can likely buy two GTX 970s for what a single R9 390X will cost.

http://cdn.overclock.net/9/95/95862d91_3af69f4c-b1e4-491d-9074-857d51412eb6.jpeg Technically the next revision was named GCN 2, but correctly it should be GCN 3.0 now.

Computing enthusiast. 
I used to be able to input a cheat code; now I've got to input a credit card. - Total Biscuit
 


Well, good thing I didn't jump on the GTX 970 hype train.

I sadly did... and went against my tradition of red team one year and green team the other. :(

Space Journal #1: So apparently I was dropped on the moon like I'm a Mars rover. In a matter of hours I found the Transformers on the dark side of the moon. Turns out it's not that dark, since dem robots are filled with lights. I waved hi to the Russians on the space station; turns out all those stories about space finding humans instead of the other way around are true (Soviet Russia joke). They threw me some Heineken beer, and I've been sitting here staring at the people of this forum and Earth ever since.


All the information you need here.

 

It's technically called GCN 1.2, as that's the real revision of the architecture, though people generally call it GCN 3.0 since that's the logical iteration.

 

The R9 390X will be one beast of a card, though it will come at a cost. You can likely buy two GTX 970s for what a single R9 390X will cost.

I understand, but since it moves past the GDDR5 interface, wouldn't it blow the 2x 970s out of the water? Even if the pair costs less than a single 390X?


Honestly, the R9 290 that I currently run somewhat disappoints me, while Nvidia is bringing out new technologies on a regular basis and giving us drivers that don't suck. Before you say "Mantle": I have used it, and the difference in performance is barely noticeable when it improves at all; the rest of the time performance is actually worse.

Nvidia has ShadowPlay and GameStream, while AMD has bloatware such as their Gaming Evolved app. Also, Nvidia GPUs tend to be much more efficient and a lot quieter. I know that depends on the cooler, but the fan wouldn't have to run at such a high percentage if the card didn't get so hot.

However, I will stick with my R9 290 for the five or so years it will last me. Unless that's what caused my two recent "IRQL_NOT_LESS_OR_EQUAL" BSODs. Then I have an excuse to replace it with a GTX 970.

tl;dr: make a damn fucking good case for this one, AMD. I have lost faith in you.

Nvidia GeForce Experience is okay, but AMD Gaming Evolved (which can get you free games/PC parts just by having it on while you play games) is bloatware...

And Mantle is only noticeably good when the CPU is limiting performance.

Anyone who has a sister hates the fact that his sister isn't Kasugano Sora.
Anyone who does not have a sister hates the fact that Kasugano Sora isn't his sister.
I'm not insulting anyone; I'm just being condescending. There is a difference, you see...


I understand, but since it moves past the GDDR5 interface, wouldn't it blow the 2x 970s out of the water? Even if the pair costs less than a single 390X?

If the leaked specifications are true, a single R9 390X should easily compete with SLI 970s. It's the equivalent of AMD smashing two R9 280Xs into a single die, with improved GCN architecture and massive amounts of memory bandwidth. The GTX 970 beats the R9 280X, no doubt, though I doubt SLI GTX 970s can scale as well as a single card pushing more performance than two R9 280Xs.


I understand, but since it moves past the GDDR5 interface, wouldn't it blow the 2x 970s out of the water? Even if the pair costs less than a single 390X?

Only if you had a game that saturated GDDR5 in the first place. None do. Scientific computing is the only current workload with that distinction. For games it's a combination of bad coding, the compute limits of given cards, and no elegant solutions for race conditions. They all use spin locks, which force GPU core usage up to 100% until they're resolved.

 

If you leave a while(true) loop running in your code, it will force CPU usage up to 100% on a given core. Spin locks are a really lazy way to handle race conditions. I wish we had more semaphore and deferred-action solutions.
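
Rough sketch of the pattern I mean (hypothetical CUDA code, not from any actual game):

#include <cuda_runtime.h>

// Hypothetical device-side spin lock: 0 = free, 1 = held.
__device__ void acquire(int *lock) {
    // Busy-wait until we swap the lock from free to held. While spinning,
    // the core does no useful work but still runs flat out - the 100% usage above.
    while (atomicCAS(lock, 0, 1) != 0) { }
}

__device__ void release(int *lock) {
    __threadfence();      // make the protected writes visible to other cores first
    atomicExch(lock, 0);  // then mark the lock free again
}

(On SIMT hardware a naive per-thread spin lock like this can even deadlock within a warp, which is part of why blocking, semaphore-style primitives would be the nicer solution.)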

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


If the leaked specifications are true, a single R9 390X should easily compete with SLI 970s. It's the equivalent of AMD smashing two R9 280Xs into a single die, with improved GCN architecture and massive amounts of memory bandwidth. The GTX 970 beats the R9 280X, no doubt, though I doubt SLI GTX 970s can scale as well as a single card pushing more performance than two R9 280Xs.

So I would like your opinion on this. Black Friday is coming up; I'm probably going to buy my CPU/motherboard/RAM and all that if I get a good deal, but I can hold back on my GPU and last on integrated graphics for a while. Should I skip Black Friday and wait until Christmas for updated news, leaks, and hopefully a price drop? Or should I hop on right now and forget about the 390X?


Only if you had a game that saturated GDDR5 in the first place. None do. Scientific computing is the only current workload with that distinction. For games it's a combination of bad coding, the compute limits of given cards, and no elegant solutions for race conditions. They all use spin locks, which force GPU core usage up to 100% until they're resolved.

 

If you leave a while(true) loop running in your code, it will force CPU usage up to 100% on a given core. Spin locks are a really lazy way to handle race conditions. I wish we had more semaphore and deferred-action solutions.

So basically, even if it's true, no game at the moment can use all that bandwidth. But what about your opinion on this?

 

Black Friday is coming up; I'm probably going to buy my CPU/motherboard/RAM and all that if I get a good deal, but I can hold back on my GPU and last on integrated graphics for a while. Should I skip Black Friday and wait until Christmas for updated news, leaks, and hopefully a price drop? Or should I hop on right now and forget about the 390X?


So basically, even if it's true, no game at the moment can use all that bandwidth. But what about your opinion on this?

 

Black Friday is coming up; I'm probably going to buy my CPU/motherboard/RAM and all that if I get a good deal, but I can hold back on my GPU and last on integrated graphics for a while. Should I skip Black Friday and wait until Christmas for updated news, leaks, and hopefully a price drop? Or should I hop on right now and forget about the 390X?

It's not the amount of memory that matters; it's how many I/O operations per second are performed against that memory. Most games don't require a single core or work-group to use enough of the global data set (the 2, 3, 4, or 8 GB of RAM on board) for it to matter. There are a few reallocations from global memory to local memory (a pool not in the video card's RAM but shared between a group of cores, similar to a last-level cache) and swaps between them, but if you're doing this often, you built it incorrectly. Then there is private memory, which only a single core can access. If you properly manage your video memory, most applications shouldn't touch global memory very often.

 

Scientific computing does this because a humongous dataset has to be manipulated at once, in chunks, so passing between global and local memory happens quite often.
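
A rough sketch of what properly managing that hierarchy looks like (hypothetical CUDA; CUDA calls the per-group pool "shared" memory rather than "local", and private values live in registers):

#include <cuda_runtime.h>

// Each block copies its slice of the global data set into the fast on-chip
// pool once, reduces it there, and writes a single result back out. Global
// memory is touched once per element - everything else stays on-chip.
__global__ void block_sum(const float *in, float *out, int n) {
    __shared__ float tile[256];                   // per-group ("local") pool
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;   // the one global read
    __syncthreads();

    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];  // on-chip traffic only
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];                // one global write per block
}

// Launch with 256 threads per block to match the tile size:
// block_sum<<<(n + 255) / 256, 256>>>(d_in, d_out, n);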

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


So I would like your opinion on this. Black Friday is coming up; I'm probably going to buy my CPU/motherboard/RAM and all that if I get a good deal, but I can hold back on my GPU and last on integrated graphics for a while. Should I skip Black Friday and wait until Christmas for updated news, leaks, and hopefully a price drop? Or should I hop on right now and forget about the 390X?

A single GTX 970 right now will do pretty much whatever you ask of it (especially once overclocked). I wouldn't hesitate to buy a GTX 970 for the time being (especially if you can get one cheap on Black Friday) and possibly upgrade to an R9 390X in the future if that seems like a viable option (after you see how it performs).

 

Only if you had a game that saturated GDDR5 in the first place. None do. Scientific computing is the only current workload with that distinction. For games it's a combination of bad coding, the compute limits of given cards, and no elegant solutions for race conditions. They all use spin locks, which force GPU core usage up to 100% until they're resolved.

 

If you leave a while(true) loop running in your code, it will force CPU usage up to 100% on a given core. Spin locks are a really lazy way to handle race conditions. I wish we had more semaphore and deferred-action solutions.

GDDR5 bandwidth becomes saturated in almost every AAA title. Why do you think not only AMD but also Nvidia is migrating over to HBM technology? If you overclock the memory frequency on your card, I can guarantee you will see some degree of FPS gain (full utilization). The reason is obvious: the memory is far too slow to keep up with the I/O the GPU's streams require. The R9 390X is rumored to have 640 GB/s of memory bandwidth, essentially what is needed to drive a GPU that is the equivalent of two R9 280Xs. The amount of bandwidth needed for these "TITAN"-class cards is not easily feasible with GDDR5, simply because you would have to use a whole shitload of it (take a look at how many chips are on the TITAN Z for a measly 384-bit bus at 336 GB/s per GPU).

 

[Image: GF_GTX_Titan_Z_PCB.jpg]
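
For what it's worth, both of those figures drop out of the same back-of-envelope formula; the Fiji data rate below is an assumption inferred from the rumored 640 GB/s, not a confirmed spec:

#include <cstdio>

// Peak bandwidth in GB/s = (bus width in bits / 8 bits per byte) * data rate in GT/s.
static double peak_gbs(double bus_bits, double gtps) {
    return bus_bits / 8.0 * gtps;
}

int main() {
    printf("TITAN Z per GPU (384-bit @ 7 GT/s GDDR5): %.0f GB/s\n", peak_gbs(384, 7.0));   // 336
    printf("Rumored Fiji HBM (4096-bit @ 1.25 GT/s):  %.0f GB/s\n", peak_gbs(4096, 1.25)); // 640
    return 0;
}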


A single GTX 970 right now will do pretty much whatever you ask of it (especially once overclocked). I wouldn't hesitate to buy a GTX 970 for the time being (especially if you can get one cheap on Black Friday) and possibly upgrade to an R9 390X in the future if that seems like a viable option (after you see how it performs).

 

GDDR5 bandwidth becomes saturated in almost every AAA title. Why do you think not only AMD but also Nvidia is migrating over to HBM technology? If you overclock the memory frequency on your card, I can guarantee you will see some degree of FPS gain (full utilization). The reason is obvious: the memory is far too slow to keep up with the I/O the GPU's streams require. The R9 390X is rumored to have 640 GB/s of memory bandwidth, essentially what is needed to drive a GPU that is the equivalent of two R9 280Xs. The amount of bandwidth needed for these "TITAN"-class cards is not easily feasible with GDDR5, simply because you would have to use a whole shitload of it (take a look at how many chips are on the TITAN Z for a measly 384-bit bus at 336 GB/s per GPU).

 

[Image: GF_GTX_Titan_Z_PCB.jpg]

I see, thanks for the help. And last question: if SLI'ing, do you believe it's absolutely necessary for both 970s to be reference cards? Or can I go with an MSI 970 4G or a Gigabyte G1 970 on top and a reference card on the bottom? This would be SLI in a BitFenix Prodigy M.


A single GTX 970 right now will do pretty much whatever you ask of it (especially once overclocked). I wouldn't hesitate to buy a GTX 970 for the time being (especially if you can get one cheap on Black Friday) and possibly upgrade to an R9 390X in the future if that seems like a viable option (after you see how it performs).

 

GDDR5 bandwidth becomes saturated in almost every AAA title. Why do you think not only AMD but also Nvidia is migrating over to HBM technology? If you overclock the memory frequency on your card, I can guarantee you will see some degree of FPS gain (full utilization). The reason is obvious: the memory is far too slow to keep up with the I/O the GPU's streams require. The R9 390X is rumored to have 640 GB/s of memory bandwidth, essentially what is needed to drive a GPU that is the equivalent of two R9 280Xs. The amount of bandwidth needed for these "TITAN"-class cards is not easily feasible with GDDR5, simply because you would have to use a whole shitload of it (take a look at how many chips are on the TITAN Z for a measly 384-bit bus at 336 GB/s per GPU).

AMD and Nvidia aren't migrating for gamers. Any AAA title doing that was coded by a lunatic. Pretty much only big-data applications require this sort of global memory I/O.

 

Correction: global illumination has started doing that, because it affects a huge data set. I was PARTLY wrong. Go to any game before global illumination and it just doesn't happen.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


OK, if it's true it sounds very cool... but surely the "greatness" is going to depend on the actual GHz speed of the memory? If it's slow, you're going to get 'fewer updates' per second, although each 'update' would contain way more data.

 

 

Hi! Please quote me in replies, otherwise I may not see your response!


I see, thanks for the help. And last question: if SLI'ing, do you believe it's absolutely necessary for both 970s to be reference cards? Or can I go with an MSI 970 4G or a Gigabyte G1 970 on top and a reference card on the bottom? This would be SLI in a BitFenix Prodigy M.

 

You can put any brands together as long as they're the same model.

Asus GTX 780 + EVGA GTX 780, or any pairing, as long as they're the same card model.

Mobo: Asrock Z77 Extreme 6 CPU: Intel Core i7 3770k @ 4.3ghz GPU: Asus GTX 770 Direct CU II SLI RAM: Corsair Dominator Platinums 4x4g 1866 PSU: Seasonic M12II 850w Bronze Storage: Samsung 840 Pro 256g x 2 (Raid 0) - WDC Black 1tb Case: Corsair 500R


AMD and Nvidia aren't migrating for gamers. Any AAA title doing that was coded by a lunatic. Pretty much only big-data applications require this sort of global memory I/O.

 

Correction: global illumination has started doing that, because it affects a huge data set. I was PARTLY wrong. Go to any game before global illumination and it just doesn't happen.

They are migrating because GDDR5 is getting to the point where you have to use an absurd amount of it to provide enough bandwidth for the GPU. As in my previous post, they had to use 24 modules of GDDR5 on the TITAN Z to provide enough bandwidth for the two GPUs. They could have done this with 4 stacks of HBM, which would have been more cost-efficient, drawn far less power, and provided higher bandwidth.
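
Plugging first-generation HBM into the same peak_gbs() formula from the earlier sketch (assuming the published per-stack figures: a 1024-bit interface at a 1 GT/s data rate):

// One HBM stack:   peak_gbs(1024, 1.0)     -> 128 GB/s
// Four HBM stacks: 4 * peak_gbs(1024, 1.0) -> 512 GB/s
// versus the 336 GB/s each TITAN Z GPU gets from its 12 GDDR5 modules -
// more bandwidth per GPU from far fewer packages, at lower power.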


They are migrating because GDDR5 is getting to the point where you have to use an absurd amount of it to provide enough bandwidth for the GPU. As in my previous post, they had to use 24 modules of GDDR5 on the TITAN Z to provide enough bandwidth for the two GPUs. They could have done this with 4 stacks of HBM, which would have been more cost-efficient, drawn far less power, and provided higher bandwidth.

6 GB is not an absurd amount. Since it's 12 modules per GPU for 6 GB, let's look at the 290X with 4 GB or the 295X2 with 8 GB and see if we find 8 and 16 modules respectively.

 

Also, could have*. Finally, let's see what the actual cost of HBM is vs. GDDR5, which is dirt cheap at only $3.20 per GB.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Nvidia GeForce Experience is okay, but AMD Gaming Evolved (which can get you free games/PC parts just by having it on while you play games) is bloatware...

And Mantle is only noticeably good when the CPU is limiting performance.

How is Gaming Evolved bloatware...?


6 GB is not an absurd amount. Since it's 12 modules per GPU for 6 GB, let's look at the 290X with 4 GB or the 295X2 with 8 GB and see if we find 8 and 16 modules respectively.

 

Also, could have*. Finally, let's see what the actual cost of HBM is vs. GDDR5, which is dirt cheap at only $3.20 per GB.

The point is that consumer-grade gaming cards need higher memory capacities along with more bandwidth. The only way this is possible with GDDR5 is stacking an absurd number of modules on the PCB. The TITAN Z uses 24 modules not because of density requirements but because of bandwidth requirements; without all of those modules, each GPU would be crippled by memory bandwidth. GDDR5 technology has finally hit the falling-off point where it's no longer relevant to the future of consumer hardware. HBM has taken its place and serves as a huge benefit not only in the consumer market but also in the server market.


This topic has gotten me all woozy. A little hot and bothered.

 

AMD I believe.


The point is that consumer-grade gaming cards need higher memory capacities along with more bandwidth. The only way this is possible with GDDR5 is stacking an absurd number of modules on the PCB. The TITAN Z uses 24 modules not because of density requirements but because of bandwidth requirements; without all of those modules, each GPU would be crippled by memory bandwidth. GDDR5 technology has finally hit the falling-off point where it's no longer relevant to the future of consumer hardware. HBM has taken its place and serves as a huge benefit not only in the consumer market but also in the server market.

Until HBM is close enough to price parity, I wouldn't count those chickens. Three more years at least for adoption beyond the highest end.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Until HBM is close enough to price parity, I wouldn't count those chickens. Three more years at least for adoption beyond the highest end.

The entire R9 300 series is supposed to use HBM, so from the looks of it GDDR5 is done with on AMD cards (good riddance).

 

Also keep in mind AMD isn't outsourcing their HBM; they co-developed it with Hynix, which gives them an advantage on cost.


How is Gaming Evolved bloatware...?

It's not...

Anyone who has a sister hates the fact that his sister isn't Kasugano Sora.
Anyone who does not have a sister hates the fact that Kasugano Sora isn't his sister.
I'm not insulting anyone; I'm just being condescending. There is a difference, you see...

