R7 370 Pictured. <150 W TDP confirmed

marldorthegreat

Nice-looking reference cooler, from what I've seen so far. Hopefully the gaudy plastic pieces of shit from last gen will soon be just a bad memory.

Daily Driver:

Case: Red Prodigy CPU: i5 3570K @ 4.3 GHz GPU: PowerColor PCS+ 290X @ 1100 MHz MOBO: Asus P8Z77-I CPU Cooler: NZXT x40 RAM: 8 GB 2133 MHz AMD Gamer series Storage: a 1 TB WD Blue, a 500 GB WD Blue, a Samsung 840 EVO 250 GB

Soooo it's a 7870... ;)

Depends, right? The R9 270/270X are both the 7850/7870? :D

Depends, right? The R9 270/270X are both the 7850/7870? :D

Slightly downclocked 7870 (for the R9 270 @ 150 W); the R7 265 was the 7850, I think.

Slightly downclocked 7870 (for the R9 270 @ 150 W); the R7 265 was the 7850, I think.

Still a decent card. If DX12 really does stack VRAM, I might actually grab another R9 270 or R7 370 and CrossFire them. :D

Still a decent card. If DX12 really does stack VRAM, I might actually grab another R9 270 or R7 370 and CrossFire them. :D

For sure, at the end of the day it's a price game. :)

More like a 280. From what I've heard, all the 2xx-series cards besides the "Fury" and 390X are just moving down one tier from where they were. So 270 → 360, 280 → 370, and so on.

Your rumors are outdated by like 6 months. I mean, why would they rebrand Tahiti if they have Tonga? 

Still a decent card. If DX12 really does stack VRAM, I might actually grab another R9 270 or R7 370 and CrossFire them. :D

 

The problem with stacking VRAM is that it appears developers will have to implement the feature themselves. And how many times have developers actually implemented the features that are supposed to be beneficial? You know, like proper multi-threading.
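
To put the "developers have to implement it" part in concrete terms: under DX12's explicit multi-adapter model, each GPU shows up as its own device and nothing is shared automatically. A minimal sketch of just the starting point, assuming the Windows 10 D3D12 headers (this is illustrative, not any engine's actual code):

```cpp
// Enumerate GPUs and create one D3D12 device per adapter.
// Link against d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// In explicit multi-adapter, resources created on one device are invisible
// to the others. "Stacked" VRAM only exists if the game partitions its data
// across devices and schedules the cross-GPU copies itself.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices; // everything past this point is per-device, by hand
}
```

Everything after device creation (splitting render work, placing resources, copying between GPUs, synchronizing fences) is exactly the per-title work developers would have to opt into.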

CPU: Ryzen 9 3900X – Motherboard: Gigabyte X570 Aorus Pro WiFi – RAM: 4 x 16 GB G.Skill Trident Z @ 3200 MHz – GPU: ASUS Strix GeForce GTX 1080 Ti – Case: Phanteks Enthoo Pro M – Storage: 500 GB Samsung 960 Evo, 1 TB Intel 800p, Samsung 850 Evo 500 GB & WD Blue 1 TB – PSU: EVGA 1000 P2 – Display(s): ASUS PB238Q, AOC 4K, Korean 1440p 144 Hz monitor – Cooling: NH-U12S, 2 Gentle Typhoons and 3 Noiseblocker eLoops – Keyboard: Corsair K95 Platinum RGB – Mouse: G502 RGB & G Pro Wireless – Sound: Logitech Z623 & AKG K240

Slightly downclocked 7870 (for the R9 270 @ 150 W); the R7 265 was the 7850, I think.

 

No, the R7 265 was not the same gen as the 7850. It was added to the lineup later. It might be a Tonga, I can't remember... like the 285.

The problem with stacking VRAM is that it appears developers will have to implement the feature themselves. And how many times have developers actually implemented the features that are supposed to be beneficial? You know, like proper multi-threading.

Oh... that's a bummer... Well, they will have to implement it in AAA titles soon enough, but honestly it might be another year or two until DX12 is actually a standard for gaming. DX11 adoption took its time too. I hope I'm wrong, though, and things move along much faster. :D

Oh... that's a bummer... Well, they will have to implement it in AAA titles soon enough, but honestly it might be another year or two until DX12 is actually a standard for gaming. DX11 adoption took its time too. I hope I'm wrong, though, and things move along much faster. :D

Mantle gave a nice push getting engines ready for DX12, at least a good part of it, namely the low-overhead, multi-GPU, etc. features native to Mantle.

They had a closed beta with 100+ developers, covering a lot of major engines.

So you could say the adoption of DX12 started in early 2014.

Then you'll have the DX11.3 features in DX12 as well.

It will probably be a very swift adoption with great traction, IMO; it will also depend on how hard Microsoft pushes Windows 10 adoption.

And here's me actually trying to upgrade my HD 7850; it's like AMD doesn't want me to have a better card. xD

Just get a used GTX 770 4 GB (which I did)... they are getting cheap now... or take the 960 leap instead. Both blow away the 7850; I'd know, because I have one sitting in my closet.

 

I think I'm holding on to this until Pascal... Either that, or I'll buy a used 980 Ti when Pascal comes out and still be happy... who knows.

Guys, remember that this performance metric was measured with a high-end CPU. With this kind of card, the CPU is going to be more budget-oriented, which means the R7 370 will probably end up a lot less than 20% better.

The problem with stacking VRAM is that it appears developers will have to implement the feature themselves. And how many times have developers actually implemented the features that are supposed to be beneficial? You know, like proper multi-threading.

 

Think more about engines like Unreal, where there are a lot of games using it.

But for technical reasons (PCI-E bandwidth), I think you still have to hold most of the data on all GPUs, at least for game workloads.
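
Rough numbers back that up. Assuming the R9 270's roughly 179 GB/s of local GDDR5 bandwidth versus roughly 15.8 GB/s for a PCIe 3.0 x16 link:

$$\frac{BW_{\text{VRAM}}}{BW_{\text{PCIe}}} \approx \frac{179~\text{GB/s}}{15.8~\text{GB/s}} \approx 11$$

Reading assets out of the other card's memory over the bus would be an order of magnitude slower than reading local VRAM, which is why in practice most data gets duplicated on every GPU.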

Oh wow, it's 20% faster than a year-and-a-half-old GPU. :|

Well, technically the Pitcairn core came out in 2012, so you could consider it faster than a GPU that's a year and a half newer.

Think more about engines like Unreal, where there are a lot of games using it.

But for technical reasons (PCI-E bandwidth), I think you still have to hold most of the data on all GPUs, at least for game workloads.

 

Yeah, I'm aware of that too; only time will tell. Remember, good programmers are like magicians. It's a new feature and there will be issues, but I expect DICE and Crytek to be among the first studios to implement it.

 

There are only four big games lined up for UE4 right now: Kingdom Hearts III (console), Tekken 7 (console), Street Fighter V, and Unreal Tournament. Honestly, I have little faith that Square Enix, Capcom, or Namco Bandai will introduce it. We may not even see it in Unreal Tournament either, but there's a better chance of getting it there. Never mind indies and small studios using that feature.

Oh... that's a bummer... Well, they will have to implement it in AAA titles soon enough, but honestly it might be another year or two until DX12 is actually a standard for gaming. DX11 adoption took its time too. I hope I'm wrong, though, and things move along much faster. :D

I definitely believe DX12 will see quicker adoption due to numerous factors (some of my reasons are a bit speculative):

 

Free Windows 10 upgrade (biggest reason)

 

AMD's proof of concept with Mantle

 

Features that save developers time and possibly shorten code (if any new API makes things easier, devs will jump all over it; people are lazy)

 

There just seems to be a bit more effort coming from all sides: consumers, developers (eh, not so much), and Microsoft (we'll see; I have some faith, but I don't buy the "we care about PC gaming" talk).

 

Granted, I could be wrong, but it's all a waiting game.

Wow, another rebrand.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8 GB Kingston HyperX Fury HDD: WD Caviar Black 1 TB GPU: MSI 750 Ti Twin Frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650

No, the R7 265 was not the same gen as the 7850. It was added to the lineup later. It might be a Tonga, I can't remember... like the 285.

It was added later but still used the older 7850 architecture; the 260 and 260X actually had a newer architecture even though they were released earlier.

If it's 20% faster than a 750 Ti, then it's about as fast as a 270.
