
AMD RX Vega Fire Strike scores appear, FE scores at about GTX 1080 performance

Delicieuxz
1 hour ago, Valentyn said:

It was a very accurate video though. It said Poor Volta(ge), and they clearly tried to cover up the voltage.

As we've seen from overclocked FE tests, they really wanted to cover up the voltage and power draw. It's clearly a sign in the video saying sorry, since your poor power company is going to bend you over if you have Vega.

Dirty joke stealer ;)

 

I think I made it first...... errr


1 minute ago, leadeater said:

Dirty joke stealer ;)

 

I think I made it first...... errr

You think, but I was doing it since the vid appeared and was posted at OcUK. ;):P

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


How will Nvidia respond to RX Vega?


2 minutes ago, Valentyn said:

You think, but I was doing it since the vid appeared and was posted at OcUK. ;):P


 

Tbh don't even know when I made the joke, was a while ago, but it was all on my own at least :). Also what video? Got a link I'd like to see it now.


43 minutes ago, leadeater said:


 

Tbh don't even know when I made the joke, was a while ago, but it was all on my own at least :). Also what video? Got a link I'd like to see it now.

 


 


1 minute ago, Valentyn said:

-snip-

Oh nah I thought you meant there was a video with the voltage joke in it


4 minutes ago, leadeater said:

Oh nah I thought you meant there was a video with the voltage joke in it

Silly man is confused. No worries. Just make some noise :D


 


On 2017-07-25 at 3:10 PM, MageTank said:

I hope these cards are not running at their max clock speed, or else they are in trouble. The Vega pro card was only able to hit 1700 MHz on water according to GN after an undervolt, and these appear to be hitting 1630 MHz in 3DMark. If they are running at their overclocked speeds, it's certainly not that impressive, given my 1070 at its normal overclock hits 20.6k in Firestrike: http://www.3dmark.com/fs/12940802

 

We also have 1080s that hit 25k in Firestrike when overclocked: http://www.3dmark.com/fs/11977531 , http://www.3dmark.com/fs/9845353 , http://www.3dmark.com/fs/8942303

 

As for 1080 Tis, hitting 31k+ is relatively easy: http://www.3dmark.com/fs/12205826 , http://www.3dmark.com/fs/12831070

 

Once you put the 1080 Tis on water though, you start to get results like this: http://www.3dmark.com/fs/12685580 , http://www.3dmark.com/fs/12969308 , http://www.3dmark.com/fs/12268987 . That's 32k-33k on water. Basically, when overclocked, Pascal is able to pull 10-15% higher scores than its stock boost clocks. If this is Vega at its stock clocks, then color me impressed, but if this is overclocked, we may have a problem depending on its price.

Firestrike tends to score better with Nvidia GPU's in my experience.

 

Vega FE's performance is just weird.  Great compute power isn't translating into gaming performance and there are reports many of the new features aren't actually being utilized.  

 

Vega really just feels like a workstation card through and through with somewhat lackluster gaming performance. (Although really, 1080 level performance is still pretty solid.)

 

 

4K // R5 3600 // RTX2080Ti


Just now, sgloux3470 said:

Firestrike tends to score better with Nvidia GPU's in my experience.

 

Vega FE's performance is just weird.  Great compute power isn't translating into gaming performance and there are reports many of the new features aren't actually being utilized.  

 

Vega really just feels like a workstation card through and through with somewhat lackluster gaming performance. (Although really, 1080 level performance is still pretty solid.)

 

 

In my experience, Firestrike in general has been terrible for gauging real-world performance. I mean, look no further than the Iris Pro 580 vs GTX 750. In Firestrike, the GTX 750 absolutely smashes the IP 580 by an order of magnitude, but in relative gaming performance they are nearly identical. So I don't look towards Firestrike as an accurate representation of relative performance. I only pointed out the disparity in numbers when using overclocked Pascal cards vs what was reported on that graph. Pascal overclocks easily. Like, extremely easily. I've personally had over four 1070s, and not a single one of them failed to hit at least 2050 MHz. A few hit 2100, and I have one that does 2228 (the one used in that submitted test, but it was clocked at 2202 at the time). A buddy of mine who does crypto mining also has five 1070s, and most of them hit 2100 (with two being limited to 2050).

 

While stock vs stock gives us an "out of the box" representation of the product, most aftermarket GPUs take advantage of that vast overclock headroom, and it will certainly skew the results. If Vega is clocked near its max (like I suspect it is, given the Vega Pro's 1700 MHz limitation on water, undervolted to improve power limit headroom), it means that once those Pascal cards are overclocked, Vega might not be able to respond in kind by overclocking to match/exceed the competition.

 

I could be wrong, and I certainly hope I am; it just doesn't look that good if this is what they are basing their relative performance metric on.
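The 10-15% overclock-scaling point above is just simple arithmetic. A quick sketch, with illustrative scores (not measurements from any of the linked runs):

```python
def oc_gain_pct(stock_score: float, oc_score: float) -> float:
    """Percentage gain of an overclocked Firestrike score over stock."""
    return (oc_score - stock_score) / stock_score * 100

# Illustrative numbers: a card around 28k stock vs ~32.2k overclocked on water.
print(round(oc_gain_pct(28000, 32200), 1))  # 15.0
```

If Vega ships already near its voltage/power ceiling, that extra 10-15% is headroom it simply may not have.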

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


1 hour ago, sgloux3470 said:

Vega FE's performance is just weird.  Great compute power isn't translating into gaming performance

Just because a card performs well at one task doesn't mean it will perform well at a different one.

 

1 hour ago, sgloux3470 said:

there are reports many of the new features aren't actually being utilized.

I've heard that several new features aren't enabled/utilized many times over the course of the last month or so but when I ask about it I only ever get the answer "tile based rasterization". Do you have any other examples of features not being utilized?


17 hours ago, LAwLz said:

Just because a card performs well at one task doesn't mean it will perform well at a different one.

 

I've heard that several new features aren't enabled/utilized many times over the course of the last month or so but when I ask about it I only ever get the answer "tile based rasterization". Do you have any other examples of features not being utilized?

But such a large disparity is odd.

 

I saw a detailed post about it on Reddit. I can't remember the jargon, but other than tile based rasterization there was stuff to do with fancy voltage regulation (which is why undervolting makes a big difference, because it's supposedly using a "failsafe" voltage which is really high).

 

Tile based rasterization is the biggest one for performance though. 



5 hours ago, sgloux3470 said:

But such a large disparity is odd.

What large disparity?

 

4 hours ago, sgloux3470 said:

Tile based rasterization is the biggest one for performance though. 

What kind of impact do you think tile based rasterization will have? I don't think your expectations will line up with reality, sadly.


22 hours ago, LAwLz said:

Just because a card performs well at one task doesn't mean it will perform well at a different one.

 

I've heard that several new features aren't enabled/utilized many times over the course of the last month or so but when I ask about it I only ever get the answer "tile based rasterization". Do you have any other examples of features not being utilized?

The game changers should be their new geometry pipeline, tile based rasterizer and high bandwidth cache. If I remember right, the first two aren't being used in the FE edition, and the last... well, no idea


8 minutes ago, LAwLz said:

What kind of impact do you think tile based rasterization will have? I don't think your expectations will line up with reality, sadly.

Wasn't it the sole reason maxwell was exceptional compared to kepler?


40 minutes ago, laminutederire said:

Wasn't it the sole reason maxwell was exceptional compared to kepler?

On performance-per-watt? Pretty much yes.

On performance? Not at all.

 

Tile based rasterization is an efficiency increasing technique (that can also help remove a bit of memory bottlenecking in certain situations), not a performance increasing feature.

 

The big difference between tile based rasterization and immediate render mode is that with TBR you can do hidden surface removal before you apply textures. That means you can skip fetching the textures for hidden objects, and that saves on power and it also removes some of the stress on the memory bus.

There are also some other benefits, like being able to do blending without sending data back and forth between the cache and VRAM, but pretty much everything is about being able to do the same work without sending as much data back and forth to VRAM.

 

This does not reduce the amount of work the core has to do though, so in scenarios where the memory bandwidth isn't completely saturated there won't really be any improvement other than the reduced power of not needing to send as much data over the memory bus.

With the massive bandwidth Vega has, I find it hard to believe that memory speed will be a bottleneck in that many situations, especially not at lower resolutions. So that leaves the core as the bottleneck, and TBR doesn't change the amount of work the core has to do.

 

Fetching and discarding data to VRAM is very wasteful in terms of power usage, but not really that wasteful in terms of performance.

 

 

TL;DR: Tile based rasterization is about reducing the amount of info that needs to be sent to and received from VRAM. This in turn reduces the amount of stress on, and power used by, the memory. It does not, however, remove any stress from the core, which will be the main bottleneck for gaming except at really high resolutions.
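The texture-traffic argument above can be put in rough numbers. This is a toy model with a made-up overdraw factor, not a measurement of any real GPU:

```python
def texture_bytes_imr(width, height, bytes_per_texel, overdraw):
    """Texel traffic per frame for an immediate-mode renderer: every shaded
    fragment fetches its texels, including fragments that end up hidden."""
    return width * height * bytes_per_texel * overdraw

def texture_bytes_tbr(width, height, bytes_per_texel):
    """With per-tile hidden surface removal, only visible fragments fetch texels."""
    return width * height * bytes_per_texel

imr = texture_bytes_imr(2560, 1440, 4, overdraw=2.5)  # overdraw factor is hypothetical
tbr = texture_bytes_tbr(2560, 1440, 4)
print(f"texture traffic saved: {1 - tbr / imr:.0%}")  # texture traffic saved: 60%
```

The saving is all memory traffic (hence power and bandwidth pressure); the shading work for visible pixels is unchanged, which is the point being made above.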


37 minutes ago, LAwLz said:

-snip-

A lot of people hate on AMD for perf/watt, i.e. "1080 perf at 1080 Ti consumption, ayymd sucks" and so on. That could give them a bit of good press, at least.


11 hours ago, LAwLz said:

What large disparity?

 

What kind of impact do you think tile based rasterization will have? I don't think your expectations will line up with reality, sadly.

13 TFLOPS of compute versus the Fury X's 8, which should be a ~60% difference in performance, ends up being much less.
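For reference, those paper figures come from 2 × shader count × clock (two FP32 FLOPs per FMA per cycle). The shader counts and clocks below are approximate published specs, and the on-paper gap works out a bit lower than 60%:

```python
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS: 2 FLOPs per FMA, per shader, per cycle."""
    return 2 * shaders * clock_ghz / 1000

vega_fe = peak_tflops(4096, 1.6)   # ~13.1 TFLOPS at ~1600 MHz
fury_x  = peak_tflops(4096, 1.05)  # ~8.6 TFLOPS at 1050 MHz
print(round(vega_fe, 1), round(fury_x, 1), f"{vega_fe / fury_x - 1:.0%}")  # 13.1 8.6 52%
```

Since both chips have 4096 shaders, the whole paper difference is clock speed, which is exactly why it says little about gaming performance.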

 

Also the fact that in some workloads it's comparable to a Quadro card that is essentially a Titan Xp, in some it's on par with a 1080, and in some it's barely better than the 1070. The performance is all over the place depending on what it is running, which indicates issues with the drivers.

 

Tile based rasterization should have huge benefits for power consumption and performance per watt



I remember AMD fanboys saying the only reason NVidia released the Titan Xp was because NVidia was "afraid" of forthcoming Vega.

 


CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


2 minutes ago, VagabondWraith said:

I remember AMD fanboys saying the only reason NVidia released the Titan Xp was because NVidia was "afraid" of forthcoming Vega.

 

r      i      p

-----> Official Unofficial Favorite Keyswitch Type Survey <-----

 OWNER OF THE FASTEST INTEL iGPU ON LTT UNIGINE SUPERPOSITION [lol]

 

GAMING RIG "SNOWBLIND"

CPU i5-13600k | COOLING Corsair H150i Elite Capellix 360mm (White) | MOTHERBOARD Gigabyte Z690 Aero G DDR4 | GPU Gigabyte RTX 3070 Vision OC (White) | RAM 16GB Corsair Vengeance Pro RGB (White) | SSD Samsung 980 Pro 1TB | PSU ASUS STRIX 850W (White) | CASE Phanteks G360a (White) | HEADPHONES Beyerdynamic DT990 Pro | KEYBOARD Zoom75 (KTT Strawberry w/ GMK British Racing Green keycaps) | MOUSE Cooler Master MM711 (White) | MONITOR HP X32 1440p 165hz IPS

 

WORK RIG "OVERPRICED BRICK"

Mac Studio (M2 Ultra / 128GB / 1TB) | HEADPHONES  AirPods Pro 2 | KEYBOARD Logitech MX Mechanical Mini | MOUSE  Logitech MX Master 3S MONITOR 2x Dell 4K 32"

 

SECONDARY RIG "ALCATRAZ"

CPU i7-4770K OC @ 4.3GHz | COOLING Cryorig M9i (review) | MOTHERBOARD ASUS Z87-PRO | GPU Gigabyte 1650 Super Windforce OC | RAM 16GB Crucial Ballistix Sport DDR3 1600 MHz | SSD Samsung 860 Evo 512GB | HDD Toshiba 3TB 7200RPM | PSU EVGA SuperNOVA NEX 750W | CASE NZXT H230 | HEADPHONES Sony WH-1000XM3 | KEYBOARD Corsair STRAFE - Cherry MX Brown | MOUSE Logitech G602 | MONITOR LG 34UM58-P 34" Ultrawide

HOLA NIGHT THEMERS

GET YOUR ASS ON NIGHT THEME

OTHER TECH I OWN:

MacBook Pro 16" [M1 Pro/32GB/1TB] | 2022 Volkswagen GTI | iPhone 14 Pro | Sony a6000 | Apple Watch Series 8 45mm | 2018 MBP 15" | Lenovo Flex 3 [i7-5500U, HD5500 (fastest on the forum), 8GB RAM, 256GB Samsung 840 Evo] | PS5, Xbox One & Nintendo Switch [Home Theater setup] | DJI Phantom 3 Standard | AirPods 2 | Jaybird Freedom (two pairs) & X2 [long story, PM if you want to know why I have 3 pairs of Jaybirds]

 


1 hour ago, sgloux3470 said:

13 tFlops compute versus FuryX's 8, which should be a 60% difference in performance ends up being much less.

That's because TFLOPS is not a good measurement of performance, except in very specific workloads.

 

1 hour ago, sgloux3470 said:

Also the fact that in some workload sit's comparable to a Quado card that is essentially a TitanXp, in some it's on par with a 1080 and some it's barely better than the 1070.  The performance is all over the place depending on what it is running which indicates issues with the drivers.

I don't see how that would indicate an issue with the driver. It might just be that the architecture lends itself well to some particular tasks and not so well to others.

I mean, a Radeon 580 is faster than a GTX 1080 for mining (I think?), but I don't see anyone going around saying "the numbers don't add up. The 580 should be faster than a 1080 for gaming. The numbers are all over the place!".

 

It might be a driver issue, but it might also just be that the architecture has some strengths and weaknesses.

 

1 hour ago, sgloux3470 said:

Tile based rasterization should have huge benefits for power consumption and performance per watt

Yep it most likely will. That's not what you said before though. You said it would be the biggest performance increasing feature that is yet to be enabled.

What kind of performance difference are you expecting from AMD enabling it?


6 hours ago, LAwLz said:

Radeon 580 is faster than a GTX 1080 for mining (I think?), but I don't see anyone going around saying "the numbers doesn't add up. The 580 should be faster than a 1080 for gaming. The numbers are all over the place!".

The GTX 1080 is faster performance-wise and actually uses about the same amount of power; it's just that the 480/580 are much cheaper, so they have a better ROI over a shorter time span, which in the cryptocurrency world is rather important if you look at the recent spikes and crashes. Also, which coin is currently hot to mine matters too.
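The ROI point is easy to sketch. All figures below are hypothetical, purely to show why a cheaper card earning slightly less per day can still pay itself off sooner:

```python
def payback_days(card_price: float, daily_revenue: float, daily_power_cost: float) -> float:
    """Days until a mining card earns back its purchase price."""
    return card_price / (daily_revenue - daily_power_cost)

# Hypothetical prices and revenues, not real market data:
rx_580   = payback_days(250, 2.00, 0.50)  # ~167 days
gtx_1080 = payback_days(550, 2.60, 0.55)  # ~268 days
print(round(rx_580), round(gtx_1080))  # 167 268
```

A shorter payback window matters most when coin prices swing, exactly as described above.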

 



On 7/25/2017 at 0:51 PM, deXxterlab97 said:

It's a bit unfair to compare a reference-design card with aftermarket ones. Also, Founder's Edition is Nvidia naming; AMD just calls it reference (or they don't)

Yeah, it would be unfair if it was AMD aftermarket vs Nvidia FE. But it's AMD Frontier Edition vs Nvidia aftermarket. For those looking for a 1080-ish card for less, AMD looks good if this holds. Disappointed they couldn't compete with the Ti; I was hoping Nvidia was going to just put out something awesome in a couple months, but they'll probably still be conservative.


On 7/27/2017 at 11:02 PM, sgloux3470 said:

But such a large disparity is odd.

 

I saw a detailed post about in on reddit.  I can't remember the jargon, but other than tile based rasterization there was stuff to do with fancy voltage regulation. (which is why undervolting makes a big difference, because it's supposedly using "failsafe" voltage which is really high.)

 

Tile based rasterization is the biggest one for performance though. 

It has less to do with a "failsafe voltage" and more to do with the fact that the card itself has a power/thermal limit that is being hit in its stock configuration. Lowering the voltage not only lowers your thermals, it allows you to raise the clocks without hitting that same power limit. GamersNexus was able to find a balance that somewhat worked (albeit still power limited in the end, even with thermals completely under control) by throwing a CLC on the card and undervolting it to the point where they were just able to push the card a little more due to the slight power headroom gained from the undervolt.

 

I speculated that the Vega card needed to be undervolted, and I called it the very moment I saw the PCPer livestream results. Upon asking them to undervolt the card in their Twitch chat, their rabid "tech expert" fans called me an idiot, since it only makes sense to use more volts to stabilize an overclock, not less. The irony being, we saw this exact issue with the RX 480, and just like the RX 480, undervolting the card not only made it more stable at stock, it improved overclocking headroom, lol.

 

My only question is: if AMD knew they would be using excessive volts to "guarantee" stability on these cards, why in the world would they impose a strict power limit on them at the same time? Surely they knew overclocking would be drastically impacted. Not only that, but to do so on the stock air-cooled version, which is even more crippled by its thermal limitations on top of the aforementioned power limits, is silly. I am not asking them to hand out binned cards, but one would think they would have tested a batch (since almost all of them were retail cards, they didn't supply review samples) to see what the average required voltage for stability is. If what arrived at stock IS that average, then that only leaves more to be said of Vega, because that's pretty bad.
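The undervolting logic follows from dynamic power scaling roughly as P ≈ C·V²·f: under a fixed power cap, a lower voltage leaves quadratic headroom for a higher clock. A rough sketch with illustrative voltages and clocks (real cards also face stability limits this model ignores):

```python
def dynamic_power(c: float, volts: float, clock_mhz: float) -> float:
    """Approximate dynamic power: P ≈ C · V² · f (arbitrary units)."""
    return c * volts**2 * clock_mhz

# Suppose the power limit is hit at a stock 1.20 V / 1600 MHz:
cap = dynamic_power(1.0, 1.20, 1600)

# Undervolt to 1.10 V: the clock that lands on the same power cap.
new_clock = cap / (1.0 * 1.10**2)
print(round(new_clock))  # 1904
```

That ~19% clock headroom from a ~0.1 V undervolt is the same mechanism GamersNexus exploited: less voltage, same power budget, more clock.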



On 7/25/2017 at 1:12 PM, SteveGrabowski0 said:

God another blower on a card with such high power consumption? Is that thing going to be a jet engine like Hawaii reference?

If it's anything like Hawaii, a regular aftermarket cooler won't even do the trick. You'll basically need water cooling if you want reasonable temps and noise. 

 

Source: Hawaii owner. 

Night Fury 2.0:

Spoiler

Intel Core i5-6500 / Cryorig H7 / Gigabyte GA-H170-D3H / Corsair Vengeance LPX 8GB DDR4 @ 2133MHz / EVGA GTX 1070 SC / Fractal Design Define R5 / Adata SP550 240GB / WD Blue 500GB / WD Blue 1TB / EVGA 750GQ 

Daily Drivers:

Spoiler

Google Pixel XL 128GB / Jaybird Bluebuds X3 / Logitech MX Master / Sennheiser HD 598 / 

 

