Vega FE Hybrid Mod - Passing the 1080 but 400W+ Power Draw

Hunter259
2 hours ago, Misanthrope said:

Well, to an extent, but besides the geometry thing there's barely anything they can do to really improve on the gamer version, and that's even more true of actually reducing these ridiculously high power draws.

Binning and undervolting.  AMD's stock voltage settings are based around the crappiest silicon they end up with, because they can't afford to scrap the lower-grade silicon.  My Fury doesn't use anywhere near the 300-ish watts it does out of the box when undervolted by 75mV (in theory it should be around 225W).  I can probably get a bit more out of it, but it started artifacting in Superposition when I was running it with 100mV pulled out.  I might try to overclock it a bit with the undervolt, since it's rock solid at -75.

 

Do I think they'll do this?  No.  I think they know that they'll make more money selling every chip they can, but there is absolutely something that can be done about the ridiculously high power draw.
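For what it's worth, the arithmetic behind an estimate like that is easy to sketch. A minimal Python example, assuming dynamic power scales roughly with the square of voltage at a fixed clock and a hypothetical 1200mV stock voltage (the post doesn't give the real figure):

```python
def undervolt_power(stock_watts, stock_mv, offset_mv):
    # Rough model: at a fixed clock, dynamic power scales with V^2.
    # Real cards also save on leakage and boost behavior, so this is
    # a conservative estimate of the savings, not a measurement.
    new_mv = stock_mv - offset_mv
    return stock_watts * (new_mv / stock_mv) ** 2

# Hypothetical Fury numbers: ~300W stock at an assumed 1200mV.
print(round(undervolt_power(300, 1200, 75)))   # ~264W from V^2 scaling alone
print(round(undervolt_power(300, 1200, 100)))  # ~252W
```

Quadratic scaling alone lands around 264W rather than the quoted 225W, so the larger claimed saving presumably also banks on reduced leakage and boost clocks.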

SFF-ish:  Ryzen 5 1600X, Asrock AB350M Pro4, 16GB Corsair LPX 3200, Sapphire R9 Fury Nitro -75mV, 512gb Plextor Nvme m.2, 512gb Sandisk SATA m.2, Cryorig H7, stuffed into an Inwin 301 with rgb front panel mod.  LG27UD58.

 

Aging Workhorse:  Phenom II X6 1090T Black (4GHz #Yolo), 16GB Corsair XMS 1333, RX 470 Red Devil 4gb (Sold for $330 to Cryptominers), HD6850 1gb, Hilariously overkill Asus Crosshair V, 240gb Sandisk SSD Plus, 4TB's worth of mechanical drives, and a bunch of water/glycol.  Coming soon:  Bykski CPU block, whatever cheap Polaris 10 GPU I can get once miners start unloading them.

 

MintyFreshMedia:  Thinkserver TS130 with i3-3220, 4gb ecc ram, 120GB Toshiba/OCZ SSD booting Linux Mint XFCE, 2TB Hitachi Ultrastar.  In Progress:  3D printed drive mounts, 4 2TB ultrastars in RAID 5.


2 minutes ago, Phate.exe said:

Binning and undervolting.  AMD's stock voltage settings are based around the crappiest silicon they end up with, because they can't afford to scrap the lower-grade silicon.  My Fury doesn't use anywhere near the 300-ish watts it does out of the box when undervolted by 75mV (in theory it should be around 225W).  I can probably get a bit more out of it, but it started artifacting in Superposition when I was running it with 100mV pulled out.  I might try to overclock it a bit with the undervolt, since it's rock solid at -75.

 

Do I think they'll do this?  No.  I think they know that they'll make more money selling every chip they can, but there is absolutely something that can be done about the ridiculously high power draw.

Again, like I said, performance is measured as fps per price, and price is the upfront cost plus the electricity cost. So yes, AMD could undervolt the card, and that reduces power consumption, but it wouldn't fix the problem of lower performance, which requires getting the GPU to 1080 Ti level at equal or lower power consumption.
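That metric is easy to make concrete. A hedged sketch of fps per total cost of ownership, where every usage and electricity number is purely illustrative:

```python
def fps_per_dollar(fps, card_price, watts, hours_per_day=4,
                   years=3, cents_per_kwh=12):
    # Performance as fps per price, where price = upfront cost plus
    # lifetime electricity. The usage hours, lifespan, and electricity
    # rate are illustrative assumptions, not real-world figures.
    kwh = watts / 1000 * hours_per_day * 365 * years
    total_cost = card_price + kwh * cents_per_kwh / 100
    return fps / total_cost

# Two hypothetical cards with identical fps and sticker price,
# differing only in power draw:
a = fps_per_dollar(100, 500, 180)
b = fps_per_dollar(100, 500, 300)
print(a > b)  # the lower-power card wins on this metric
```

Under these assumptions the 120W gap adds roughly $63 of electricity over three years, which is why equal-fps, equal-price cards still aren't equal buys on this view.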


3 hours ago, leadeater said:

Yeah, personally I have not been expecting a card faster than the 1080 Ti, ever. Something between the 1080 and 1080 Ti would be a reasonable expectation.

It looks right on target for that, actually. Just at a higher power draw than Nvidia, because the current Nvidia uArch is really impressive.

 

I'm most curious to see the performance differentials on the different platforms: Z270, X299, AM4 & X399. The days of testing just a K SKU chip are done and that is going to matter going forward.


14 minutes ago, bomerr said:

Again, like I said, performance is measured as fps per price, and price is the upfront cost plus the electricity cost. So yes, AMD could undervolt the card, and that reduces power consumption, but it wouldn't fix the problem of lower performance, which requires getting the GPU to 1080 Ti level at equal or lower power consumption.

Not everyone cares about price and/or power draw, so the definition of performance changes depending on the individual. Mostly, people refer to performance as raw FPS or workload speed, not in relation to any other figure.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


35 minutes ago, mr moose said:

Not everyone cares about price and/or power draw, so the definition of performance changes depending on the individual. Mostly, people refer to performance as raw FPS or workload speed, not in relation to any other figure.

We're going back in circles. Yes, you as an individual may not care, but overall people do care. Imagine both AMD and Nvidia have a GPU with the exact same fps and the exact same cost but different power consumption; which one will sell better? Without being equal or better in performance to Nvidia, AMD will lose market share and/or revenue, which hurts their ability to keep producing video cards in the future. Get it?


7 minutes ago, bomerr said:

We're going back in circles. Yes, you as an individual may not care, but overall people do care. Imagine both AMD and Nvidia have a GPU with the exact same performance and the exact same cost but different power consumption; which one will sell better? Without being equal or better in performance to Nvidia, AMD will lose market share and/or revenue, which hurts their ability to keep producing video cards in the future. Get it?

Not just me as an individual; you rarely see people mention power draw when recommending a GPU. In fact, the only time it really gets raised is in threads like this, when it's mostly empty rhetoric or someone is trying hard to defend a statement. Power draw is not what's going to kill AMD, if it dies.



6 minutes ago, mr moose said:

Not just me as an individual; you rarely see people mention power draw when recommending a GPU. In fact, the only time it really gets raised is in threads like this, when it's mostly empty rhetoric or someone is trying hard to defend a statement.

What about all the people that said get Maxwell over the R9 series because they were more efficient? 

 

 

6 minutes ago, mr moose said:

 

Power draw is not what's going to kill AMD, if it dies.

 

Again, hold all variables constant besides power consumption: same cost, same fps, same render time, but different power consumption. Which product will win? You aren't thinking (or are choosing not to think) strategically.


4 minutes ago, bomerr said:

What about all the people that said get Maxwell over the R9 series because they were more efficient? 

 

 

 

Again, hold all variables constant besides power consumption: same cost, same fps, same render time, but different power consumption. Which product will win? You aren't thinking (or are choosing not to think) strategically.

 

Definitely not thinking 9_9

 

No one really cares as much as you do. 



Hey, I don't mind more power consumption as long as the performance is there. From preliminary results, we've seen a potential future gaming card using 400 fucking watts from the wall against a product that uses half that. Not exactly a ringing endorsement. 

 

Now mind you, that's a professional card. So I won't delve into stating that Vega will be the same. But if the FE edition is a sign of things to come... oh boy.

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


2 minutes ago, VagabondWraith said:

Hey, I don't mind more power consumption as long as the performance is there. From preliminary results, we've seen a potential future gaming card using 400 fucking watts from the wall against a product that uses half that. Not exactly a ringing endorsement. 

 

Now mind you, that's a professional card. So I won't delve into stating that Vega will be the same. But if the FE edition is a sign of things to come... oh boy.

If the performance isn't there, you're not likely to buy the card anyway, right?



21 minutes ago, VagabondWraith said:

Hey, I don't mind more power consumption as long as the performance is there. From preliminary results, we've seen a potential future gaming card using 400 fucking watts from the wall against a product that uses half that. Not exactly a ringing endorsement. 

 

Now mind you, that's a professional card. So I won't delve into stating that Vega will be the same. But if the FE edition is a sign of things to come... oh boy.

Double? I know you didn't say double, but damn, I see people say it a lot.

[Chart: amd-vega-fe-power-v-thermal_tixp.png]

 

[Chart: amd-vega-fe-power-firestrike.png]

 

The OC result is quite shocking and fits that sort of narrative, but the card won't do that unless you make it.


3 minutes ago, leadeater said:

Double? I know you didn't say double, but damn, I see people say it a lot.

[Chart: amd-vega-fe-power-v-thermal_tixp.png]

 

[Chart: amd-vega-fe-power-firestrike.png]

Well, I was more so mentioning the 1080, since that's where performance is relative right now. My 1080 Ti OC'd to 2050 core / 12,400 mem will pull 320 watts from the wall by itself. The thing is, though, it has the performance to match. A 1080 will never come close to 300, let alone 400.



11 minutes ago, mr moose said:

If the performance isn't there, you're not likely to buy the card anyway, right?

Correct. 



4 minutes ago, VagabondWraith said:

Well, I was more so mentioning the 1080, since that's where performance is relative right now. My 1080 Ti OC'd to 2050 core / 12,400 mem will pull 320 watts from the wall by itself. The thing is, though, it has the performance to match. A 1080 will never come close to 300, let alone 400.

Pity the 1080 isn't in the graph, but 320W fits in line with the other cards shown.

 

Edit:

wow epic grammar and half finished sentence fail, fixed


1 hour ago, mr moose said:

 

Definitely not thinking 9_9

 

No one really cares as much as you do. 

I'm sure their executive officers, board of directors and consultants care more than me. 


56 minutes ago, VagabondWraith said:

Hey, I don't mind more power consumption as long as the performance is there. From preliminary results, we've seen a potential future gaming card using 400 fucking watts from the wall against a product that uses half that. Not exactly a ringing endorsement. 

 

Now mind you, that's a professional card. So I won't delve into stating that Vega will be the same. But if the FE edition is a sign of things to come... oh boy.

Professional is mostly just marketing segmentation. Go game on a Quadro and it'll provide roughly the same performance as a GeForce of similar specs (cores, clocks, etc.).

 

44 minutes ago, VagabondWraith said:

Well, I was more so mentioning the 1080, since that's where performance is relative right now. My 1080 Ti OC'd to 2050 core / 12,400 mem will pull 320 watts from the wall by itself. The thing is, though, it has the performance to match. A 1080 will never come close to 300, let alone 400.

My 1080 Ti has the exact same performance: 2050 core, 12,400 memory, and it pulls a max of 333W.


8 minutes ago, bomerr said:

I'm sure their executive officers, board of directors and consultants care more than me. 

And they don't even have to pay for their cards. They must be real strategic about it.



7 hours ago, leadeater said:

In the case of professional workloads and applications, this is not how it works out. There can be as much as an 80% difference in performance between the Quadro drivers and GeForce drivers, and some things outright will not work and cause errors. Gaming drivers are minefields of hacks and tweaks to work around poor programming or issues in game engines; I'd link a video from a GPU driver developer I saw a while ago, but I can't find it.

[Citation Needed], and it needs to be fairly recent too. None of that "here is a special driver for AutoCAD that is 7 years old and hasn't been updated since, because they realized what an idiotic thing it was" crap. I have never seen any difference, so this 80% difference you have seen is quite hard to believe.

 

7 hours ago, leadeater said:

That is still making an informed decision on a purchase. You made an assumption, based on input information, about how a product will perform, but without looking at an actual review you don't know; you couldn't state the exact performance of the product. There have, however, been comments that gaming Vega is going to be exactly the same as Vega FE, which is different, and that was my point, which you understood; see your quoted comment above.

I could have figured out the performance without looking at a review, just as I could have figured out how the 1700X would perform by looking at 1700 reviews.

We already know what the core performance is like. We already know how much heat it kicks out and when it will throttle. We already know how much power it uses. We already know how fast the memory is. I find it ridiculous to say that we can't know how gaming Vega will perform. Our guesses will never be 100% correct (hell, not even benchmarks are 100% correct, since there is a margin of error), but making a rough estimate should be piss easy, and accurate.
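A rough estimate of that kind is just clock scaling on known specs. A sketch, with entirely hypothetical numbers, and a damping factor that is purely an assumption since games rarely scale one-to-one with core clock:

```python
def scaled_estimate(known_fps, known_clock_mhz, target_clock_mhz,
                    scaling=0.8):
    # Naive estimate for a chip with the same core config: scale fps by
    # the clock ratio, damped by a factor < 1 because real workloads
    # never scale perfectly with clock. Purely illustrative.
    ratio = target_clock_mhz / known_clock_mhz
    return known_fps * (1 + (ratio - 1) * scaling)

# Hypothetical: a card benched at 60 fps at 1440MHz, projected to 1600MHz.
print(round(scaled_estimate(60, 1440, 1600), 1))
```

The point of the damping factor is exactly the margin-of-error caveat above: the guess won't be exact, but with the same core, memory, and thermals known, it should land close.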

 

7 hours ago, leadeater said:

Drivers, along with firmware, play a huge part in performance. It's not entirely tied to drivers, but they play a very big part. The different performance on different hardware is due to the hardware, yes; I mean, duh, why even say that? The performance scales across the number of stream processors, frequency, etc. with the driver. That was the point: the Fiji driver, on hardware with the same number of Compute Units, Stream Processors, TMUs, and ROPs and similar memory bandwidth, performs pretty much equivalently to a product with the same makeup.

 

NCU is still GCN-based, so that is no surprise. If some features are locked out by either drivers or firmware and the frequency is the same, the performance should be the same; that is why Gamers Nexus did the test, to see if that held true, and it did. So either the architecture improvements didn't work or are not functioning, and the only performance increase is coming from the increase in clock rate, which right now is the case.

So what you're saying is that you're surprised a product with roughly the same architecture, the same number of cores, TMUs, and ROPs, the same clock speed, and similar memory bandwidth performs the same, and you blame it all on drivers? Come on...

Architecture improvements might not always be about increasing IPC. A lot of the changes AMD has made might be to increase clock speeds without harming IPC or power/thermals too much. Or perhaps this is the reason Vega has been so slow to come out: they have tried to implement a lot of promising things which, once implemented, have not worked out. That doesn't sound unreasonable either.

 

Anyway, this whole "it's just a driver thing" argument is really starting to piss me off because:

1) It gives people hope based on nothing. There is zero evidence for it.

2) It builds a ton of hype and encourages fanboys to be obnoxious.

3) People are using it as a way to discard the quite large amount of evidence we have that Vega will perform a certain way. Basically, they are discarding real evidence because they think more highly of speculation.

4) It is an unfalsifiable hypothesis, so while there is no evidence for it, nothing can disprove it either. To me it sounds just like the garbage cults and religions spread: "Just have faith and things will work out."

 

7 hours ago, leadeater said:

GCN is a common and scalable architecture, it wouldn't take much to get a Fiji driver working on Vega without implementing any new features or performance optimizations etc. Edit: It was even stated in a video by AMD that they were using a Fiji driver at that show, I'll be lazy and let someone else link that if they want to.

Don't care what AMD's marketing team says. If the driver works on Vega, then it is by definition a Vega driver.

If you look at Mesa (which is in large part developed by AMD themselves), you will see that implementing Vega support was no small task, and that's without looking at all the binary blobs needed to make Vega function. The Vega driver will obviously not be written from scratch, but no GPU driver is. Saying that the current Vega driver is a "Fiji driver" is about as correct as saying the current Fiji driver is a Pitcairn driver, and I don't see anyone using that as an excuse for why the Fury X isn't matching a 1080 Ti in performance, right?

 

 

 

I would like to point something out to the people who keep bringing up tile-based rasterization and hoping it will be the saving grace. It will not. Tile-based rasterization will not have a significant impact on performance.

How do I know that? Because Nvidia has already implemented it, so we can get clues from that.

It will do three things:

1) A slight increase in performance, but we're not talking double-digit percentages here. The increase will most likely only show up at high resolutions like 4K, and it will be in the single digits.

2) It will increase efficiency. It is first and foremost an efficiency feature, not a performance feature.

3) If not implemented correctly, there will be compatibility issues with lots of existing software. That's probably why it is disabled on Vega FE.


55 minutes ago, LAwLz said:

[Citation Needed], and it needs to be fairly recent too. None of that "here is a special driver for AutoCAD that is 7 years old and hasn't been updated since, because they realized what an idiotic thing it was" crap. I have never seen any difference, so this 80% difference you have seen is quite hard to believe.

That is rather hard, since it's not done very often, but here is the Titan X (Maxwell) versus less powerful Quadro cards, the Titan X having 2 to 7 times more compute power. This is only SolidWorks, but the differences between GeForce and Quadro drivers apply to more than this. The Titan X is between 0% and 66% slower; remember, even when matching performance it's doing so with way more raw compute power.

 

Quote

The reason behind this is not easy to determine, although our best guess is that it has to do with the firmware and driver optimizations used with Quadro cards. Either way, if all you have is a GeForce card then Solidworks should function OK - although you will not be able to use features like RealView - but we would very highly recommend upgrading to an appropriate Quadro card as soon as possible. Especially if you use Solidworks professionally, the extra performance (not to mention simply using a supported card) means that you should almost never consider using a GeForce card instead of a Quadro card.

 

https://www.pugetsystems.com/labs/articles/Why-you-should-use-a-Quadro-video-card-in-Solidworks-2016-751/

 

Not once did I say to wait for drivers, or say the drivers would improve anything at all; you are putting that on me. What I have said is to wait for real products before passing final judgement, and to remember, in your assessment of Vega FE, that some features may not be functioning, which may be due to drivers or firmware. I don't care which it is; I only care that the Vega gaming products are not out yet, so it is premature to make any conclusive statements about them.

 

Quote

Finally, along with outlining their new packed math formats, AMD is also confirming, at a high level, that the Vega NCU is optimized for both higher clockspeeds and a higher IPC. It goes without saying that both of these are very important to overall GPU performance, and it’s an area where, very broadly speaking, AMD hasn’t compared to NVIDIA too favorably. The devil is in the details, of course, but a higher clockspeed alone would go a long way towards improving AMD’s performance. And as AMD’s IPC has been relatively stagnant for some time here, improving it would help AMD put their relatively sizable lead in total ALUs to good use. AMD has always had a good deal more ALUs than a comparable NVIDIA chip, but getting those ALUs to all do useful work outside of corner cases has always been difficult.

 

That said, I do think it’s important not to read too much into this on the last point, especially as AMD has drawn this slide. It’s fairly muddled whether “higher IPC” means a general increase in IPC, or if AMD is counting their packed math formats as the aforementioned IPC gain.

http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser/2

 

AMD is claiming higher IPC; either that's not actually true, it's misrepresented, or it's confined to specific use cases. That is why I brought the test up.

 

Quote

As someone who analyzes GPUs for a living, one of the more vexing things in my life has been NVIDIA’s Maxwell architecture. The company’s 28nm refresh offered a huge performance-per-watt increase for only a modest die size increase, essentially allowing NVIDIA to offer a full generation’s performance improvement without a corresponding manufacturing improvement. We’ve had architectural updates on the same node before, but never anything quite like Maxwell.

http://www.anandtech.com/show/10536/nvidia-maxwell-tile-rasterization-analysis

 

I'll let you think about why that feature could be good, and about what a lot of people are pointing to when they say Vega FE and the Vega architecture itself are bad.

 

As for the rest of it, I can't be bothered debating it right now, but you did miss my point, so there's not much reason to continue. You're just picking specific points to argue and ignoring the narrative of the post, which is the important part.


5 hours ago, Phate.exe said:

Binning and undervolting.  AMD's stock voltage settings are based around the crappiest silicon they end up with, because they can't afford to scrap the lower-grade silicon.  My Fury doesn't use anywhere near the 300-ish watts it does out of the box when undervolted by 75mV (in theory it should be around 225W).  I can probably get a bit more out of it, but it started artifacting in Superposition when I was running it with 100mV pulled out.  I might try to overclock it a bit with the undervolt, since it's rock solid at -75.

 

Do I think they'll do this?  No.  I think they know that they'll make more money selling every chip they can, but there is absolutely something that can be done about the ridiculously high power draw.

The Fury Nano did this: performance noticeably and greatly suffered. So is it either 1080-level performance (a bit above, a bit below) at 150 extra watts, or similar wattage that drops the card to 1070 performance?

Both situations are clear failures imho, after this long.

-------

Current Rig

-------


@leadeater The AMD folks on the various forums are apparently getting a little annoyed about the "Fiji drivers" bit, though given that the current Vega FE gaming-mode drivers are, in fact, a branch of the Fiji drivers, it is a tad funny. However, the current drivers are stable and work, so no one can complain about that.

However, the 3DMark scores that have shown up look like around a 15% uplift at the same clock speed. (RX Vega will have a higher core clock than Vega FE.) Beyond that, it's all speculation. AMD needs it to slot in between the 1080 and 1080 Ti to retail the top SKU at $599. It's a matter of getting it under the thermal limits. The 64 CU GPU is more than capable of it; it's just a question of throughput.


18 minutes ago, Taf the Ghost said:

@leadeater The AMD folks on the various forums are apparently getting a little annoyed about the "Fiji drivers" bit, though given that the current Vega FE gaming-mode drivers are, in fact, a branch of the Fiji drivers, it is a tad funny. However, the current drivers are stable and work, so no one can complain about that.

However, the 3DMark scores that have shown up look like around a 15% uplift at the same clock speed. (RX Vega will have a higher core clock than Vega FE.) Beyond that, it's all speculation. AMD needs it to slot in between the 1080 and 1080 Ti to retail the top SKU at $599. It's a matter of getting it under the thermal limits. The 64 CU GPU is more than capable of it; it's just a question of throughput.

There are also three groups of people, and all of them are getting annoyed at each other, with the exception of one group who is annoyed at both. There is the AMD hype group, annoying for obvious reasons, but then there is the anti-hype group, who seem oblivious to the fact that they are just as annoying, if not more so. The third group just wants to discuss a product, is interested in the technology, and wishes the other two groups would leave us alone.

Being in the middle makes it almost impossible to actually look like you are, since responding to either group makes you look like, or get accused of being, the opposite side.

If you're part of the anti-hype group, for the love of all things, say your piece, move on, hold your "I told you so" card, and then jam it in everyone's faces at launch. That can be your reward for being both correct and not annoying.

If you're part of the hype group, for the love of all things, tone it down so the anti-hypers don't rush in and derail everything.

P.S. The LTT forum is the only forum I use; I never look at anything else, especially reddit. If I have quoted or looked at another one, it's from a direct Google search or a link in a post here. If other forums are annoying you, just remember I'm not seeing those.


10 hours ago, LAwLz said:

Your analogy doesn't make any sense though because:

1) I am not claiming to have any knowledge of how the card will perform. The people I am referring to as AMD fanboys however, do. They (for example a lot of people on the /r/AMD subreddit) are convinced that there will be big improvements with driver updates and all I am saying is that we do not know.

2) I did not say product X is terrible for purpose Y.

3) All I did was point out that saying that an imaginary future update will do X or Y is an unfalsifiable hypothesis. Nobody can argue against the statement that a future update will make my 7850 as powerful as a 1080 Ti. Nobody can give me any evidence or fact that will disprove that statement, nor can anyone disprove the performance gains of this imaginary Vega update.

That's why I think it is a silly argument to begin with. It's like talking to religious people. Of course it's fair to speculate, but when you start claiming things as fact and try to persuade people to spend their money on brand X over brand Y based on speculation that doesn't have a solid foundation then that rubs me the wrong way.

(and before you respond please note that I did not specify anyone in particular when I made the comment you were referring to. I can't control whether or not someone associates themselves with the term "AMD fanboy")

 

 

Not sure if this is aimed at me or not but I'll respond anyway.

If the core is the same, and the memory is roughly the same, then you can make a very safe bet that it will perform about the same. Of course you can not say with 100% certainty that will be the case, but it will be with 95% certainty. If I had those odds at the lottery then I'd play all day every day, and live the good life.

Also, the same has been true for Radeon vs FirePro as well, so it's not just GeForce vs Quadro. It's been this way for as long as I can remember (which is a statement I can make very lightly since I generally have really shitty memory).

It's not an assumption, since assumption implies that there is no proof. In this case we have years of history and countless examples to base our theories on.

 

You cannot declare it a fact, but you can make that assumption with the utmost confidence, since you will most likely be correct.

 

Well, that's not really a sensible position to hold. Are you going to say that my decision to buy a Ryzen chip was not based on facts (such as benchmarks) because while I had seen reviews showing great performance, the chips those reviewers used were not the exact one I ended up with? They were merely similar.

Would you say that I can't say it's a fact the 1700 will perform worse than the 1700X out of the box? Since they are the same chip but with different clock speeds I think it is safe to say the 1700X will perform better out of the box (everything else being equal).

Of course there is the possibility that gaming Vega will not use the same core as Vega Fe, but if it does then "guestimates" can and will most likely be very accurate.

 

"Fiji drivers giving Fiji levels of performance" implies (or rather assumes) that performance is entirely tied to drivers and that hardware is quite irrelevant. That's silly and you know it. The same driver on different hardware can have wildly different results.

Vega running a "Fiji gaming driver" (do you understand how silly that entire concept is?) and ending up at Fiji-tier performance, once you adjust for things like the Vega card's clock speeds, might just be a coincidence.

 

Drivers do not work that way. Drivers translate system calls into commands tailored for the hardware. If the Vega driver were translating "gaming system calls" into a structure intended for Fiji, things would be crashing left and right. Try installing a one-year-old driver package from AMD on a system with a Vega FE GPU and see how well that works. That is essentially what you are saying is happening when you strongly imply that Vega is using "Fiji drivers for gaming".

That's just not how drivers work. Period.

If the driver works with Vega then it's a Vega driver. It is not a Fiji driver, because that simply would not work; it would crash. Different hardware architectures need different drivers, and you cannot interchange them.
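To illustrate the point about drivers being bound to specific hardware, here's a minimal sketch (hypothetical device IDs and command formats, not AMD's actual driver code): a driver binds to a GPU by its device ID and emits architecture-specific command packets, so a build with no entry for Vega's ID simply refuses to load rather than "running Fiji code paths" on it.

```python
# Hypothetical device IDs for illustration only.
FIJI_ID = 0x7300
VEGA_ID = 0x687F

# A driver knows how to emit command packets only for the
# architectures it was written for.
COMMAND_EMITTERS = {
    FIJI_ID: lambda draw: f"GCN3 packet for {draw}",
    # A Vega-aware driver needs its own entry here; without it,
    # binding to the hardware fails outright.
}

def bind_driver(device_id):
    try:
        return COMMAND_EMITTERS[device_id]
    except KeyError:
        raise RuntimeError(f"no driver support for device {device_id:#x}")

emit = bind_driver(FIJI_ID)
print(emit("draw_call_1"))      # works: Fiji is a known device

try:
    bind_driver(VEGA_ID)        # fails: no Vega support compiled in
except RuntimeError as e:
    print(e)
```

The real mechanism is far more involved (firmware loading, command processors, shader compilers), but the gist is the same: an unrecognized device doesn't get degraded support, it gets none.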

Vega is still GCN, and all GCN cards support the same base instructions, which means it wouldn't crash left and right, as long as AMD modified the driver a bit.

The thing is that Vega has new instructions (according to AMD), but none of the tests made so far show that they are working. Vega should be much faster at tessellation due to the extra geometry output, so it would be faster per clock than Fiji in Unigine Heaven, for example.

Link to comment
Share on other sites

Link to post
Share on other sites

4 hours ago, LAwLz said:

If you look at Mesa (which is in large part developed by AMD themselves) you will see that implementing Vega support was no small task, and that's without even counting the binary blobs needed to make Vega function. The Vega driver will obviously not be written from scratch, but no GPU driver is. Saying that the current Vega driver is a "Fiji driver" is about as correct as saying the current Fiji driver is a "Pitcairn driver", and I don't see anyone using that as an excuse for why the Fury X isn't matching a 1080 Ti in performance, right?

The FE driver isn't the Fiji driver. The driver shown earlier in the year, with the better gaming benchmarks, was a Fiji driver jury-rigged to work on Vega.


3 hours ago, leadeater said:

That is rather hard since it's not done very often, but here is a Titan X (Maxwell) versus less powerful Quadro cards, the Titan X having 2 to 7 times more compute power. This is only SolidWorks, but the difference between GTX and Quadro drivers applies to more than this. The Titan X is between 0% and 66% slower; remember, even when matching the performance, it's doing so with far more raw compute power.

1) That is not a proper comparison from which you can conclude that the driver is responsible for the performance difference. When you compare things you need to change as few variables as possible; you can't change both the software and the hardware at once. Find me a Quadro card with the same core as a GeForce card, and then we can test things.

2) "Raw compute power", if you're referring to FLOPS, is not an indicator of performance outside of very specific workloads. You cannot just look at the FLOPS and go "this card has 7 times more compute power, therefore it will perform 7 times better".

3) If you look at the comments, people are posting instructions for importing the SolidWorks profile using Nvidia Inspector, which significantly boosts performance on the GeForce cards. Nvidia tends to artificially disable things to segment their cards; just take 10-bit color as another example. As far as I am aware, this has never happened the other way around for GeForce/Radeon features ported to Quadro/FirePro cards (that is, Radeon getting specific features enabled which are artificially locked out on FirePro, with the exception of overclocking).

4) As you can see, the profiles are not card-specific. Feel free to correct me if I am wrong here, but I am fairly sure that's because the application profiles run on top of the driver, before the system calls reach the GPU.
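To put some rough numbers on point 2, here's a back-of-the-envelope FLOPS comparison using public spec-sheet figures (theoretical FP32 throughput = shaders × clock × 2, counting each FMA as two operations; the 1080's boost clock is approximate):

```python
# Theoretical FP32 throughput in TFLOPS: shaders * clock (GHz) * 2 ops per FMA.
def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

fury_x   = tflops(4096, 1.05)    # R9 Fury X: 4096 shaders @ 1050 MHz
gtx_1080 = tflops(2560, 1.733)   # GTX 1080: 2560 shaders @ ~1733 MHz boost

print(f"Fury X:   {fury_x:.1f} TFLOPS")
print(f"GTX 1080: {gtx_1080:.1f} TFLOPS")
# Near-identical theoretical FLOPS, yet the 1080 is far faster in most
# games -- raw compute alone does not predict gaming performance.
```

Both land around 8.6-8.9 TFLOPS, which is exactly why spec-sheet compute numbers tell you so little about real-world results.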

 

I believe point 4 is why AMD claims that Vega is running a "Fiji driver": it uses profiles for games which were developed on Fiji and other cards. The reason this works at all is that profiles do not need to be custom-tailored for each GPU they can be used on. I am sure there are game profiles originally developed on Cypress, Cayman, Pitcairn and so on in the latest AMD drivers for Polaris. That does not mean the 580 runs on a Cypress driver though, right?
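A toy sketch of that idea (purely illustrative structure, not AMD's actual driver internals): a profile is matched by executable name and toggles driver-level behavior before any hardware-specific code runs, which is why the same entry can apply to Pitcairn, Fiji, or Vega alike.

```python
# Hypothetical game profiles keyed by executable name.
# Nothing in a profile refers to a specific GPU architecture.
GAME_PROFILES = {
    "witcher3.exe": {"tessellation_limit": 16, "shader_cache": True},
    "doom.exe":     {"shader_cache": True},
}

def driver_settings(exe_name):
    """Resolve effective driver settings for a running game."""
    settings = {"tessellation_limit": None, "shader_cache": False}  # defaults
    settings.update(GAME_PROFILES.get(exe_name, {}))
    return settings

# The same profile lookup applies no matter which GPU is installed.
print(driver_settings("witcher3.exe"))
print(driver_settings("unknown.exe"))   # falls back to defaults
```

Since the lookup happens above the hardware layer, carrying old profiles forward to a new GPU is routine, not evidence of running an old driver.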

 

3 hours ago, leadeater said:

Not once did I say wait for drivers, or say the drivers would improve anything at all; you are putting that on me. What I have said is wait for real products before passing final judgement, and remember in your assessment of Vega FE that some features may not be functioning, whether due to drivers or firmware. I don't care which it is; I only care that the Vega gaming products are not out yet, so it is premature to make any conclusive statements about them.

No you did not, but you responded to my post where I was talking about people who do.

I totally agree that we should wait for reviews to come out before making up our minds about Vega. The people I have issues with are those who are already coming up with excuses for why Vega will be better than the evidence suggests. We have a fairly solid indicator of what it will be like, yet people are making up ridiculous excuses and trying to build hype for it.

Just look at the number of people on here who parrot "tile-based rasterization" without having the first clue what it is or what effects it will have, yet think it will be the saving grace of Vega.

 

4 hours ago, leadeater said:

http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser/2

 

AMD is claiming higher IPC, so either that's not actually true, it is misrepresented, or it is confined to specific use cases. That is why I brought the test up.

Just vague marketing stuff. I wouldn't put too much faith in it being true.

 

 

3 hours ago, leadeater said:

http://www.anandtech.com/show/10536/nvidia-maxwell-tile-rasterization-analysis

 

I'll let you think about why that feature could be good and what a lot of people are pointing to to say Vega FE and Vega architecture itself is bad.

It is good, but like I said, people are expecting it to do miracles, and it most likely won't. It might have a small effect on 4K gaming, and it will make the GPU more efficient, but don't expect anything else. I've seen several people on this forum make wild claims that it will bring a significant performance increase once enabled. I have yet to see any evidence whatsoever that supports that claim. I have, however, seen plenty of evidence, mostly based on Maxwell, that it will not be a performance-increasing feature. It is first and foremost an efficiency feature.
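For a rough sense of why it's an efficiency feature rather than a performance feature, consider framebuffer traffic (toy numbers and an assumed overdraw factor, not measurements of any real GPU): with overdraw N, immediate-mode rasterization writes each framebuffer pixel roughly N times to DRAM, while a binning rasterizer resolves each tile in on-chip cache and flushes each pixel once.

```python
# Approximate DRAM framebuffer writes per frame.
# Immediate mode: every overdrawn fragment hits memory.
# Tiled/binned:   the tile lives in on-chip cache; one flush per pixel.
def dram_writes(width, height, overdraw, tiled):
    pixels = width * height
    return pixels if tiled else pixels * overdraw

w, h, overdraw = 3840, 2160, 3   # 4K frame, 3x overdraw (assumed)
immediate = dram_writes(w, h, overdraw, tiled=False)
binned    = dram_writes(w, h, overdraw, tiled=True)

print(f"immediate: {immediate/1e6:.1f}M writes, tiled: {binned/1e6:.1f}M writes")
# Bandwidth is saved, but the same number of fragments still gets shaded --
# hence efficiency gains rather than raw performance gains.
```

Saved bandwidth only turns into frames per second when a workload is actually bandwidth-bound, which is why the Maxwell evidence shows mostly power savings.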

 

4 hours ago, leadeater said:

As for the rest of it I can't be bothered debating it right now, but you did miss my point so not much reason to continue. You're just picking specific points to argue and ignoring the narrative of the post which is the important part.

Of course I am picking specific points, because those are the points I have been arguing against ever since I first posted in this thread. It is those specific claims that I have a problem with people parroting even though they lack even a basic understanding of what they are talking about.

 

 

3 hours ago, Taf the Ghost said:

Given the fact that the current Vega FE gaming-mode drivers are, in fact, a branch of the Fiji drivers, it is a tad funny.

Yep, and the Fiji drivers are in fact a "branch of the Cypress drivers".

 

 

 

3 hours ago, cj09beira said:

Vega is still GCN, and all GCN cards support the same base instructions, which means it wouldn't crash left and right, as long as AMD modified the driver a bit.

The thing is that Vega has new instructions (according to AMD), but none of the tests made so far show that they are working. Vega should be much faster at tessellation due to the extra geometry output, so it would be faster per clock than Fiji in Unigine Heaven, for example.

"Modified it a bit" is an understatement. The Mesa project had roughly 150 patches submitted just to get Vega working at all (on top of the binary blobs AMD submitted). At what point do you say something is a Fiji driver vs a Vega driver? By definition, as soon as it works to some degree it is a Vega driver. AMD will not rewrite their entire driver stack just for Vega, so if your definition is some arbitrary threshold of newly written code for this specific GPU, you might as well say Vega will never have a driver released.

 

Or think of it this way: AMD has been showing working Vega drivers since late last year, which means they have probably been working on them for at least 8 months (and that's assuming they threw together a working driver in about a week). Do you really think they spent over 8 months getting the driver to where it is now, and then expect them to make huge improvements in just one extra month?

The drivers we got today will most likely not be very different from what we will see used in Vega reviews in a month or two.

 

Got any examples of these new instructions?

Is it better at tessellation because of much beefier tessellation units, or because of new instructions? That matters a lot when determining what a future update might improve.

