
AMD Releasing Public Mantle SDK This Year, Encourages Nvidia and Intel to Use it... For Free.

TERAFLOP

And what process do you think I am going through?

 

And what made you think I was asking for your advice on spending that hundred dollars???


 

 


That is literally the stupidest thing I have ever heard, Faa. AMD can't access the library, so any optimization is guesswork. Do not pass go. It is literally that simple. This only happens in GameWorks games, which use the library at their core.

 

http://www.extremetech.com/extreme/173511-nvidias-gameworks-program-usurps-power-from-developers-end-users-and-amd

 

You are saying that a GTX 770 should perform like an R9 290. That is straight-up asinine. I just upgraded from one to the other, and the GPUs are not close... at all. Not even a little bit. One can run Shadow of Mordor, which uses a customized Assassin's Creed engine, at 4K at 30+ FPS; the other can't even come close, even at much lower settings where VRAM is less of a problem.

 

It is like comparing a Chevette to a Camaro in a race and then saying the Camaro sucks because it has to run the race with flat tires while pulling a boat.

Again, I explained this to you a few threads ago. How many times do I have to explain it? Drivers can cause CPU overhead, making a CPU bottleneck bigger or introducing one, which is what has happened many times on AMD's side. Haven't you seen the 720p tests I included, where they simulated a CPU-bound scenario to see which GPU performs better? AMD did horribly, to the point that even a 770 will outperform them.


And what made you think I was asking for your advice on spending that hundred dollars???

 

I advise everyone who presents idealism in the form of sarcasm.



Again, I explained this to you a few threads ago. How many times do I have to explain it? Drivers can cause CPU overhead, making a CPU bottleneck bigger or introducing one, which is what has happened many times on AMD's side. Haven't you seen the 720p tests I included, where they simulated a CPU-bound scenario to see which GPU performs better? AMD did horribly, to the point that even a 770 will outperform them.

 

You explained nothing. You fail to recognize that the game's library is CLOSED in GameWorks games.

 

Non-GameWorks games: AMD > GTX 980 in Shadow of Mordor, on a freakin' licensed Ubisoft engine, and the benchmarks change depending on the resolution/AA used. Case closed.

 

You also often link STOCK R9 290 benchmarks, with often-throttling reference cards, against Nvidia cards, and you will often also link 1280-resolution benchmarks that mean nothing. You do not buy an R9 290/X or a 970/980 to play at 1080p without supersampling or multisampling AA; that would literally be the stupidest thing imaginable. At a low resolution the GTX 980 will beat my R9 290 (which is on a very light overclock, as far as voltage goes) in a benchmark because it is better at some things. In a high-resolution or high-AA benchmark my GPU will beat the 980. The reason? Bandwidth.

 

The Nvidia 900-series cards are faster at low res for reasons you can easily see in GPU-Z. The R9 290s are faster at high res/AA for the same reason (bandwidth). You are trying to use that to claim CPU overhead, which is laughable.
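To put a number on the bandwidth point, here's a quick back-of-the-envelope sketch in Python (assuming the reference-board figures GPU-Z reports; partner cards will vary):

# Peak memory bandwidth (GB/s) = effective memory clock (MHz) x bus width (bits) / 8 / 1000
def bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    return effective_clock_mhz * bus_width_bits / 8 / 1000

print(bandwidth_gbs(7000, 256))  # GTX 980 reference: 224.0 GB/s
print(bandwidth_gbs(5000, 512))  # R9 290 reference: 320.0 GB/s, ~43% more

That extra ~43% of bandwidth is exactly the headroom that shows up once resolution and AA go up.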

 

[Images: nvidia-geforce-gtx980-gpuz.jpg, 1HQGJaT.png]

CPU:24/7-4770k @ 4.5ghz/4.0 cache @ 1.22V override, 1.776 VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 evo push/pull. Ram: Gskill Ares 1600 CL9 @ 2133 1.56v 10-12-10-31-T1 150 TRFC. Case: HAF 912 stock fans (no LED crap). HD: Seagate Barracuda 1 TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-x R9 290. PSU:CX600M OS: Win 7 64 bit/Mac OS X Mavericks, dual boot Hackintosh.


 

When it didn't kick in, yeah. Go google driver overhead tests; there are plenty available.

 

Considering you were referring exclusively to BF4: "Explain to me why BF4 runs quite shit without Mantle? I'm seeing lots of people reporting CPU bottlenecks in DX mode while not having them with Mantle at all; even Intel CPUs like the 4670K/4770K are bottlenecking."

[Image: BF4-MidB-OT.png]

Yes, it is only the 270X, but it was the only one I found. Rather, why don't you source some benchmarks to prove your point?

i7 4770K @ 4.5GHZ, NH-D14, Kingston HyperX Black 8GB, Asus Z87-A, Fractal Design XL R2, MSI TF IV R9 280x, BTFNX 550G


I agree with all that; it just isn't the point I was trying to make.

 

I think I get your point: if a company grows and becomes successful, they will eventually close themselves off, locking their customers into their proprietary ecosystem. I just think that such a model is just one model among others, because there are companies with tremendous success that don't behave like this. They do create more value in their ecosystem, but they remain open source so other parties can contribute to it. To pick up the Google example: Android is still their property, and it's Google who dictates the path to take. Much like Mantle: free for everyone, but it's AMD who will dictate the path.

In the end, it will depend on whose feedback they take through that "open-sourceness". Google surely takes a lot from developers and users into Android.

 

 

Oh please, not this again; it has been done to death in so many other threads already.

 

Aww, memories :D


I think I get your point: if a company grows and becomes successful, they will eventually close themselves off, locking their customers into their proprietary ecosystem. I just think that such a model is just one model among others, because there are companies with tremendous success that don't behave like this. They do create more value in their ecosystem, but they remain open source so other parties can contribute to it. To pick up the Google example: Android is still their property, and it's Google who dictates the path to take. Much like Mantle: free for everyone, but it's AMD who will dictate the path.

In the end, it will depend on whose feedback they take through that "open-sourceness". Google surely takes a lot from developers and users into Android.

 

 
 

Aww, memories :D

 

Yes, pretty much. I think being in a position of having good revenue streams and money in the bank means companies can test the water with proprietary products without jeopardizing important cash flow.



Again, I explained this to you a few threads ago. How many times do I have to explain it? Drivers can cause CPU overhead, making a CPU bottleneck bigger or introducing one, which is what has happened many times on AMD's side. Haven't you seen the 720p tests I included, where they simulated a CPU-bound scenario to see which GPU performs better? AMD did horribly, to the point that even a 770 will outperform them.

Man, you gotta stop making excuses for why it shouldn't be used. If DX12 were in the same situation, you wouldn't care. And you gotta stop complaining about "open source" when there is DirectX (which you seem to give a chance, despite Mantle's close similarity to it). When DX12 launches, we might not see developers use it for quite some time either.

If the reason you don't accept Mantle is that it's AMD's product, then you gotta rethink that. Imagine the PR when Nvidia actually implements Mantle and BEATS AMD by quite a bit. I bet you guys would all enjoy Nvidia beating the AMD equivalent on an AMD API (despite its closeness to Microsoft's DX API), yet you just don't want to see (or accept) Nvidia using AMD's Mantle.


Man, you gotta stop making excuses for why it shouldn't be used. If DX12 were in the same situation, you wouldn't care. And you gotta stop complaining about "open source" when there is DirectX (which you seem to give a chance, despite Mantle's close similarity to it). When DX12 launches, we might not see developers use it for quite some time either.

If the reason you don't accept Mantle is that it's AMD's product, then you gotta rethink that. Imagine the PR when Nvidia actually implements Mantle and BEATS AMD by quite a bit. I bet you guys would all enjoy Nvidia beating the AMD equivalent on an AMD API (despite its closeness to Microsoft's DX API), yet you just don't want to see (or accept) Nvidia using AMD's Mantle.

 

That will surely happen, but as indirectly as possible (via DX12). NV will never outright admit that Mantle is a great API.


 

 


You explained nothing. You fail to recognize that the game's library is CLOSED in GameWorks games.

 

Non-GameWorks games: AMD > GTX 980 in Shadow of Mordor, on a freakin' licensed Ubisoft engine, and the benchmarks change depending on the resolution/AA used. Case closed.

 

You also often link STOCK R9 290 benchmarks, with often-throttling reference cards, against Nvidia cards, and you will often also link 1280-resolution benchmarks that mean nothing. You do not buy an R9 290/X or a 970/980 to play at 1080p without supersampling or multisampling AA; that would literally be the stupidest thing imaginable. At a low resolution the GTX 980 will beat my R9 290 (which is on a very light overclock, as far as voltage goes) in a benchmark because it is better at some things. In a high-resolution or high-AA benchmark my GPU will beat the 980. The reason? Bandwidth.

 

The Nvidia 900-series cards are faster at low res for reasons you can easily see in GPU-Z. The R9 290s are faster at high res/AA for the same reason (bandwidth). You are trying to use that to claim CPU overhead, which is laughable.

 

Read this 500 times if that's what it takes to understand it: a stock R9 290 throttling at around 50% usage? Might as well claim they throttle at idle, lol. I'm not talking about which cards are faster; we're talking about which cards do better when the CPU is the bottleneck, to find out which ones cause the most overhead on the CPU. AMD can't outperform Nvidia, or even get close, if their drivers cause way too much CPU overhead.

[Image: i7_bf4_1280.png]

Right, the difference between a 780 and a 290X with DirectX when the CPU is the bottleneck is 80%. The difference between DX and Mantle goes up to nearly 100%; you could easily conclude the 290X was running at 50% load with DX and 99% with Mantle. Once Mantle removed the bottleneck, it outperformed the 780 because it's the faster card. You want that 290X at full load to outperform a 780. Their drivers causing such a huge CPU bottleneck turns into this:

[Image: bf4_cpu_gpu.png]

A 770 that's outperforming a 290X. If your drivers create such a huge bottleneck, the diminishing returns mean that even a weaker card from your competitor will hit 99% load while your best cards aren't even hitting 50% usage. A 770 at 99% load > a 290X at 50% load. Considering that a 290X wasn't adding any performance over a 280X, you could tell the CPU was bottlenecking.
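That diagnostic, as a minimal Python sketch (the FPS numbers below are hypothetical, not the charted results): if a clearly faster GPU barely raises the frame rate, the CPU is the wall.

# If a much faster GPU yields roughly the same FPS, the setup is CPU-bound.
def looks_cpu_bound(fps_slower_gpu, fps_faster_gpu, tolerance=0.05):
    return (fps_faster_gpu - fps_slower_gpu) / fps_slower_gpu < tolerance

print(looks_cpu_bound(62.0, 63.5))  # True: +2.4% from a much faster card -> CPU-bound
print(looks_cpu_bound(62.0, 80.0))  # False: the faster card actually scales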

[Image: Watch_Dogs-CPU-Driver-Overhead-Benchmark]

Same story; a 290X getting outperformed by almost 50%.

[Images: benchmark charts from gamegpu.ru]

That's 2 games already that weren't related to GameWorks at all. Right, the Star Swarm benchmark AMD made to show off Mantle:

[Image: i7_sw_1920.png]

Same story again: a 290X on DX11 getting outperformed by nearly 100% by a 780 with the 337.50 driver that reduced CPU overhead.

[Image: battlefield-4-mantle-benchmarks.gif]

[Images: benchmark charts from gamegpu.ru]

Crysis 3, then: an 8350 gets a 100% improvement when it moves from a 290X to a 780 Ti:

[Images: c3_r1920a.png, c3_r1920n.png]

That's 3 games already, plus AMD's in-house Mantle benchmark, that didn't have anything to do with GameWorks.

[Images: benchmark charts from gamegpu.ru]

That's 33%, which is way too much. Two 780 Tis at full load vs a 295X2 at full load would typically be a 3-10% difference; sadly, that 295X2 isn't managing to cap out at 99% usage there.

[Images: benchmark charts from gamegpu.ru]

It's capped, but you get my point: a 4100 performs 3% better with a 780 Ti than with a 295X2. No issues here, since AMD used Mantle.

When you take Mantle out:

[Image: benchmark chart from gamegpu.ru]

The 4100 performs up to 50% worse than with two 780 Tis in SLI. Driver overhead, plain and simple. There are a bunch more, but I ain't putting any more effort into this. I'll offer to test your 290 vs my 780 at low res/settings, just to avoid getting the GPU to full load so we both have a CPU-bound scenario, and let's see how well it keeps up. I told you earlier, I don't give a fuck about which GPUs are faster. You're claiming Nvidia is crippling AMD's performance when AMD is getting ridiculously outperformed in their own MANTLE titles under DX11. You're not making any sense. The differences go all the way up to nearly 100%; wake up, AMD's DX11 drivers are complete garbage.

Mantle is currently nothing more than a selling gimmick. Why should Nvidia use it when it barely performs better in terms of adding GPU performance or reducing CPU bottlenecks?

 

Man, you gotta stop making excuses for why it shouldn't be used. If DX12 were in the same situation, you wouldn't care. And you gotta stop complaining about "open source" when there is DirectX (which you seem to give a chance, despite Mantle's close similarity to it). When DX12 launches, we might not see developers use it for quite some time either.

If the reason you don't accept Mantle is that it's AMD's product, then you gotta rethink that. Imagine the PR when Nvidia actually implements Mantle and BEATS AMD by quite a bit. I bet you guys would all enjoy Nvidia beating the AMD equivalent on an AMD API (despite its closeness to Microsoft's DX API), yet you just don't want to see (or accept) Nvidia using AMD's Mantle.

Stop being silly, please. You're forcing me to agree with your opinions. I can't do anything with the 5% performance Mantle adds over Nvidia's DX11; let it add 50% or something, and then I'll change my mind. Let AMD actually fix their DX11 drivers to the level Nvidia is at, and then let them show how much of a difference Mantle makes.

 

 

Considering you were referring exclusively to BF4: "Explain to me why BF4 runs quite shit without Mantle? I'm seeing lots of people reporting CPU bottlenecks in DX mode while not having them with Mantle at all; even Intel CPUs like the 4670K/4770K are bottlenecking." Yes, it is only the 270X, but it was the only one I found. Rather, why don't you source some benchmarks to prove your point?

See the benchmarks above; I added driver overhead comparisons from 3 different sources in BF4 alone. Also, a 270X isn't a card that's hard to feed; something like a 980/290X, or even SLI/CF, will tax the CPU harder.


 

Stop being silly, please. You're forcing me to agree with your opinions. I can't do anything with the 5% performance Mantle adds over Nvidia's DX11; let it add 50% or something, and then I'll change my mind. Let AMD actually fix their DX11 drivers to the level Nvidia is at, and then let them show how much of a difference Mantle makes.

 

 

You've seen other people's arguments. Apparently a 980 was beaten by a 290X in Shadow of Mordor, so I don't think the entire problem is in the optimization. Have you thought about how much an Nvidia card would benefit from Mantle? If Nvidia reported that they could achieve a 5% (or more) boost with Mantle, would you be for it? Do you think Nvidia would care that AMD's drivers were originally bad if, with Mantle, they are now getting beaten?

 

Okay, now let's talk about the performance part. Apparently a 5% boost isn't enough to actually care about? I have to wonder, then, why Intel keeps releasing new chips that are more or less a ~10% boost in speed. Actually, why do both Nvidia and AMD keep releasing new cards that are ~10% increases over the previous king? Thinking about this, I don't think your argument makes much sense. I'm pretty sure they care about that 5% increase (if it is a ~5% increase on an Nvidia card from DX11 to Mantle). And this 5% increase would most likely affect all or most of their current cards.

Though, you could freeze your tech upgrades for ~5 graphics card generations and then get the ~50% performance gain you wanted; that boost would probably only apply to the current year's games, though, assuming games 5 years from now will be more demanding. Implementing Mantle will probably just be a bonus, since DX12 will most likely feature the same (or better) kind of thing Mantle did...


You've seen other people's arguments. Apparently a 980 was beaten by a 290X in Shadow of Mordor, so I don't think the entire problem is in the optimization. Have you thought about how much an Nvidia card would benefit from Mantle? If Nvidia reported that they could achieve a 5% (or more) boost with Mantle, would you be for it? Do you think Nvidia would care that AMD's drivers were originally bad if, with Mantle, they are now getting beaten?

 

Okay, now let's talk about the performance part. Apparently a 5% boost isn't enough to actually care about? I have to wonder, then, why Intel keeps releasing new chips that are more or less a ~10% boost in speed. Actually, why do both Nvidia and AMD keep releasing new cards that are ~10% increases over the previous king? Thinking about this, I don't think your argument makes much sense. I'm pretty sure they care about that 5% increase (if it is a ~5% increase on an Nvidia card from DX11 to Mantle). And this 5% increase would most likely affect all or most of their current cards.

Though, you could freeze your tech upgrades for ~5 graphics card generations and then get the ~50% performance gain you wanted; that boost would probably only apply to the current year's games, though, assuming games 5 years from now will be more demanding. Implementing Mantle will probably just be a bonus, since DX12 will most likely feature the same (or better) kind of thing Mantle did...

It seems like you aren't able to understand the CPU/GPU logic at all. You know what a CPU does? It tells the GPU what a frame should look like; then the GPU renders it and you get X fps. If the CPU doesn't do this fast enough, it will bottleneck the GPU.

*CPU bottleneck -> your GPU is below 99% load, e.g. 80% -> here you will only benefit from a better CPU; overclocking your GPU will only reduce its load and not improve performance at all -> getting a new GPU from the same brand (just to avoid confusion) will give you the same FPS but at lower usage, because the card is more powerful and can deliver the same fps at a lower load. E.g. going from a 270X to a 290X in WoW, where you always have ridiculously low loads in raids.

*GPU bottleneck -> your GPU is at 99% load all the time and performs at its best -> here you will only benefit from overclocking your GPU, and you won't gain a thing from overclocking your CPU -> you will gain performance from a better GPU but none at all from a better CPU. (See the sketch just below.)
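Here is that logic as a toy Python model (the millisecond costs are made up for illustration, not real driver measurements):

# Each frame costs CPU time (game work + driver overhead) and GPU time;
# the slower of the two sides sets the frame rate.
def simulate(game_cpu_ms, driver_ms, gpu_ms):
    cpu_ms = game_cpu_ms + driver_ms
    frame_ms = max(cpu_ms, gpu_ms)
    fps = 1000.0 / frame_ms
    gpu_load = gpu_ms / frame_ms  # below 1.0 means the GPU is waiting on the CPU
    return round(fps, 1), round(gpu_load * 100)

print(simulate(8, 2, 10))   # (100.0, 100): GPU-bound, GPU pegged at full load
print(simulate(8, 12, 10))  # (50.0, 50): heavy driver overhead, GPU stuck at 50%
print(simulate(8, 12, 8))   # (50.0, 40): a faster GPU adds no FPS here at all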

Got that? If you are comparing the 290X with a 980 with both of them at full 99% load, then you're comparing the full potential of both cards. I'm not joining that argument; I don't even know how much faster a 290X is than a 980, or vice versa. I simply don't watch benchmarks. I've only seen the temperatures/noise levels/power consumption from Maxwell reviews; I ignored the rest.

Now to get to my point: AMD drivers are causing CPU overhead, which means they might make the CPU the bottleneck or make an existing one worse. That's what I'm comparing. We know AMD's DX drivers cause this, sometimes by a lot and sometimes a little. Deathjester is claiming GameWorks is the cause of low AMD performance, but the issue is on AMD's side, as we saw the same issue in their own Mantle titles. Just launch a game like Crysis 3: you'll see your GPU rocking at 99%; then start streaming and you'll notice the GPU load drops to, say, 50%, because streaming eats a lot of CPU resources, therefore making the CPU the bottleneck. Just think of the streaming here as the driver overhead; I hope this explains it better.

You remember Nvidia advertising their new 337.50 as the wonder driver for CPU bottlenecking? They just reduced the overhead their DX drivers were causing. People were showing off why they didn't see the 84% improvements Nvidia claimed: they tested with the GPU being the bottleneck all the time, hence why 720p/low-res tests showed a massive difference, since there the CPU was the bottleneck. I'll copy-paste here what an Nvidia guy said, and for the love of god read this:

 

PCPER: Why do we see performance improvements in some sections of games but not in other sections (Skyrim for example)?

NVIDIA: For any given set of in-game settings, sections of the game may differ in terms of CPU versus GPU bottlenecks. Driver overhead reduction (one of the key improvements in 337.50) doesn’t help much if the area of the game is GPU limited. The CPU-limited segments of the game will benefit however.

We can also only reduce overhead in the driver. If the app is CPU-limited in the game itself, then it won’t see much improvement from an optimized driver.

PCPER: Why would SLI benefit more often and in higher numbers than single GPU configurations?

NVIDIA: SLI effectively doubles the available GPU horsepower, shifting the bottleneck from the GPU to the CPU. The net result is a more CPU-limited load. Consequently, our driver overhead optimizations are more likely to improve performance when SLI is enabled.

PCPER: Why do some games see more scaling with higher end CPUs than with lower end CPUs (this seems kind of counter-intuitive)?

NVIDIA: If a game (or portions of a game) are heavily CPU-bound in the game itself, rather than the driver code running on the CPU (which is the target of our 337.50 driver optimizations), then it’s possible for higher-end CPUs to show better scaling overall. But if the CPU-boundness is more driver-related, the 337.50 optimizations will show more scaling benefit on lower-end CPUs than higher-end CPUs. DX11 gives you the opportunity to tailor your optimizations to the issue you’re trying to solve. If you are dealing with a CPU bottleneck, your focus is on reducing overhead. However, with a scene that is facing a GPU bottleneck, your goal is to get the most GPU performance possible. Different games will require different strategies, and the effectiveness of those strategies will vary with different hardware configurations.

http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-33750-Driver-Analysis-Single-GPU-and-SLI-Tested/Single-GPU-Testing-and


Now to get to my point: AMD drivers are causing CPU overhead, which means they might make the CPU the bottleneck or make an existing one worse. That's what I'm comparing. We know AMD's DX drivers cause this, sometimes by a lot and sometimes a little. Deathjester is claiming GameWorks is the cause of low AMD performance, but the issue is on AMD's side, as we saw the same issue in their own Mantle titles.

 

AMD drivers may well have more CPU overhead because of their hardware design and worse drivers. But GameWorks' main crippling feature is overuse of tessellation, which causes a GPU bottleneck.


Plz Nvidia, I need them extra BF4 FPS.

CPU- 4690k @4.5ghz / 1.3v    Mobo- Asus Maximus VI Gene   RAM- 12GB GSkill Assorted 1600mhz   GPU- ASUS GTX 760 DCUII-OC 

Storage- 1TB 7200rpm WD Blue + Kingston SSDNow 240GB   PSU- Silverstone Strider ST75F-P

 


snip

I don't mean to sound like an a-hole... but you are just trying too hard. Just let it go. Stop it.

You are acting like NVIDIA pre-DX12 announcement, when they were claiming DX11 was just as good as Mantle, or "good enough"... they were trying to stall progress because it didn't suit them. I mean, they spent two or three months developing the wonder driver for Star Swarm (I can only imagine the work, manpower and hours they had to put in to make it run properly... a benchmark, not a game), while the Mantle implementation on the Oxide engine was done by one man in a few weeks.

NVIDIA got past that point a long time ago.

After Microsoft announced that DX12's main feature was precisely what they took from Mantle, it was over. Even Khronos is doing the same.

It's over, deal with it. Not only do Microsoft and Khronos want it; game developers are excited about it. Hell, even Intel has shown interest!

In all of these chunks of text and graphs you are posting, you don't even mention frame times once! Which is one of the main goals of Mantle!

Why the hell would you struggle to make ridiculous points that make no sense at all? Why are you fighting progress, all because of a brand that has itself gotten over it? NVIDIA is supporting precisely what Microsoft took from Mantle into DX12! If they actually thought overhead was a myth, they would just say, "We don't need DX12; we'll just support DX11.3, because we can tackle driver overhead just as well as Mantle and DX12."


If my 7950 and 480 can both have a Mantle thing... that would make me happy.



Nvidia, on the other hand, likes to keep things very much locked down and proprietary, even if it's bad for their users or the industry as a whole.

A bit like Apple. ;)



So, AMD are disMantleing Mantle and letting everyone have a bite of the cherry, so to speak.



lol, Apple is more open than Nvidia. Look at OpenCL and LLVM.

True! :)



 

I don't mean to sound like an a-hole... but you are just trying too hard. Just let it go. Stop it.

I'm not trying hard at all; copy-pasting the facts isn't hard.

 

You are acting like NVIDIA pre-DX12 announcement, when they were claiming DX11 was just as good as Mantle, or "good enough"... they were trying to stall progress because it didn't suit them.

It's proven that Mantle is currently only a little better than Nvidia's DX11 in terms of reducing CPU overhead. It seems like you didn't read my posts at all, just crashing in and assuming. Nvidia can only reduce their driver overhead, whereas an API can make the game itself less CPU-bound, which Nvidia can't do, just like that Nvidia guy above said. Mantle is missing progress, that's all. Nvidia proved that DX11 can keep up with the current state of Mantle, and it's been proven that AMD's DX drivers can be complete garbage, to the point that AMD is PR'ing that GameWorks is crippling their performance.

 

I mean, they spent two or three months developing the wonder driver for Star Swarm (I can only imagine the work, manpower and hours they had to put in to make it run properly... a benchmark, not a game), while the Mantle implementation on the Oxide engine was done by one man in a few weeks.

How much longer does AMD need to improve the overhead in their DX drivers? We've passed 8 months already since 337.50 came out.

Mantle was released in March; 337.50 was released around 14 April, which isn't 3 months. Get your facts right. Do you have a source showing that Mantle's implementation in a benchmark was done by one man in a few weeks?

 
 

After Microsoft announced that DX12's main feature was precisely what they took from Mantle, it was over. Even Khronos is doing the same.

Main feature? There's no proof that Mantle was improving multithreading; if you have any, provide it. DX12, however, did with their demo.
 

It's over, deal with it. Not only do Microsoft and Khronos want it; game developers are excited about it. Hell, even Intel has shown interest!

 

They were all interested. What AMD is pushing out now with Mantle against Nvidia isn't really worth the hype.
 

 

In all of these chunks of text and graphs you are posting, you don't even mention frame times once! Which is one of the main goals of Mantle!

This isn't making any sense; your frame times will be lower if your CPU is faster, hence why Intel CPUs are doing much better than AMD CPUs. It was never a main goal, just a side benefit of lowering the CPU overhead, much like your CPU consuming less power. That's all.

[Image: be-4k.png]

Notice how many fluctuations you have; Nvidia is still doing it better than AMD. Yeah, Mantle made a huge difference; /clap, you needed a new API to achieve this. Just like with their DX overhead: they needed a new API rather than fixing their DX overhead, while every game out there except 6 is on DirectX. Everything AMD achieved with Mantle, Nvidia did WITHOUT Mantle. Feel free to explain to me why AMD isn't fixing their issues. The simple answer is: they don't care, or they want to advertise Mantle.

Besides, frame latency tests were only there to find the irregular delays or microstuttering. You likely won't notice the difference between 20 and 40 ms render times, but you will notice spikes to 150 ms ten times a minute. You just want a nice straight line, not 10-60 ms bounces a hundred times a second. AMD solved those issues a long time ago, so it's irrelevant now.
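For what it's worth, that check is simple enough to sketch in a few lines of Python (the frame-time trace below is made up to show the idea):

# Average FPS hides stutter; spikes in individual frame times are what you feel.
frame_times_ms = [16, 17, 16, 150, 16, 18, 16, 140, 17, 16]

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
spikes = [t for t in frame_times_ms if t > 50]  # frames slow enough to notice

print(round(avg_fps, 1))  # 23.7: the average merely looks low...
print(len(spikes))        # ...but the 2 spikes past 50 ms are the visible microstutter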

 

Why the hell would you struggle to make ridiculous points that make no sense at all? Why are you fighting progress, all because of a brand that has itself gotten over it? NVIDIA is supporting precisely what Microsoft took from Mantle into DX12! If they actually thought overhead was a myth, they would just say, "We don't need DX12; we'll just support DX11.3, because we can tackle driver overhead just as well as Mantle and DX12."

You're not getting it, are you? Nvidia can only reduce their own DX driver overhead; they can't do anything about DX itself. AMD can, with Mantle. How awful DX11 can be has little to do with Nvidia. I'm not supporting DX11/DX12 or Nvidia at all; all I've been saying is that the current advantage of Mantle at reducing bottlenecks over Nvidia's DX11 is too small. I don't even have to say I don't want that 5% performance gain, because Nvidia wouldn't add it anyway. If Mantle made a 50% difference then sure, I'd vote for it, but do we see that difference?

Deathjester came up with GameWorks crippling AMD's performance; I'm simply pointing out that AMD cards are falling behind because their DX drivers cause so much CPU overhead, which has nothing to do with how powerful those cards are, simply because they're held back by the CPU. If that weren't there, they would be better, or only slightly worse, than Nvidia in GameWorks titles. You're not going to deny their DX drivers can cause much higher CPU bottlenecks unless you're a fanboy. Remember when reviewers started testing frame times, and AMD usually had massive microstutters/irregular delays that took them ~6 months(?) to fully fix? We're not going to blame Nvidia PhysX for causing the high latencies in Batman when we had those high latencies in nearly all titles; that's what Deathjester is doing with his GameWorks-crippling BS.


DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD MANTLE IS SHIT AMD IS SHIT

If you have nothing else to say other than "AMD's driver overhead", then please stop. Drivers are volatile: they can be quickly and easily changed or updated, and an updated driver can literally be the difference between make or break. So there's no use complaining about something that might be fixed, say, tomorrow.

 

An API is something you cannot change that easily. Here is an educated explanation from Rottis at the SemiAccurate forums:

A comment for this thread, from the perspective of a person who writes interfaces for other developers.

If you are developing an interface that is supposed to fulfill the needs of experienced professional users, you have to develop it with them using it.

To do so, you need to be an experienced user of such interfaces to begin with.

The first version needs to be good enough that the obvious upcoming problems can be solved, so that it doesn't feel like a waste of the target users' time.

At this point, you want them to start using the interface, and you react to immediate feedback well and fast. You find solutions that fix their problems while also taking care of a wider array of concerns.

Then you keep iterating, adding more users and improving the interface for all of them. These are compromises that require careful thought and work; they are not what any single usage pattern requires, but a compromise that hopefully benefits all of the users.

Then you do that for as long as you can, trying to keep everything simple and efficient.

When you release an interface (API), you can't change it anymore. After that point, you have to live with what you have done. This is a big decision, and nobody wants to rush it. It's a promise that users can trust the interface and no longer have to prepare for changes to it, at least not often.


 

 


 

 

I'm not trying hard at all; copy-pasting the facts isn't hard.

 

It's proven that Mantle is currently only a little better than Nvidia's DX11 in terms of reducing CPU overhead. It seems like you didn't read my posts at all, just crashing in and assuming. Nvidia can only reduce their driver overhead, whereas an API can make the game itself less CPU-bound, which Nvidia can't do, just like that Nvidia guy above said. Mantle is missing progress, that's all. Nvidia proved that DX11 can keep up with the current state of Mantle, and it's been proven that AMD's DX drivers can be complete garbage, to the point that AMD is PR'ing that GameWorks is crippling their performance.

 

 

First of all, to say Mantle is "only a lil bit better than NVIDIA DX11" (whatever that means) is plain ridiculous. It's better. Period. The "lil bit" or the "huge bit" depends on the context where Mantle is being used: it depends on the hardware and the game being played.

Mantle is missing so much progress that, while still in BETA, it managed to get Microsoft to bring it to D3D12. What NVIDIA proved is that, with months of work, they managed to make a benchmark get similar results to Mantle; I haven't seen them do the same with games.

If AMD's PR about GameWorks is just a reflection of bad driver optimization, then why the hell did NVIDIA bitch about the late delivery of the TressFX source code in Tomb Raider? Is that bad/garbage driver optimization from NVIDIA as well? Or was it just PR about crippled performance?

Please... don't bring GameWorks into this; it will lead you nowhere.

 

 

 

How much longer does AMD need to improve the overhead in their DX drivers? We've passed 8 months already since 337.50 came out.

Mantle was released in March; 337.50 was released around 14 April, which isn't 3 months. Get your facts right. Do you have a source showing that Mantle's implementation in a benchmark was done by one man in a few weeks?

 

 

 

No, you need to get your facts right. Star Swarm release date: 30/01/2014. Source: http://store.steampowered.com/app/267130/ . Want to see people comparing results with Mantle? Here you go: http://www.overclock.net/t/1463351/steam-star-swarm-benchmark

The 337.50 BETA was released on 7/04/2014.

That's around two months. Like I said, "two or three months".

Here is the source about the Mantle implementation, where it's claimed that the engine was FULLY optimized and developed for D3D, yet Mantle, being easier, "dwarfed the performance of D3D". Source: http://www.maximumpc.com/AMD_Mantle_Interview_2014

 

 

Main feature? There's no proof that Mantle was improving multithreading; if you have any, provide it. DX12, however, did with their demo.

 

 

Now this is a red flag right here. I think you just don't know what Mantle's features are, or what it does. So I'm not even going to comment on something you didn't even make an effort to inform yourself about. Still, I will help you: http://en.wikipedia.org/wiki/Mantle_(API)

 

 

 

They were all interested. What AMD is pushing out now with Mantle against Nvidia isn't really worth the hype.

 

 

AMD is pushing Mantle against NVIDIA so hard that they even told NVIDIA they are free to make drivers for it. How nasty of AMD... allowing the competition to use their tech for free... shame on them.

 

 

 

This isn't making any sense; your frame times will be lower if your CPU is faster, hence why Intel CPUs are doing much better than AMD CPUs. It was never a main goal, just a side benefit of lowering the CPU overhead, much like your CPU consuming less power. That's all.

Notice how many fluctuations you have; Nvidia is still doing it better than AMD.

 

Wrong. It was one of the main goals explained by Johan Andersson in several of his Mantle presentations; in fact, it was one of his main goals when he approached AMD. He even implemented such tracking in the Frostbite engine.

I don't know, but I find it funny that you grab one graph and can't even read it properly. I see NVIDIA nowhere doing better than the R9 with Mantle over the whole run; even the article you took that graph from points this out SEVERAL TIMES, lol... "the smoothness of Mantle", they say.

 

 

Yeah, Mantle made a huge difference; /clap, you needed a new API to achieve this. Just like with their DX overhead: they needed a new API rather than fixing their DX overhead, while every game out there except 6 is on DirectX. Everything AMD achieved with Mantle, Nvidia did WITHOUT Mantle. Feel free to explain to me why AMD isn't fixing their issues. The simple answer is: they don't care, or they want to advertise Mantle.

Besides, frame latency tests were only there to find the irregular delays or microstuttering. You likely won't notice the difference between 20 and 40 ms render times, but you will notice spikes to 150 ms ten times a minute. You just want a nice straight line, not 10-60 ms bounces a hundred times a second. AMD solved those issues a long time ago, so it's irrelevant now.

Oh, but you should /clap /clap /clap; a single /clap isn't enough, because without Mantle you would have ended up with DX11.3, with the same old D3D. There is one thing you must accept: if Microsoft found Mantle's approach better than THEIR OWN PROPRIETARY API, then it did something good. Way better than what D3D was doing.

Now you are starting to lose the plot: the 6 games released with Mantle still support DX. You chose to point at 6 games in the first year as something ridiculous, when all you did was make yourself look ridiculous: Mantle has more traction than DX10/DX11 had. Do you understand this? Mantle had more games supporting it than DX10/11 did in their first year. I'm not even going to talk about the engines that support it, or the 100 developers using it... 6 supported games in one year is a lot for a BETA API.

If I were mean, I would talk about the traction PhysX has gotten in 8 years of existence; now that's ridiculous.

Then you somehow buy into the illusion that NVIDIA is doing the same as AMD is with Mantle... so I won't bother commenting on that.

As far as I know, AMD still supports DX11.2. Look at the Shadow of Mordor 4K benchmarks, where the one-year-old R9 290 and 290X beat the brand-new-architecture 980 and 970.

 

 

You're not getting it, are you? Nvidia can only reduce their own DX driver overhead; they can't do anything about DX itself. AMD can, with Mantle. How awful DX11 can be has little to do with Nvidia. I'm not supporting DX11/DX12 or Nvidia at all; all I've been saying is that the current advantage of Mantle at reducing bottlenecks over Nvidia's DX11 is too small. I don't even have to say I don't want that 5% performance gain, because Nvidia wouldn't add it anyway. If Mantle made a 50% difference then sure, I'd vote for it, but do we see that difference?

Deathjester came up with GameWorks crippling AMD's performance; I'm simply pointing out that AMD cards are falling behind because their DX drivers cause so much CPU overhead, which has nothing to do with how powerful those cards are, simply because they're held back by the CPU. If that weren't there, they would be better, or only slightly worse, than Nvidia in GameWorks titles. You're not going to deny their DX drivers can cause much higher CPU bottlenecks unless you're a fanboy. Remember when reviewers started testing frame times, and AMD usually had massive microstutters/irregular delays that took them ~6 months(?) to fully fix? We're not going to blame Nvidia PhysX for causing the high latencies in Batman when we had those high latencies in nearly all titles; that's what Deathjester is doing with his GameWorks-crippling BS.

 

 

 

Oh, I get it: you somehow don't appreciate what Mantle brought, probably because it was brought by AMD instead of NVIDIA. Why? I have no idea. It has only brought good things to the industry: it didn't cripple anyone's performance, Microsoft and Khronos were excited by it, developers are excited about it... everyone except NVIDIA, who claimed "we don't see the benefits of Mantle"... yet they see Mantle's benefits in DirectX 12.

 

Yes, we do see such a difference. What you seem to lack is an understanding of how Mantle works, where it shines, and why it's good to the point that other parties want to use it and learn from it. To claim that Mantle vs NVIDIA DX11 (again, whatever that means) is 5% is just plain ridiculous. This is why I hadn't made an effort to talk to you before: I felt you didn't know what you were talking about. But the misinformation is so great I had to say something.

I don't have to deny or accept anything; the results are out there, and anyone can see them for themselves. You have games where AMD shines with DX and others where NVIDIA shines with DX. Yes, I do remember how long it took them to fix it, and currently I find XDMA, and XDMA with Mantle, a superior alternative to SLI; but that is my opinion.

Again, I won't comment on GameWorks because it's off-topic, but I also find GW did more harm than good, on NVIDIA, Intel and AMD hardware alike.


If you have nothing else to say other than "AMD's driver overhead", then please stop. Drivers are volatile: they can be quickly and easily changed or updated, and an updated driver can literally be the difference between make or break. So there's no use complaining about something that might be fixed, say, tomorrow.

 

An API is something you cannot change that easily. Here is an educated explanation from Rottis at the SemiAccurate forums:

Where am I saying an API is easy to change? Where did I say AMD/Mantle is shit? 

Sorry for repeating it, but if people aren't willing to drop their fanboyism rather than show their ignorance, then I wouldn't need to repeat the same thing over and over. I can understand being disappointed, a year on, when someone drops proof you can't deny that Mantle is currently nothing more or less than a way to port games more easily. I've seen you upvoting blatant misinformation favouring AMD; I wouldn't be surprised if even the lightest criticism of AMD gets you offending people.


Where am I saying an API is easy to change? Where did I say AMD/Mantle is shit?

DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD DRIVER OVERHEAD

You happy? This isn't the thread to discuss driver overhead.

 

Sorry for repeating it, but if people aren't willing to drop their fanboyism rather than show their ignorance, then I wouldn't need to repeat the same thing over and over.

Why aren't people reporting this post for personal attacks/insults?

 

Sorry for repeating it, but if people aren't willing to drop their fanboyism rather than show their ignorance, then I wouldn't need to repeat the same thing over and over. I can understand being disappointed, a year on, when someone drops proof you can't deny that Mantle is currently nothing more or less than a way to port games more easily.

Where am I saying an API is easy to change? Where did I say AMD/Mantle is shit?

Now, do I need to find where EXACTLY you said "shit"? I might have exaggerated a bit, but isn't that implied in your reply?

 

I can understand being disappointed, a year on, when someone drops proof you can't deny that Mantle is currently nothing more or less than a way to port games more easily.

Then why does Mantle do this? WHY???

http://semiaccurate.com/2014/10/28/look-civilization-beyond-earth/

[Image: Mantle-Civ.png]

 

I've seen you upvoting blatant misinformation favouring AMD; I wouldn't be surprised if even the lightest criticism of AMD gets you offending people.

Really? You stalk people to find out who likes what? You like being a stalker? And then you ATTACK PEOPLE BASED ON THEIR LIKES?????

Give me some "blatant misinformation favouring AMD" and prove it's misinformation. And tell me exactly where I offended you. I told you to "please stop" SPECIFICALLY BECAUSE YOU ARE DERAILING THE THREAD!!! THIS THREAD IS NOT FOR DISCUSSING DRIVER OVERHEAD!!! Come on, man!

 

I'm reporting your post.


 

 

