Project CARS devs address AMD performance issues, AMD drivers to blame entirely, PhysX runs on CPU only, no GPU involvement whatsoever.

neither company is perfect.

Yup.

But while Nvidia has had a great strategy (delivering not only hardware but also software, and making Nvidia something more than just GPUs), AMD hasn't really done anything. One new GPU every year, the rest rebrands. New APUs, but the APU market isn't exactly part of the gaming market, which is the market AMD is aiming at. No new CPUs. TressFX...?

Dunno what to say about Mantle. It was meant to hide a defect in their performance after all...and it didn't really change that performance. However, it might have been one reason why DX12 is what it is/will be...or DX12 has been in development for a long time and Mantle didn't influence it at all.

 

At least they provide some competition, thanks to their GPUs being priced lower than Nvidia's offerings at roughly equal performance.

 

AMD has created quite a dirty fight. Nvidia didn't get into it though...which is nice to see.

 

It's not like Nvidia is a saint either; they are, after all, a company, and their purpose is to make money.

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU;1TB Seagate Barracuda;Corsair 200r case. 


Yup.

But while Nvidia has had a great strategy (delivering not only hardware but also software, and making Nvidia something more than just GPUs), AMD hasn't really done anything. One new GPU every year, the rest rebrands. New APUs, but the APU market isn't exactly part of the gaming market, which is the market AMD is aiming at. No new CPUs. TressFX...?

Dunno what to say about Mantle. It was meant to hide a defect in their performance after all...and it didn't really change that performance. However, it might have been one reason why DX12 is what it is/will be...or DX12 has been in development for a long time and Mantle didn't influence it at all.

 

At least they provide some competition, thanks to their GPUs being priced lower than Nvidia's offerings at roughly equal performance.

 

AMD has created quite a dirty fight. Nvidia didn't get into it though...which is nice to see.

What do you mean by dirty fight?

My posts are in a constant state of editing :)

CPU: i7-4790k @ 4.7GHz MOBO: ASUS ROG Maximus VII Hero GPU: Asus GTX 780ti Directcu ii SLI RAM: 16GB Corsair Vengeance PSU: Corsair AX860 Case: Corsair 450D Storage: Samsung 840 EVO 250 GB, WD Black 1TB Cooling: Corsair H100i with Noctua fans Monitor: ASUS ROG Swift

laptop

Some ASUS model. Has a GT 550M, i7-2630QM, 4GB of RAM and a WD Black SSD/HDD drive. MacBook Pro 13" base model
Apple stuff from over the years
iPhone 5 64GB, iPad air 128GB, iPod Touch 32GB 3rd Gen and an iPod nano 4GB 3rd Gen. Both the touch and nano are working perfectly as far as I can tell :)

Yes, I'm talking about AMD. Nvidia tried to help AMD several times by offering them CUDA for free, which would have let AMD run Nvidia features on their cards and would have led to more direct competition. And don't forget that Intel is licensed to use CUDA as well, which would really help their iGPUs. PhysX allows significantly better hair effects than TressFX, Microsoft has been working on DX12 for years (AMD didn't push them to do anything), and honestly, you don't know about the high CPU overhead in AMD's current graphics drivers, do you? Because Mantle was only a way for AMD to cover their ass and make people think their API was good, when in reality it was barely better than Nvidia's DirectX 11 performance.

 

Of course they did, and there was zero reason for AMD to say no, right? Obviously there is a lot more to that story than what we know. Come on, don't be so naïve.

 

Wait, so now PhysX is no longer just a physics engine? Sorry, but HairWorks is part of VisualFX, not PhysX. TressFX looks equally as good, if not better, based on what I've seen of HairWorks so far (but honestly they look very similar). With TressFX 3 in Deus Ex: Mankind Divided, it should be efficient enough to run on everyone, including NPCs, without a freaking Titan X.

 

Yes, I know about the CPU overhead, but what we don't know is why that is. There must be a reason why AMD can destroy Nvidia in draw calls in DX12, for instance.
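To put a rough feel on what "draw call overhead" means here, below is a minimal sketch of the kind of throughput estimate the DX11 vs DX12 draw-call comparisons (e.g. API overhead style tests) are getting at. The per-call microsecond costs are made-up placeholder numbers, not measurements of any real driver; the point is only how per-call CPU cost caps the number of draws per frame.

```python
# Rough illustration of why per-draw CPU overhead caps draw-call throughput.
# The per-call costs below are made-up placeholders, NOT measured values for
# any real driver; they only show how the arithmetic works.

FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 FPS

# Hypothetical CPU cost per draw submission (driver + runtime), in milliseconds.
API_OVERHEAD_MS = {
    "high-overhead API (DX11-style, single-threaded submit)": 0.010,
    "low-overhead API (DX12/Mantle-style, pre-built command lists)": 0.001,
}

def max_draw_calls_per_frame(per_call_ms: float, frame_budget_ms: float = FRAME_BUDGET_MS) -> int:
    """How many draw submissions fit in one frame if each costs per_call_ms of CPU time."""
    return int(frame_budget_ms / per_call_ms)

if __name__ == "__main__":
    for api, cost in API_OVERHEAD_MS.items():
        print(f"{api}: ~{max_draw_calls_per_frame(cost):,} draw calls per 60 FPS frame")
```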

 

Mantle was good enough for Khronos to assimilate it into Vulkan, so it can't be that bad.

 

And Freesync was just a way to kick back at Nvidia.

 

Seriously, company A releases a product with a brand new idea, and a few months later their competitor releases a product that does almost the same thing and adds "Free" to its name (even though it's not free). That's just...wrong.

 

And they didn't really do anything either. They just built a way for their GPUs to communicate with the part of the monitor that can offer adaptive sync, a feature which has been built into eDP for quite some time now. Then they pushed that adaptive sync to become a VESA standard for all monitors, which it did, but only as an optional standard.

 

Adaptive Sync would never have existed if Nvidia had made Gsync an industry standard. What did you expect? Also, we don't know how much AMD had already worked on that for laptops.

 

The rest is incorrect. eDP did not support Adaptive Sync. AMD is responsible for the Adaptive Sync standard and worked with both monitor vendors and scaler/monitor-controller vendors to make it happen. Both Adaptive Sync and Gsync use variable VBlank technology from eDP, so if you criticize AMD for using that tech, you're criticizing Nvidia by proxy.
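For anyone unclear on what variable VBlank actually buys you, here is a toy sketch of the scheduling difference between fixed-refresh vsync and adaptive sync. The 40-144 Hz range is just an illustrative example, not any particular panel, and the clamping at the low end ignores details like frame doubling.

```python
import math

# Toy model of fixed-refresh vsync vs. adaptive sync (variable VBlank).
# The 40-144 Hz range is an illustrative example, not any specific monitor.

FIXED_HZ = 60.0
VRR_MIN_HZ, VRR_MAX_HZ = 40.0, 144.0

def displayed_interval_fixed(frame_ms: float, refresh_hz: float = FIXED_HZ) -> float:
    """With vsync on a fixed-refresh panel, a frame is held until the next vblank,
    so the displayed interval rounds up to a whole number of refresh periods."""
    period = 1000.0 / refresh_hz
    return math.ceil(frame_ms / period) * period

def displayed_interval_vrr(frame_ms: float,
                           min_hz: float = VRR_MIN_HZ,
                           max_hz: float = VRR_MAX_HZ) -> float:
    """With adaptive sync, the panel refreshes when the frame is ready, as long as
    the implied rate stays inside the supported range (simplified: just clamp)."""
    shortest = 1000.0 / max_hz  # cannot refresh faster than max_hz
    longest = 1000.0 / min_hz   # cannot hold a frame longer than the min_hz period
    return min(max(frame_ms, shortest), longest)

if __name__ == "__main__":
    for frame_ms in (10.0, 13.0, 18.0, 22.0, 30.0):
        print(f"frame rendered in {frame_ms:4.1f} ms -> "
              f"fixed 60 Hz shows it for {displayed_interval_fixed(frame_ms):5.1f} ms, "
              f"adaptive sync for {displayed_interval_vrr(frame_ms):5.1f} ms")
```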

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


What do you mean by dirty fight?

We have 47 pages on this thread just about it...

 

Complaining that Nvidia makes games run worse on AMD GPUs.

The Gameworks controversy.

Freesync. Not exactly the concept itself, but rather the naming scheme.

etc.

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU;1TB Seagate Barracuda;Corsair 200r case. 


Of course they did, and there was zero reason for AMD to say no, right? Obviously there is a lot more to that story than what we know. Come on, don't be so naïve.

 

Wait, so now PhysX is no longer just a physics engine? Sorry, but HairWorks is part of VisualFX, not PhysX. TressFX looks equally as good, if not better, based on what I've seen of HairWorks so far (but honestly they look very similar). With TressFX 3 in Deus Ex: Mankind Divided, it should be efficient enough to run on everyone, including NPCs, without a freaking Titan X.

 

Yes, I know about the CPU overhead, but what we don't know is why that is. There must be a reason why AMD can destroy Nvidia in draw calls in DX12, for instance.

 

Mantle was good enough for Khronos to assimilate it into Vulkan, so it can't be that bad.

 

 

Adaptive Sync would never have existed if Nvidia had made Gsync an industry standard. What did you expect? Also, we don't know how much AMD had already worked on that for laptops.

 

The rest is incorrect. eDP did not support Adaptive Sync. AMD is responsible for the Adaptive Sync standard and worked with both monitor vendors and scaler/monitor-controller vendors to make it happen. Both Adaptive Sync and Gsync use variable VBlank technology from eDP, so if you criticize AMD for using that tech, you're criticizing Nvidia by proxy.

Yeah. Mantle wasn't a bad thing after all.

 

Also, I doubt AMD had much to do with eDP. Do you see AMD laptops out there? There are very few. Still, you never know.

 

eDP's variable refresh rate was the basis of Adaptive Sync. Following the same logic, they applied it to desktop monitors, which of course required a different implementation.

Freesync did bring a standard, which is definitely good. But its main purpose was, and is, to compete against Gsync. You can tell that from its name, after all.

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU;1TB Seagate Barracuda;Corsair 200r case. 


We have 47 pages on this thread just about it...

 

Complaining that Nvidia makes games run worse on AMD GPUs.

Freesync.

I figured that's what you meant, I just wanted to confirm.

My posts are in a constant state of editing :)

CPU: i7-4790k @ 4.7GHz MOBO: ASUS ROG Maximus VII Hero GPU: Asus GTX 780ti Directcu ii SLI RAM: 16GB Corsair Vengeance PSU: Corsair AX860 Case: Corsair 450D Storage: Samsung 840 EVO 250 GB, WD Black 1TB Cooling: Corsair H100i with Noctua fans Monitor: ASUS ROG Swift

laptop

Some ASUS model. Has a GT 550M, i7-2630QM, 4GB of RAM and a WD Black SSD/HDD drive. MacBook Pro 13" base model
Apple stuff from over the years
iPhone 5 64GB, iPad air 128GB, iPod Touch 32GB 3rd Gen and an iPod nano 4GB 3rd Gen. Both the touch and nano are working perfectly as far as I can tell :)

Yeah. Mantle wasn't a bad thing after all.

 

Also, I doubt AMD had much to do with eDP. Do you see AMD laptops out there? There are very few. Still, you never know.

 

eDP's variable refresh rate was the basis of Adaptive Sync. Following the same logic, they applied it to desktop monitors, which of course required a different implementation.

Freesync did bring a standard, which is definitely good. But its main purpose was, and is, to compete against Gsync. You can tell that from its name, after all.

 

Well, AMD, Nvidia and Intel are all members of VESA, so I believe they all had a big part to play in eDP. Maybe Intel and AMD mostly, as they make the vast majority of laptop graphics (integrated). There are plenty of AMD laptops out there, but they belong in the lower end, mostly due to 28nm and low IPC, I guess. That might change if they release a laptop Zen.

 

Sure, Freesync and Adaptive Sync were a counter to Gsync. That goes without saying. The point is that Nvidia left them no choice, as Gsync was not made an industry standard. I've given Nvidia kudos for inventing Gsync, as it is really good tech, functionality-wise. But Nvidia wanted everything proprietary, so AMD had to make their own version.

 

Also, Freesync is free on monitors that don't charge a price premium, like the LG 34UM67 vs 34UM65 (actually the Freesync 67 version is cheaper than the non-Freesync 65 version in some places). It really comes down to the monitor vendor. Either way, AMD's point was that it was royalty-free.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


@ I was unaware Nvidia offered CUDA to AMD for free. Is that verified somewhere? I'd like to read up on that.


Relevant to this subject: 

 

I assume you wanted this particular time stamp (30mins & 20 seconds in)

 

:edit:

Was experiencing link fuckery, sorry for the edit. And I now see that you did want it time-stamped at 30m 10s. Don't know why it didn't work for me. Effing Flash.


@ I was unaware Nvidia offered CUDA to AMD for free. Is that verified somewhere? I'd like to read up on that.

 

http://www.extremetech.com/computing/82264-why-wont-ati-support-cuda-and-physx

 

Search for my posts in this thread and you will find nearly all the links to articles that claim AMD was offered PhysX and CUDA.

 

What's happening here really is a case of AMD not having the cash to compete, and I'm confident in saying that nobody in this thread likes that. We all want healthy competition and we all want AMD to survive. However, there are varying degrees of disgust with AMD at the moment for resorting to little more than mud-slinging because it appears they can't afford to keep up, and in all honesty we don't know why they won't enable CUDA. It was free once, and even if there were a cost to implement it now, it wouldn't be that much; I am sure consumers would pay a few dollars (exaggerated cost) more per GPU to have full, unhindered performance in CUDA-based software.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


@ I was unaware Nvidia offered CUDA to AMD for free. Is that verified somewhere? I'd like to read up on that.

 

http://www.extremetech.com/computing/82264-why-wont-ati-support-cuda-and-physx

 

http://www.techpowerup.com/64787/radeon-physx-creator-nvidia-offered-to-help-us-expected-more-from-amd.html

 

Not only did Nvidia offer AMD the ability to license CUDA (and thus PhysX, and eventually GameWorks), but Nvidia actually wanted to help out the people who got CUDA going on AMD cards. AMD, however, said nope.

Not the first time Nvidia has offered up CUDA. Intel even took the bait and got the license to do whatever they want with it (within the framework of their agreement).

 

 

 

If open standards are so important, why partner with Havok for physics work? That technology is far from open; it’s owned by Intel, the other chief competitor of AMD/ATI. Of course, there are no truly open physics middleware solutions on the market with any traction, so that point might be kind of moot. 

 

Keosheyan says, “We chose Havok for a couple of reasons. One, we feel Havok’s technology is superior. Two, they have demonstrated that they’ll be very open and collaborative with us, working together with us to provide great solutions. It really is a case of a company acting very independently from their parent company. Three, today on PCs physics almost always runs on the CPU, and we need to make sure that’s an optimal solution first.” Nvidia, he says, has not shown that they would be an open and truly collaborative partner when it comes to PhysX. The same goes for CUDA, for that matter.

 

 

Hubris? Too stubborn? 

 

Havok is as closed off as PhysX, only Havok is old and outdated and needs to be taken out to the back 40 and buried. 


I see free access and an opportunity to create a driver for CUDA, but not a license agreement. Nor a guarantee that the investment in time and resources would be protected or defensible if Nvidia changed its mind. I don't dispute they had access, and SHOULD have made use of it, even if it would only have been tenable for the short term. But I can see why AMD would have had misgivings over investing so much of their resources in an avenue that, in the modern litigious landscape, looks like a plate of "free" cookies in the middle of a bear trap.

 

I do think both sides should come to the table in earnest, hash out some hard-and-fast agreements, push beyond these inane pseudo-competitions, and go back to direct competition.


And what company might that be? Surely you're not talking about the company that is bringing the next-gen VRAM, aka HBM, to market, that is the main driver behind the next gens (yes, plural) of graphics APIs, DX12 and to a very large extent Vulkan, and that made the first physics-based hair ever in TressFX?

 

AMD did shit to provoke DX12. Nvidia has worked on DX12 with Microsoft for the past 5 years. HBM does nothing, since the vast majority of the time your GPU bottleneck is not memory bandwidth anyway.

QUOTE ME IN A REPLY SO I CAN SEE THE NOTIFICATION!

When there is no danger of failure there is no pleasure in success.


Slightly Mad Studios gives a more detailed explanation of how their game is made and says Project CARS is NOT a GameWorks game, etc. etc...

 

http://www.pcgamer.com/project-cars-studio-denies-it-intentionally-crippled-performance-on-amd-cards/

“I like being alone. I have control over my own shit. Therefore, in order to win me over, your presence has to feel better than my solitude. You're not competing with another person, you are competing with my comfort zones.”  - portfolio - twitter - instagram - youtube


And what company might that be? Surely you're not talking about the company that is bringing the next-gen VRAM, aka HBM, to market, that is the main driver behind the next gens (yes, plural) of graphics APIs, DX12 and to a very large extent Vulkan, and that made the first physics-based hair ever in TressFX?

Oh FFS, are you joking? Nvidia, Intel, and Micron established the HMC consortium LOOOONG before AMD teamed up with Hynix to make HBM. Micron just decided not to go the 2.5D route, and Hynix will make it to market first with a product inferior to HMC, which arrives with Knights Landing in Q1 2016. Also, DX12 was in development long before Mantle. At best AMD can get credit for Vulkan, though Intel has also done most of the work for Linux and FreeBSD compliance with it. And TressFX is not the first hair library. It may be the first time someone packaged all such functions together under a label, but Nvidia and AMD both had hair-generation functions built into DX and OpenGL long before TressFX.

Memory is also not the bottleneck in games. In scientific computing, both it and the PCIe bus are the biggest bottlenecks right now, and AMD does need a win there, but to get that win it should have worked to make Fiji PCIe 4.0 compliant, because Knights Landing will be PCIe 4.0 and Skylake E/EP/EX will bring PCIe 4.0 to motherboards. AMD should have aimed for Zen to be PCIe 4.0 as well, but it didn't.
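As a rough back-of-envelope illustration of why the PCIe link, rather than the on-card memory, tends to be the limiter in compute workloads, here are the theoretical peak numbers (one direction for PCIe). The GDDR5 and HBM inputs are the commonly quoted peaks for the 290X and Fiji; treat the whole thing as a sketch, not a benchmark.

```python
# Back-of-envelope theoretical peaks (per direction for PCIe), showing why the
# bus between host and GPU is the usual bottleneck in compute workloads rather
# than the on-card memory.

def pcie_gbps(gt_per_s: float, lanes: int = 16, encoding_efficiency: float = 128 / 130) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s.
    gt_per_s: per-lane transfer rate in GT/s (8 for PCIe 3.0, 16 for PCIe 4.0)."""
    return gt_per_s * encoding_efficiency * lanes / 8.0  # bits -> bytes

def memory_gbps(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak on-card memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps_per_pin / 8.0

if __name__ == "__main__":
    print(f"PCIe 3.0 x16 : ~{pcie_gbps(8):.1f} GB/s")
    print(f"PCIe 4.0 x16 : ~{pcie_gbps(16):.1f} GB/s")
    print(f"R9 290X GDDR5 (512-bit @ 5 Gbps): {memory_gbps(512, 5):.0f} GB/s")
    print(f"Fiji HBM1 (4096-bit @ 1 Gbps)   : {memory_gbps(4096, 1):.0f} GB/s")
```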

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Oh FFS, are you joking? Nvidia, Intel, and Micron established the HMC consortium LOOOONG before AMD teamed up with Hynix to make HBM. Micron just decided not to go the 2.5D route, and Hynix will make it to market first with a product inferior to HMC, which arrives with Knights Landing in Q1 2016. Also, DX12 was in development long before Mantle. At best AMD can get credit for Vulkan, though Intel has also done most of the work for Linux and FreeBSD compliance with it. And TressFX is not the first hair library. It may be the first time someone packaged all such functions together under a label, but Nvidia and AMD both had hair-generation functions built into DX and OpenGL long before TressFX.

Just give up on the idiot, he wouldn't read all of the information presented in the other thread.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Just give up on the idiot, he wouldn't read all of the information presented in the other thread.

 

This is the same person who says the 295x2 is an engineering failure while it keeps both Hawaii cores at under 60 C.


This is the same person who says the 295x2 is an engineering failure while it keeps both Hawaii cores at under 60 C.

That's more to do with the liquid cooling (the low temps, that is). And cooling a dual-GPU card has always been a challenge anyway, on both Nvidia's and AMD's side. I think of Notional as a wandering troll/fanboy, since he won't read any evidence and posted that 'hairworks in the witcher 3' image.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


This is the same person who says the 295x2 is an engineering failure while it keeps both Hawaii cores at under 60 C.

 

The 295x2 needs water to do what the Titan Z does on air. That is an engineering failure. Not everyone can use water cooling, especially in very small cases. For example, Linus could've put a Titan Z into their latest build just fine and gotten a lot of CUDA acceleration for CUDA-accelerated applications like Premiere Pro, without really needing to sacrifice anything else in that compact case.

Put a 295x2 into a case like that. You kinda can't... 


Oh FFS, are you joking? Nvidia, Intel, and Micron established the HMC consortium LOOOONG before AMD teamed up with Hynix to make HBM. Micron just decided not to go the 2.5D route, and Hynix will make it to market first with a product inferior to HMC, which arrives with Knights Landing in Q1 2016. Also, DX12 was in development long before Mantle. At best AMD can get credit for Vulkan, though Intel has also done most of the work for Linux and FreeBSD compliance with it. And TressFX is not the first hair library. It may be the first time someone packaged all such functions together under a label, but Nvidia and AMD both had hair-generation functions built into DX and OpenGL long before TressFX.

Memory is also not the bottleneck in games. In scientific computing, both it and the PCIe bus are the biggest bottlenecks right now, and AMD does need a win there, but to get that win it should have worked to make Fiji PCIe 4.0 compliant, because Knights Landing will be PCIe 4.0 and Skylake E/EP/EX will bring PCIe 4.0 to motherboards. AMD should have aimed for Zen to be PCIe 4.0 as well, but it didn't.

 

I'm sorry, but where are the products from the HMC consortium? I'm specifically talking about HBM, and the fact that AMD and SK Hynix made that tech and did the prototyping together. Why are you belittling that? And why exactly do you believe HMC to be better?

 

You don't know when AMD started making Mantle, but I acknowledge that DX12 has been in development for many years (5 years sounds about right). But do you honestly think AMD has had no hand in the development of DX12 whatsoever? Not even when making the XBONE with MS back before 2012?

 

As for bandwidth, agreed, it's not an issue right now, but you are missing HBM's other advantages, like lower power usage, which means fewer voltage regulators and generally a smaller footprint on the PCB, making it possible to build graphics cards that are a lot smaller and with fewer traces in the PCB.
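As a very rough sketch of the power point, here is what the bandwidth-per-watt figures AMD quoted in its own HBM material (roughly 10.66 GB/s per watt for GDDR5 and around 35 GB/s per watt for HBM1) work out to at a Fiji-class bandwidth target. Treat these as illustrative vendor numbers, not independent measurements.

```python
# Rough estimate of memory-subsystem power at a given bandwidth target, using
# the bandwidth-per-watt figures AMD quoted in its HBM marketing material
# (~10.66 GB/s per watt for GDDR5, ~35 GB/s per watt for HBM1). These are
# vendor-supplied, illustrative numbers only.

TARGET_BANDWIDTH_GBPS = 512.0  # Fiji-class bandwidth target

GB_PER_S_PER_WATT = {
    "GDDR5": 10.66,
    "HBM1": 35.0,
}

for tech, efficiency in GB_PER_S_PER_WATT.items():
    watts = TARGET_BANDWIDTH_GBPS / efficiency
    print(f"{tech}: ~{watts:.0f} W of memory power to reach {TARGET_BANDWIDTH_GBPS:.0f} GB/s")
```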

 

Just give up on the idiot, he wouldn't read all of the information presented in the other thread.

 

Please drop the ad hominem; it's not only childish but also against the CoC. But by all means, do enlighten me: what have I not read? (There are many threads now, hard to keep track.)

 

That's more to do with the liquid cooling (the low temps, that is). And cooling a dual-GPU card has always been a challenge anyway, on both Nvidia's and AMD's side. I think of Notional as a wandering troll/fanboy, since he won't read any evidence and posted that 'hairworks in the witcher 3' image.

 

Exactly, which is why using an AIO on that card was such a fitting solution. The Titan Z was a triple-slot card that thermally throttled. Had it used the same cooling solution as the 295x2, and been a lot cheaper, it would have been a good card.

 

Again an ad hominem attack. I guess that is what people turn to when they no longer have any arguments.

 

The 295x2 needs water to do what the Titan Z does on air. That is an engineering failure. Not everyone can use water cooling, especially in very small cases. For example, Linus could've put a Titan Z into their latest build just fine and gotten a lot of CUDA acceleration for CUDA-accelerated applications like Premiere Pro, without really needing to sacrifice anything else in that compact case.

Put a 295x2 into a case like that. You kinda can't... 

 

The Titan Z thermally throttled and was factory underclocked, which is the exact opposite of the 295x2. Now THAT is an engineering failure. No one buys such an expensive card if they cannot use it in their case. And in that specific build they used an AIO cooler on the CPU, so yeah, they could have made room for the 295x2 by using an air cooler on the CPU instead. I don't get your last sentence. You most certainly can put that card in that case.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Exactly, which is why using an AIO on that card was such a fitting solution. The Titan Z was a triple-slot card that thermally throttled. Had it used the same cooling solution as the 295x2, and been a lot cheaper, it would have been a good card.

 

 

The Titan Z thermally throttled and was factory underclocked, which is the exact opposite of the 295x2. Now THAT is an engineering failure. No one buys such an expensive card if they cannot use it in their case. And in that specific build they used an AIO cooler on the CPU, so yeah, they could have made room for the 295x2 by using an air cooler on the CPU instead. I don't get your last sentence. You most certainly can put that card in that case.

Did you actually test those things, or did you just make that up? Also,

you have an RM 650...

 


AMD 5000 Series Ryzen 7 5800X | MSI MAG X570 Tomahawk WiFi | G.SKILL Trident Z RGB 32GB (2 * 16GB) DDR4 3200MHz CL16-18-18-38 | Asus GeForce RTX 3080 Ti STRIX | SAMSUNG 980 PRO 500GB PCIe NVMe Gen4 SSD M.2 + Samsung 970 EVO Plus 1TB PCIe NVMe M.2 (2280) Gen3 | Cooler Master V850 Gold V2 Modular | Corsair iCUE H115i RGB Pro XT | Cooler Master Box MB511 | ASUS TUF Gaming VG259Q Gaming Monitor 144Hz, 1ms, IPS, G-Sync | Logitech G 304 Lightspeed | Logitech G213 Gaming Keyboard |

PCPartPicker 


I'm sorry, but where are the products from the HMC consortium? I'm specifically talking about HBM, and the fact that AMD and SK Hynix made that tech and did the prototyping together. Why are you belittling that? And why exactly do you believe HMC to be better?

 

You don't know when AMD started making Mantle, but I acknowledge that DX12 has been in development for many years (5 years sounds about right). But do you honestly think AMD has had no hand in the development of DX12 whatsoever? Not even when making the XBONE with MS back before 2012?

 

As for bandwidth, agreed, it's not an issue right now, but you are missing HBM's other advantages, like lower power usage, which means fewer voltage regulators and generally a smaller footprint on the PCB, making it possible to build graphics cards that are a lot smaller and with fewer traces in the PCB.

 

 

Please drop the ad hominem; it's not only childish but also against the CoC. But by all means, do enlighten me: what have I not read? (There are many threads now, hard to keep track.)

 

 

Exactly, which is why using an AIO on that card was such a fitting solution. The Titan Z was a triple-slot card that thermally throttled. Had it used the same cooling solution as the 295x2, and been a lot cheaper, it would have been a good card.

 

Again an ad hominem attack. I guess that is what people turn to when they no longer have any arguments.

 

 

The Titan Z thermally throttled and was factory underclocked, which is the exact opposite of the 295x2. Now THAT is an engineering failure. No one buys such an expensive card if they cannot use it in their case. And in that specific build they used an AIO cooler on the CPU, so yeah, they could have made room for the 295x2 by using an air cooler on the CPU instead. I don't get your last sentence. You most certainly can put that card in that case.

An ad hominem (Latin for "to the man" or "to the person"[1]), short for argumentum ad hominem, means responding to arguments by attacking a person's character, rather than to the content of their arguments. When used inappropriately, it is a fallacy in which a claim or argument is dismissed on the basis of some irrelevant fact or supposition about the author or the person being criticized.[2] Ad hominem reasoning is not always fallacious, for example, when it relates to the credibility of statements of fact or when used in certain kinds of moral and practical reasoning.[3]

I'm well within reason to say what I've been saying. You have no credibility whatsoever, as in all of the discussions I've seen you have used only a single source, plus there was that attempt at a meme with the Witcher 3 HairWorks. I, however, have used a wide range of sources that all back up my points. You just can't be bothered to read them, kind of like the way you can't be bothered to find, much less read, more than a single source.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


BTW, this is what I mean by the attempts at a meme:

 

Lol, I found a picture of a Witcher 3 bear with HairWorks:

 

hairless-bears.jpg

 

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Did you actually test those things, or did you just make that up? Also,

you have an RM 650...

 

How is the fact that he has an RM 650 relevant to the discussion?

 

Since we are throwing random crap at the wall: I like carrots.

 

Must be very relevant to the issue at hand.


The 295x2 needs water to do what the Titan Z does on air. That is an engineering failure. Not everyone can use water cooling, especially in very small cases. For example, Linus could've put a Titan Z into their latest build just fine and gotten a lot of CUDA acceleration for CUDA-accelerated applications like Premiere Pro, without really needing to sacrifice anything else in that compact case.

Put a 295x2 into a case like that. You kinda can't... 

I'm sorry but everything in the above quote is inaccurate and grossly skewed.

 

Firstly, dual air-cooled 290Xs:

http://www.newegg.com/Product/Product.aspx?Item=N82E16814131584

 

Premiere Pro wasn't the best choice to prove that CUDA was better:

http://www.dslrfilmnoob.com/2014/04/26/opencl-vs-cuda-adobe-premiere-cc-rendering-test/

 

This is an R9 295x2 in a mini-ITX case:

http://linustechtips.com/main/topic/174290-best-mini-itx-case-for-an-r9-295x2/

 

Or you can go through this gigantic thread about the R9 295x2 and stop spreading misinformation.


This topic is now closed to further replies.

