
[Updated] Oxide responds to AotS Conspiracies, Maxwell Has No Native Support For DX12 Asynchronous Compute

So, Kepler can't do tessellation that well, and Maxwell can. Nvidia turned tessellation up to 11 with GameWorks.

 

Maxwell can't do Asynchronous Compute and Pascal (presumably) can.... What's stopping them from focusing game development on A-Sync Compute in the next year and a half?

Ensure a job for life: https://github.com/Droogans/unmaintainable-code

Actual comment I found in legacy code: // WARNING! SQL injection here!


So, Kepler can't do tessellation that well, and Maxwell can. Nvidia turned tessellation up to 11 with GameWorks.

 

Maxwell can't do Asynchronous Compute and Pascal (presumably) can.... What's stopping them from focusing game development on A-Sync Compute in the next year and a half?

They want people to buy their cards and really just force them to. AMD put everything into GCN 1.2 - tessellation, async compute, and more - whereas Nvidia likely anticipated the outcome and planned for their cards to have a shorter lifespan, so that enthusiasts are forced to upgrade.

Archangel (Desktop) CPU: i5 4590 | GPU: Asus R9 280 3GB | RAM: HyperX Beast 2x4GB | PSU: SeaSonic S12G 750W | Mobo: GA-H97m-HD3 | Case: CM Silencio 650 | Storage: 1TB WD Red
Celestial (Laptop 1) CPU: i7 4720HQ | GPU: GTX 860M 4GB | RAM: 2x4GB SK Hynix DDR3 | Storage: 250GB 850 EVO | Model: Lenovo Y50-70
Seraph (Laptop 2) CPU: i7 6700HQ | GPU: GTX 970M 3GB | RAM: 2x8GB DDR4 | Storage: 256GB Samsung 951 + 1TB Toshiba HDD | Model: Asus GL502VT

Windows 10 is now MSX! - http://linustechtips.com/main/topic/440190-can-we-start-calling-windows-10/page-6


Did you look at his second-most recent post, about his schedule only allowing 10-minute breaks? That's enough time to compose a post or two, but probably not enough to dig through benchmarks from months ago. Excuse or not, he's not so petty that he devotes more time to a tech forum than to his college education. And I wouldn't say I idolize him, but I like him and his posts because they make sense, usually more than the posts of the person he's arguing with.

 

But I would call it railing on him when you prematurely declare that you've won the argument and gloat about how terrible it is for a person with a college education to lose to someone who got a sub-3.0 GPA in high school, when he hasn't responded for the aforementioned reason. You seem overly eager to prove him wrong, as if it really bugs you that he might otherwise be better than you, especially when you keep saying that he's only making excuses, as if you think college isn't a legitimate concern. Have you even gone to college? I would assume you haven't, because you only mentioned your high school GPA.

 

As for the 20% thing, it hasn't been proven one way or the other. I can't explain the 1080p Fire Strike scores, but the 4K benchmark seems consistent with the actual performance of Intel's integrated graphics and doesn't have the same inexplicable difference between the GTX 750 and HD 6200. I don't have anything to counter the 50% increase in EUs and the 15% decrease in clock speed, though, so you got me there. ;)

Except the excuse about digging through months of benchmarks is a lie. The chips have not been out for "months"; the 6600K launched on the 5th of August. At most, he has to dig through 26 days' worth of benchmarks, which isn't hard, seeing as there is only a finite number of reviews available for it. I have already searched myself; they do not exist. Nobody in their right mind benches an iGPU at 4K, because the result will be obvious: unplayable.

 

Also, you understand the man has been here for just over a year and has almost 9,000 posts? That is 10x my post count. He said he is working on his Master's degree, which is 1-6 years of work (generally taking around 3 or 4 years to complete), meaning he managed to make almost 9,000 posts while still working on it. If you look at the posts he made today on his account, he was very active from 10 am to 12 pm - enough to make 12 posts within that two-hour time frame. He did not proclaim how busy he was simply to tell you he was busy. He did it because he is a braggart with a massive ego to boost.

 

 

Eh, I give respect where it's earned and take no bullshit. I've gotten teachers fired and lawyers ruined because of that. As crass and off-putting as it is, it's not arrogance if you actually are as good as you think and sell yourself as, and IBM and their $260,000 starting salary and $50,000 signing bonus agree if I take a job with them at the end of this Fall semester.

 
If his posts make sense, then tell me how they make sense. Tell me that you agree with the 20% number pulled from thin air. Tell me that 50% more EUs at a 15% reduction in core clock still translates to a 50% performance boost, and tell me, knowing all of this, that GT4e will perform 56% faster than the Iris 6200, matching the GTX 950 in speed. Since his posts make sense to you, that means you understand them. If you understand them, you can clearly guide me to reach that same understanding, no?
 
You believe anything he says because he gives off this energy of being highly intelligent, when in reality he is smoke and mirrors. Half of his posts are anti-everything, the other half are pro-Intel. The reason he is pissed off at me is that I seem to be the only person who questions him when he makes bold lies about Intel and their iGPUs. When I ask for proof, he immediately becomes aggressive. This is the third time he has done it, which is why I am relentless in taking him down. Had he been less arrogant in his responses, I wouldn't be so arrogant with mine.
 
You are correct, however, in that I have not gone to college. I find the concept to be worthless given the type of work I plan on doing in my life. While most IT jobs require that piece of paper to get employed, the field I intend to go into does not. In the meantime, I work and study. Granted, it's not the 20-hour day that Patrick seems to be going through (again, doubtful, but I'll humor the notion that he can survive on 4 hours of sleep daily while keeping his wits), but my day is still full of things to do.
 
You mentioned "the 4K benchmark". Have you seen it? If so, why bother with all of this? Why not just give it to me so I can test it once and for all?

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


So, Kepler can't do tessellation that well, and Maxwell can. Nvidia turned tessellation up to 11 with GameWorks.

 

Maxwell can't do Asynchronous Compute and Pascal (presumably) can.... What's stopping them from focusing game development on A-Sync Compute in the next year and a half?

 

Following that logic, it would actually be the architecture after Pascal (or two architectures away) that supports async compute, because you forgot to mention Fermi.

The timeline went Fermi, then Kepler, then Maxwell for DX11 refinement.

For DX10, Nvidia refined the Tesla architecture through three iterations (8000 series, 9000 series, 200 series).

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


I'm just gonna throw this out there... Nvidia stated that the 970 and 980 were fully compliant with DirectX 12. It's early and I'm just speculating here, but what happens if the cards aren't compatible with the compute aspect, and therefore not fully compliant?

 

Probably a class-action lawsuit that will net me about $3.82 in 15 to 20 years.


-snippity-

Welp, here goes. Let's see if I've gotten any better at arguing when it gets real. First, what 4k benchmark did you want? I kinda got lost somewhere during the discussion. Was it for the HD 530, so you can compare the performance of Broadwell GT2 to Skylake GT2? And is that to confirm the 20% boost that he's claiming? If so, it's easy to find the Firestrike 4k score. Or were you more referring to games? 'Cuz, yeah, I haven't seen that either. Maybe I should rephrase what I said before. I usually take what he says at face value because his speculation usually has some merit to it, and he explains most of his points nicely. In this case, it's up to him to provide those benchmarks you want so badly. So, yes, at this point, I would agree with you that the 20% was pulled out of thin air, unless @patrickjp93 is willing to demonstrate otherwise. 

 

That I'm not contesting, but I just don't like that you seem so quick to discount what he says, even when there's no definitive proof one way or the other, because of the way you view him. You say he's making excuses, but it's entirely possible that he is referring to rumored benchmarks that appeared months before Skylake's release. Unlikely, but not impossible. And, aside from that, you don't live his life, and you therefore don't know what he goes through on a daily basis, yet you automatically think he's making excuses. Weren't you just saying that it's foolish to believe the 20% improvement because we can't confirm it? We are well within our rights to believe it, but it would be more logical to wait until the release.

Again, you might argue that the fact that he claimed he would be sorting through "months of benchmarks" when it was a little less than a month is proof enough, but his alleged 10-minute allocated time between activities would play into a mistake like that. Because of my previous experiences with him, I have seen that he does not usually make excuses, though you have experienced otherwise, so discussing this won't get anywhere.

You also say his high post count is another factor in you thinking he's making excuses, but again, you don't know what his schedule was like last year, and he joined during the time summer vacation is usually in full swing. Furthermore, you don't know when he had free time, how well he is able to balance his classes and schedule thanks to his apparent intelligence (it was easy enough in high school for me), and many other factors. But if you want to go ahead and believe what you're saying, that's fine. It doesn't make a difference to me, and certainly not to him.

Finally, even if he was active during that time frame, do you think his first thought when he woke up was to satisfy a random person on the internet? I know you reminded him a little after 11:00 a.m., but at that point, I'd consider that nagging and outright annoying. He doesn't have to satisfy you, especially not a nagging brat (just trying to describe his perception of you) on the internet, and while it would be proper for him to do so, it would be perfectly acceptable for you two to drop it and continue on with your lives until Skylake GT3e and GT4e are released and you can both see for yourselves.

 

And here's what Patrick had to say about his alleged 50% boost, in case you didn't see it before or don't remember: "No, what I focused on was the 72 EU figure, or 50% more of them or 50% more SPs than Broadwell's GT3/e which has 48. The 50% better performance I wait to see, but given what I've seen in actual games between Broadwell and Skylake, where Anandtech's results are far from the average for once, it's gonna happen." He's speculating there, but has confidence in his views because of past experience. It still doesn't address the 20%, though, unless I'm missing something. Wait, I think I've got it. He's saying that with the current driver implementation, the HD 6200 is just 10-15% behind the GTX 750. That means with a 20% boost Skylake GT3e would be slightly better, and perhaps drivers will improve slightly by that time too. Then, the GTX 950 has around a 58% boost over the GTX 750, which would put everything on this sort of scale:

 

HD 6200: 1
GTX 750: 1.15
GTX 950: 1.817
Skylake GT4e: 1.53, given the 20% flat boost, the 50% increase in EU count, and the 15% reduction in clock speed.

Oh, wait, I didn't even see him mention the reduction in clock speed that you have brought up several times. Where did you hear that, and is it confirmed? This score doesn't beat the GTX 950, but memory bandwidth in the Crystalwell cache will possibly be improved, and system RAM could still be the bottleneck. Remember that he mentioned 3200 MHz RAM, which could compensate for the last 15%.
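To make the arithmetic behind that scale explicit, here's a quick sketch of how those numbers combine. The 20%, 50%, and 15% figures are the claims being argued over in this thread, not confirmed specs, so treat the GT4e line as pure speculation:

```cpp
#include <cstdio>

int main() {
    const double hd6200 = 1.0;                         // baseline
    const double gtx750 = hd6200 * 1.15;               // 750 ~15% ahead of the 6200
    const double gtx950 = gtx750 * 1.58;               // 950 ~58% ahead of the 750
    const double gt4e   = hd6200 * 1.20 * 1.50 * 0.85; // +20% boost, +50% EUs, -15% clock

    std::printf("GTX 750:     %.3f\n", gtx750); // 1.150
    std::printf("GTX 950:     %.3f\n", gtx950); // 1.817
    std::printf("GT4e (est.): %.3f\n", gt4e);   // 1.530
    return 0;
}
```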

 

And, just so you know, I'm not trying to discredit you by saying you haven't gone to college, I'm trying to say that you probably don't understand the workings of it.

 

Hmm, I think his arrogance comes from people like you trying to discredit him. He has Asperger's, remember, something my own brother has, and they can work something out in their head that makes perfect sense to them, and he probably doesn't think your posts are sufficient to prove him wrong. See above about my interpretation of the mathematical proof after hearing discussions between all of us.

 

Oh, sorry for the wall of text and how long it took to make this post. I'm not the best at composition at the best of times, and when I concentrate on a single thing for an extended period of time while my retainer is in (which alters my breathing pattern), I tend to get all tingly in most parts of my body due to hyperventilation and a little fuzzy in the head, so it adversely affects my ability to compose a decent post.

Why is the God of Hyperdeath SO...DARN...CUTE!?

 

Also, if anyone has their mind corrupted by an anthropomorphic black latex bat, please let me know. I would like to join you.


This Ashes of the Singularity benchmark is the only proof so far that AMD has better DX12 gains than Nvidia. If AMD continues to show bigger gains in the future, I will consider swapping out my 970 for a 390; until then, my 970 is staying put, as I don't see sufficient evidence. Maybe Nvidia are letting AMD think they can win so they can mock them.

Gpu: MSI 4G GTX 970 | Cpu: i5 4690k @4.6Ghz 1.23v | Cpu Cooler: Cryorig r1 ultimate | Ram: 1600mhz 2x8Gb corsair vengeance | Storage: sandisk ultra ii 128gb (os) 1TB WD Green | Psu: evga supernova g1 650watt | Case: fractal define s windowed |


This Ashes of the Singularity benchmark is the only proof so far that AMD has better DX12 gains than Nvidia. If AMD continues to show bigger gains in the future, I will consider swapping out my 970 for a 390; until then, my 970 is staying put, as I don't see sufficient evidence. Maybe Nvidia are letting AMD think they can win so they can mock them.

By the time you need to swap out a 970 because of poor FPS, the 390 may not be sufficient either, and Arctic Islands/Pascal will be available.


-Well done argument

I don't know how you were at arguing before, but I will tell you that you are not bad at it. Though calling me a brat is a bit rude, considering I have not called you an insulting name yet. I'll keep this short, because I get tired of repeating myself.

 

http://linustechtips.com/main/topic/433764-intel-plans-to-support-vesas-adaptive-sync/?p=5837036

 

This is the post I made questioning his views on GT4e. His logic is that a stock 6200 is only 15% behind a stock 750, and that "overclocking the 6200 makes the gap disappear". That logic is just silly. Why would it be okay to overclock one card and not the other? The gap will always be there, and Maxwell's tendency to overclock high already puts it leagues out of reach of the 6200. The 15-20% number I gave him was me doing him a favor. He went on to call me blind and told me to keep my facts straight, when I was probably the only person not out of line, simply asking him where he got his information from.

 

How am I being a brat for questioning unproven logic? We, as human beings, are curious by nature. I simply wanted to know where he got his information and why he was sure of it. Now I know his only source is 3DMark, which is very inaccurate for gauging how well a product will perform in real gaming. That 20% number no longer means anything to me. On that note, I did not discount what he said; I only said the math did not add up.

 

As for you wanting to know where I got the 1 GHz clock rate from, I only have two sources:

http://wccftech.com/intel-skylake-gen9-graphics-architecture-explained-gt2-24-eus-gt3-48-eus-gt4e-72-eus/

https://en.wikipedia.org/wiki/Intel_HD_and_Iris_Graphics

 

Just use Control+F and type GT4e, look for the Iris Pro 580 or "Intel 9th series", and you should see GT4e running at 1 GHz with 72 EUs, hitting 1152 GFLOPS. Again, this is not the confirmed speed (it's not released yet), but it is the currently known speed. Remember, certain SKUs will ship the same iGPU but run it at different frequencies depending on the nomenclature of the chip itself.
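For anyone wondering where 1152 GFLOPS comes from, it's just the EU count times the per-EU throughput times the clock. This little sketch assumes the usual Gen9 EU layout (two 4-wide SIMD FPUs per EU, with an FMA counted as two FLOPs) and the rumored 1 GHz clock, so it's a back-of-the-envelope check, not a spec sheet:

```cpp
#include <cstdio>

int main() {
    const double eus            = 72;  // GT4e EU count from the linked sources
    const double lanes_per_eu   = 8;   // Gen9: two 4-wide SIMD FPUs per EU
    const double flops_per_lane = 2;   // an FMA counts as 2 FLOPs per cycle
    const double clock_ghz      = 1.0; // rumored (not confirmed) GT4e clock

    // 72 * 8 * 2 * 1.0 GHz = 1152 GFLOPS
    std::printf("%.0f GFLOPS\n", eus * lanes_per_eu * flops_per_lane * clock_ghz);
    return 0;
}
```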

 

Also, the speed of system memory will not compensate for the 15% difference at all. DDR4 at 4266 MHz (its maximum theoretical speed) will cap out at about 68 GB/s in dual channel. That is still slower than even the GTX 750's 80 GB/s. Seeing as the GTX 750 still cannot handle most 1080p titles on high details, it would be pretty foolish to think an iGPU will be able to do it. Like I said before, I expect GT4e to match the 750 Ti, putting it 20% faster than a normal 750 but still 30% slower than a GTX 950. This is not an insult, but a huge compliment to Intel's work.
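The 68 GB/s and 80 GB/s figures fall out of the standard peak-bandwidth formula (transfer rate times bus width). This sketch assumes a dual-channel DDR4 setup and the GTX 750's 128-bit GDDR5 at roughly 5 GT/s effective; exact retail numbers vary slightly by SKU:

```cpp
#include <cstdio>

// Peak bandwidth in GB/s: transfers per second times bytes per transfer.
static double peak_gbs(double mega_transfers_per_sec, int bus_width_bits) {
    return mega_transfers_per_sec * 1e6 * (bus_width_bits / 8.0) / 1e9;
}

int main() {
    // DDR4-4266 in a typical dual-channel (2 x 64-bit) configuration.
    std::printf("DDR4-4266 dual channel: %.1f GB/s\n", peak_gbs(4266, 128)); // ~68.3
    // GTX 750: 128-bit GDDR5 at ~5 GT/s effective.
    std::printf("GTX 750 GDDR5:          %.1f GB/s\n", peak_gbs(5000, 128)); // ~80.0
    return 0;
}
```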

 

Also, his arrogance does not come from people like me "trying to discredit him". He was arrogant long before I spoke to him. In fact, he was arrogant to me over a simple question, so clearly it's not from "people like me". People like me are responsible for removing that arrogance by matching it and reminding him that it does nothing to be arrogant. Arrogance intimidates the timid, but it does not work on those who are certain of their abilities.

 

While I would normally crucify you for purposely involving yourself in an argument you have nothing to do with, your intention to rectify the situation is respectable. That being said, I'll follow your advice and wait for GT4e to land in consumer hands. If I am wrong, I will apologize. If he is wrong, he will eat his words. Either way, everyone wins and loses. Yay for the inevitability of time!

 

Oh, and Shakaza: if you get in between me and Patrick again, I will call upon the might of Macho Man Randy Savage and drop an elbow on you. You've been warned!


Following that logic, it would actually be the architecture after Pascal (or two architectures away) that supports async compute, because you forgot to mention Fermi.

The timeline went Fermi, then Kepler, then Maxwell for DX11 refinement.

For DX10, Nvidia refined the Tesla architecture through three iterations (8000 series, 9000 series, 200 series).

 

I didn't comment about previous generations 'cuz I wasn't around the tech sphere during those days. I can only comment on what I know about. :D

So you may as well be right. Dunno.


-snip-

Sorry, calling you a brat was just my attempt to describe how I think Patrick feels about you. I apologize if I offended you.

 

Yes, I never did understand why he was talking about overclocking the iGPU. That was another thing he said that I didn't really get because it really isn't a legitimate argument.

 

As for your sources for the clock of GT4e, fair enough. That's about as good as we're going to get right now. 

 

I'm not sure about the RAM, to be honest. If it was the bottleneck before, increasing the bandwidth by even 10% would offer fairly good improvement in performance. It would close the gap a little more, I'd assume.

 

And I suppose your comment about his arrogance is true. Maybe I just didn't see it before. Ah well, and thanks.

 

Don't judge me for arguing! When there's something left unsaid in a topic I'm interested in, I feel compelled to fill that gap in the discussion, and I think I did quite nicely, if I do say so myself. There's nothing left to say, really. 

 

Alright, when I inevitably interfere again, you can't say you didn't warn me. :3


The shitstorm has already begun on Reddit :D ... people, including long-time fanboys, are pissed off at NV and returning their cards for AMD ones. This DX12 fiasco might hit Nvidia hard if Pascal won't support async compute - though I doubt that, 'cuz NV has been working on it for the past 2 years. Well, Pascal might be delayed 'cuz of HBM2 exclusivity for AMD. Win-win anyway. :D

AMD Rig (Upgraded): FX 8320 @ 4.8 GHz, Corsair H100i GTX, ROG Crosshair V Formula, 16 GB 1866 MHz RAM, MSI R9 280X Gaming 3G @ 1150 MHz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience - recently bought): MSI GT72S Dominator Pro G (i7 6820HK, 16 GB RAM, 980M SLI, G-Sync, 1080p, 2x128 GB SSD + 1TB HDD)... FeelsGoodMan


The shitstorm has already begun on Reddit :D ... people, including long-time fanboys, are pissed off at NV and returning their cards for AMD ones. This DX12 fiasco might hit Nvidia hard if Pascal won't support async compute - though I doubt that, 'cuz NV has been working on it for the past 2 years. Well, Pascal might be delayed 'cuz of HBM2 exclusivity for AMD. Win-win anyway. :D

Hmm, if AMD does end up winning DX12 by a landslide, I'm sort of screwed since my desktop with a GTX 970 is at home. That would be a very awkward situation, but I suppose it'd still be fine since it's a good card. 

 

At this point, people are waaaaay overreacting. Even if there is some logic behind why AMD brought such a victory in this benchmark, there isn't anything to gain by selling one's graphics card now. It would make more sense to wait until we see how things play out.


They are not overreacting... they are doing the right thing: supporting the right company - the company that creates open industry standards, that doesn't lie to its customers repeatedly, that develops new technologies, etc. I admit NV supports all the other DX12 features, but those are just cosmetic improvements like lighting and smoke effects. The Oxide dev already said that they could squeeze up to 40% more GPU performance if they utilized async compute to its full potential, but they barely used it. Looking at the consoles that use AMD hardware, it's going to be easier for devs to port games over to PC in Vulkan or DX12, 'cuz drivers won't matter as much as they do now. If Zen turns out to be somehow decent, the future looks good for AMD.


-snip-

As much as I don't like to jump to conclusions, the people returning their cards are sort of right. Nvidia has been doing some shady stuff: lying about what their cards can and cannot do, blaming others despite knowing it's their own fault, locking up games via GameWorks, and similar. Their marketing is strong, but they dropped the ball. It's pretty much history repeating itself - they've shit the bed before, and this looks to be the next time they're doing so.

This will also be Nvidia's first time with HBM. Anyone remember what happened when they first tried GDDR5? Thermi - all-in-one GPU toasters and ovens - that's what happened. This time I expect them to break the memory either by overvolting it or by pushing it too far; there's a reason AMD denied voltage control for the Fury series.

At any rate - let's see what happens next - where's my popcorn


They are not overreacting... they are doing the right thing: supporting the right company - the company that creates open industry standards, that doesn't lie to its customers repeatedly, that develops new technologies, etc. I admit NV supports all the other DX12 features, but those are just cosmetic improvements like lighting and smoke effects. The Oxide dev already said that they could squeeze up to 40% more GPU performance if they utilized async compute to its full potential, but they barely used it. Looking at the consoles that use AMD hardware, it's going to be easier for devs to port games over to PC in Vulkan or DX12, 'cuz drivers won't matter as much as they do now. If Zen turns out to be somehow decent, the future looks good for AMD.

First of all, please quote people in the future so they see your reply. Thanks. :)

 

Anyway, yes, they are overreacting. I want competition, but this is only one benchmark. One, and yet people are treating it as if it is an average performance increase across the board. Whether you like or dislike Nvidia doesn't matter. It's just stupid and an overreaction given the circumstances. It just would've been smarter for them to wait.


At any rate - let's see what happens next - where's my popcorn

Is this what you were looking for?

 

popcorn.gif


First of all, please quote people in the future so they see your reply. Thanks. :)

 

Anyway, yes, they are overreacting. I want competition, but this is only one benchmark. One, and yet people are treating it as if it is an average performance increase across the board. Whether you like or dislike Nvidia doesn't matter. It's just stupid and an overreaction given the circumstances. It just would've been smarter for them to wait.

As I said, they might support all of the other DX12 features, but the one that gives you the FPS increase isn't supported. It's a hardware issue; it has been proven on the internet already, and a driver fix won't fix it, so expect pretty similar results in other games.


As I said, they might support all of the other DX12 features, but the one that gives you the FPS increase isn't supported. It's a hardware issue; it has been proven on the internet already, and a driver fix won't fix it, so expect pretty similar results in other games.

Yep, it looks like that at the moment, but I tend to reserve judgement until more light is shed on these types of situations. :)


-snip-

Unless Intel starts adding abundant amounts of eDRAM onto these iGPUs, I do not see them exceeding 1080p gaming any time soon. DDR4 will not alleviate that pain. Again, assuming someone was able to get their DDR4 DIMMs to run stable at 4266 MHz, they would have 68 GB/s of bandwidth. That is still 17.7% behind the GTX 965M (the mobile equivalent of the GTX 750 Ti). We can look at that card and see that its gaming performance at 1440p is awful. However, I have high hopes for GT4e at 1080p. Remember, the GTX 965M averages roughly 40 FPS at 1080p ultra settings in GTA 5.

 

The GTX 750 Ti also averages 40 FPS on ultra at 1080p (granted, some settings might differ slightly and it's a different part of the game, but both the 965M and the 750 Ti perform very close to each other, within a very small margin of error).

 

If the chip is as strong as I predict, it will perform as well as the GPUs in both of those videos. That is still a big improvement over the Iris Pro 6200, which could average 60 FPS, but only on lower settings.

 

Here is the GTX 950 doing GTA 5 on ultra settings.

https://youtu.be/wSeXuPtTo2E?t=119

 

It averages roughly 52-53 FPS, 30% higher than the other two GPUs linked above - exactly where it should land, given all of our previous math putting the card 30% faster than the 750 Ti. If Patrick is right, the new GT4e chips should perform just like this.
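As a quick check on that 30% figure (using the rough eyeballed averages from the clips, not controlled benchmarks):

```cpp
#include <cstdio>

int main() {
    const double fps_gtx950   = 52.5; // midpoint of the 52/53 FPS quoted above
    const double fps_gtx750ti = 40.0; // quoted 750 Ti / 965M average

    // Relative advantage of the 950 over the 750 Ti in these clips: ~31%.
    std::printf("GTX 950 advantage: %.0f%%\n",
                (fps_gtx950 / fps_gtx750ti - 1.0) * 100.0);
    return 0;
}
```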

 

What we can assume is that it will be enough for 1080p gaming, but current memory solutions prevent us from going beyond that on these iGPUs. Either way, it is still far superior to console graphics, which can hardly run these games at high details at 900p, let alone 1080p. Perhaps when we get another "next generation" console, Intel might be seen as a viable option for Sony or MS.


-snip-

Yep, this seems reasonable. But whoever is right, integrated graphics will soon be viable for more than just casual gaming. :D

 

I'm really looking forward to getting a laptop this fall or winter that can play BF4 or GTA V, unlike my current laptop with an i5-2410M, Intel HD 3000, and 4 GB of RAM, which can barely manage an hour of battery life while watching a video. It is not the ideal college laptop.


Known this for a long time. I'm surprised people are fighting about it now. 

Computing enthusiast. 
I used to be able to input a cheat code; now I've got to input a credit card. - Total Biscuit
 


Basically, from what I'm gathering, architecture differences are at play here, perhaps.

This needs further clarification from an Nvidia source, because Maxwell can do async compute.

There is no question of whether Maxwell "can do async compute" - they do it already in Tesla accelerators.
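Since the whole argument hinges on what "async compute" even means, here's a minimal sketch (my own illustration, not code from Oxide, Nvidia, or the benchmark) of the D3D12 side of it: the application creates a dedicated COMPUTE command queue alongside the graphics (DIRECT) queue, and the driver and GPU are then allowed, but not required, to overlap work from the two queues. Whether Maxwell actually overlaps that work in hardware, rather than serializing it, is exactly what's in dispute.

```cpp
// Build on Windows with: cl /EHsc async_queues.cpp d3d12.lib
// (async_queues.cpp is just a placeholder filename for this sketch.)
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

int main() {
    ID3D12Device* device = nullptr;
    // Default adapter; FEATURE_LEVEL_11_0 is the minimum D3D12 requires.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device))) {
        std::printf("No D3D12-capable device/driver found.\n");
        return 1;
    }

    // Graphics queue: accepts DIRECT command lists (draws, dispatches, copies).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ID3D12CommandQueue* gfxQueue = nullptr;
    device->CreateCommandQueue(&gfxDesc, __uuidof(ID3D12CommandQueue),
                               (void**)&gfxQueue);

    // Dedicated compute queue: dispatches submitted here *may* run concurrently
    // with the graphics queue; synchronization is done explicitly with fences.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ID3D12CommandQueue* computeQueue = nullptr;
    device->CreateCommandQueue(&computeDesc, __uuidof(ID3D12CommandQueue),
                               (void**)&computeQueue);

    std::printf("Created one DIRECT queue and one COMPUTE queue.\n");

    computeQueue->Release();
    gfxQueue->Release();
    device->Release();
    return 0;
}
```

The API lets any D3D12 GPU accept this arrangement; the performance question is purely about how the hardware schedules it.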

