
[UPDATE 3 - Sapphire's Reference Costs $649 USD] AMD Reveals R9 Nano Benchmarks Ahead of Launch

HKZeroFive

-snippity-

It seems to me that the GTX 950 is about 58% faster than the GTX 750 in Firestrike (yes, Firestrike isn't the best measure of real-world performance, but it's the fairest benchmark I could find in a single place: http://www.hardware.fr/articles/941-8/benchmark-3dmark-fire-strike.html), so assuming the rest of his math is right, that leaves us with:

 

GTX 950: 1.58 * 1.2 = 1.896

 

GT4e = 1.5 * 1.2 = 1.8

 

It would make sense for GT4e to be 50% faster than GT3e, but I'm still not convinced that Skylake GT3e will be equal to a GTX 750. Considering the architectural improvements of Skylake over Broadwell, it's possible, but not a sure thing; 10% seems like a realistic improvement, but I dunno.
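To make the speculation concrete, here is my reading of the arithmetic above, with Broadwell GT3e as the 1.0 baseline (a quick Python sketch; the 1.2 architectural factor is exactly the number under dispute, not a measurement):

broadwell_gt3e = 1.0
arch_gain = 1.2                            # the claimed Skylake-over-Broadwell boost
skylake_gt3e = broadwell_gt3e * arch_gain  # the disputed premise: this ~= a GTX 750
gtx950 = 1.58 * arch_gain                  # hardware.fr Firestrike delta over the GTX 750
gt4e = 1.5 * arch_gain                     # 50% more EUs at the same claimed arch gain
print(f"{gtx950:.3f} vs {gt4e:.3f}")       # 1.896 vs 1.800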

 

In any case, GT4e is gonna get relatively close to the GTX 950, and it's very exciting.

 

I'm actually disappointed in @patrickjp93 for not replying to you. I've always seen him reply to posts like these with brilliant comebacks, even if I can't personally see any footing for him to stand on. I'm not sure if he was simply annoyed by your attitude (I know I would be) or he really didn't have anything else to come back with. That leaves a little college freshman to take the reins for him.

 

Now to address NoteBookCheck's benchmarking: sometimes it seems fine, but at other times it's inconsistent and all over the place. A lot of that, I think, has to do with the different CPUs they test. Because they're mobile parts, some are dual-core with low clock speeds, and they seem to hold back even weak integrated graphics. To be fair, there aren't a whole lot of other benchmarks around for integrated mobile graphics, but bear that in mind. :D For example, the i7-5775C has Iris Pro 6200, which is better than the HD 6000 in that it has the eDRAM and roughly 15% higher clocks. I don't think that difference would account for a doubling of framerate in Tomb Raider, or an even bigger gap in BioShock Infinite (http://www.guru3d.com/articles_pages/core_i7_5775c_processor_review_desktop_broadwell,14.html), but, hey, I could be wrong. What I need to do now is find Firestrike graphics scores for the two iGPUs to see whether that adds up...
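As a quick sanity check on that (Python; the 2x figure is just the rough Tomb Raider gap described above, so treat everything here as approximate):

clock_factor = 1.15   # Iris Pro 6200's clock advantage over HD 6000
observed_gap = 2.00   # the roughly doubled framerate
print(f"{observed_gap / clock_factor:.2f}x")  # ~1.74x left to explain via eDRAM, CPU, or drivers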

 

Also, why didn't you address the differences in drivers that he claimed? Did you think it was bogus because you disagree with the rest of his post, or could you not find anything to confirm those claims? Stuff like that is hard to find, sure, since Intel driver releases aren't big deals like AMD's or NVIDIA's, but you could've asked him. xD

Why is the God of Hyperdeath SO...DARN...CUTE!?

 

Also, if anyone has their mind corrupted by an anthropomorphic black latex bat, please let me know. I would like to join you.


It seems to me that the GTX 950 is about 58% faster than the GTX 750 in Firestrike... -snip-

My attitude comes from my history with Patrick. In those threads I linked earlier, I asked him questions without any aggression, and he resorted to insults rather than giving me a legitimate answer, or even a "well, it's just a guess at this point" kind of post. He speaks in absolutes when he has no evidence to back up his claims.

 

The only way for his "GT4e = 1.5 * 1.2 = 1.8" to be true is if that 20% number he pulled out of thin air is true. That 50% is also arbitrary. GT4e will have 50% more EUs, that is true, but the clock rate will not be the same: it has dropped from 1150 MHz to 1000 MHz, which means scaling will not be linear. Therefore I cannot believe GT4e is "50% faster" until I see it is "50% faster".
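To put a number on that non-linearity, here is a minimal Python sketch; the 48 and 72 EU counts are the commonly reported figures for GT3e and GT4e, which I am assuming here:

gt3e_eus, gt3e_mhz = 48, 1150
gt4e_eus, gt4e_mhz = 72, 1000
ratio = (gt4e_eus / gt3e_eus) * (gt4e_mhz / gt3e_mhz)
print(f"{ratio:.2f}x")  # ~1.30x theoretical ALU throughput, not 1.5x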

 

You are correct in your statement about NoteBookCheck's benchmarks. Sadly, there are no other places that really bench these GPUs, because half of them only exist on mobile SKUs. It is impossible to get a fair graphics comparison between mobile and desktop SKUs, because the desktop CPUs always have an advantage over mobile, which skews the comparison even more. He was claiming that Skylake GT2 (the 6600K and 6700K) is 20% faster than Broadwell GT2 (the mobile 5700HQ). How can we compare these when the mobile CPU is so much weaker than the desktop CPUs?

 

http://ark.intel.com/products/87716/Intel-Core-i7-5700HQ-Processor-6M-Cache-up-to-3_50-GHz

http://ark.intel.com/products/88195/Intel-Core-i7-6700K-Processor-8M-Cache-up-to-4_20-GHz

 

My point is, you can't just throw out a number and use it as a performance metric when it cannot be proven. It could be 20%; it could also be higher or lower. We simply do not know. We need mobile Skylake SKUs to arrive with CPUs that match the mobile Broadwell parts in clock speed, then factor in the IPC improvements. From there, we compare the iGPUs against each other to get an accurate representation of the generational improvement. I still think GT4e will be 20-30% slower than the GTX 950, maybe even slower than that. And that's before solving the memory bandwidth issue. Patrick said that with 3200 MHz DDR4, memory bandwidth would be okay. DDR4-3200 in dual channel is 51,200 MB/s (51.2 GB/s). How will this compete with the GTX 950's 105,600 MB/s (105.6 GB/s) of memory bandwidth? Do not tell me the 128 MB of eDRAM will be enough to handle 1080p gaming without needing to go out to memory, because that is silly.
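For anyone checking the bandwidth math, it is just transfer rate times bus width (a Python sketch assuming dual-channel DDR4 with 64-bit channels, which is what desktop Skylake uses):

def ddr4_gbs(mt_per_s, channels=2, bytes_per_channel=8):
    # MT/s x 8 bytes per 64-bit channel x number of channels
    return mt_per_s * bytes_per_channel * channels / 1000

print(ddr4_gbs(3200))         # 51.2 GB/s, as above
print(6600 * 128 / 8 / 1000)  # GTX 950: 6.6 Gbps GDDR5 on a 128-bit bus = 105.6 GB/s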

 

Realistically, I think GT4e will match the GTX 750 Ti. That is not a bad thing, contrary to what people seem to think when trying to convince me it will match the 950. The 750 Ti is already stronger than the mobile GTX 950M, and matches the GTX 960M in gaming. When I say GT4e will match the 750 Ti, I mean that as a real compliment.

 

 

 

I'm actually disappointed in @patrickjp93 for not replying to you. I've always seen him reply to posts like these with brilliant comebacks, even if I can't personally see any footing for him to stand on. I'm not sure if he was simply annoyed by your attitude (I know I would be) or he really didn't have anything else to come back with. That leaves a little college freshman to take the reins for him.

 

For the attitude part, I'll just leave this here.

 

I reiterate, are you just this stupid?

 

 I eagerly await this half-baked response of yours. It's not cockiness when you actually are the best in the room. Bring someone better.

 
As for you taking over for him, I'll say "awesome". You not only provided sources for your opinions, you also did not insult me for asking a question. While you are not as entertaining as the self-destructing patrickjp93, you are, at the very least, far more objective in your thinking.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


My point is, you can't just throw out a number and use it as a performance metric when it cannot be proven. It could be 20%; it could also be higher or lower. We simply do not know.

My guess is that he was making an educated extrapolation from what has happened in the past. Perhaps there have been previous generations where the average iGPU improvement was 20% at the same EU count, but that too carries its own host of problems.

 

 

 

The only way for his "GT4e = 1.5 * 1.2 = 1.8" to be true is if that 20% number he pulled out of thin air is true. That 50% is also arbitrary. GT4e will have 50% more EUs, that is true, but the clock rate will not be the same: it has dropped from 1150 MHz to 1000 MHz, which means scaling will not be linear. Therefore I cannot believe GT4e is "50% faster" until I see it is "50% faster".

 

Realistically, I think GT4e will match the GTX 750 Ti. That is not a bad thing, contrary to what people seem to think when trying to convince me it will match the 950. The 750 Ti is already stronger than the mobile GTX 950M, and matches the GTX 960M in gaming. When I say GT4e will match the 750 Ti, I mean that as a real compliment.

 

 

For the attitude part, I'll just leave this here.

 
 
As for you taking over for him, I'll say "awesome". You not only provided sources for your opinions, you also did not insult me for asking a question. While you are not as entertaining as the self-destructing patrickjp93, you are, at the very least, far more objective in your thinking.

 

D'oh, I completely forgot to check the frequency that GT4e will run at. And again, this is speculation: architectural improvements could compensate for that deficit in clock speed, but I'm not gonna treat it as fact because we really don't know. I'd LIKE Skylake's GT4e to get close to the GTX 950, but at this point it doesn't seem likely.

 

I think this discussion is just about done. There are waaaaay too many variables at play here to accurately determine the exact performance of Skylake's best iGPU, so let's let things play out. I personally think that the minimum performance will be GTX 750 Ti levels, if Intel can keep up its track record of blazing improvements in integrated graphics. Memory bandwidth will still be problematic, though, since very few people are going to have 3200 MHz DDR4 RAM.  :lol:  I'm sorry, but I still find that funny. 

Why is the God of Hyperdeath SO...DARN...CUTE!?

 

Also, if anyone has their mind corrupted by an anthropomorphic black latex bat, please let me know. I would like to join you.


My guess is that he was making an educated extrapolation from what has happened in the past... -snip-

You have no idea how badly I want to be wrong about GT4e. I want it to be as amazing as Patrick thinks it will be, which is why I asked him to cite sources. If he had sources to prove his claims, I would probably hold off on buying a 6700K because of it.

BTW, the architectural improvement is that 1.2 number: he thinks the architecture will provide a 20% boost. He pulled 1.5 from the fact that the EU count was increased by 50%. His 20% came from a direct comparison between Broadwell GT2 (those mobile chips I talked about earlier) and Skylake GT2 (the current 6600K and 6700K iGPUs). Seeing as there are too many variables and not enough controls, I felt that the 20% number could not be considered fair, and I asked him to provide a link or source for it. Now that I know it was a guess, and the reasoning behind that guess, I can safely conclude that it won't be an accurate representation.

 

Also, DDR4 is actually getting faster. Those 2133 MHz kits you see running CL15 can be overclocked to 3200 MHz CL15 quite easily. Remember, DDR4 operates at 1.2 V stock, and the highly clocked kits are pushing 1.5-1.65 V, which means the slower kits still have plenty of overclocking headroom left to reach the speeds Patrick was talking about. DDR4 is slated to support up to 4266 MHz, so we are not finished yet. The theoretical peak would then be about 68,000 MB/s, or 68 GB/s: not quite hitting the GTX 950, but very close to the GTX 960M's 80 GB/s.
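Running the same dual-channel arithmetic from the earlier sketch against that ceiling (the 4266 figure is the post's; real kits may differ):

print(4266 * 8 * 2 / 1000)  # 68.256 -> ~68 GB/s at DDR4-4266, vs ~80 GB/s on a GTX 960M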

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


You have no idea how badly I want to be wrong about GT4e... -snip-

I really have nothing to say at this point, aside from the derp about the architectural improvement, but whatever. xD

Why is the God of Hyperdeath SO...DARN...CUTE!?

 

Also, if anyone has their mind corrupted by an anthropomorphic black latex bat, please let me know. I would like to join you.


It seems to me that the GTX 950 is about 58% faster than the GTX 750 in Firestrike... -snip-

We already have a 24% improvement between GT2 generations, Broadwell vs. Skylake. With Amdahl's-law scaling, 20% for GT3e should be more than reasonable before we even get eDRAM improvements involved.
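For readers unfamiliar with the reference: Amdahl's law caps the gain from scaling one resource by the fraction of the workload that actually uses it. A hedged Python sketch (the p values are illustrative, not measurements of any Intel GPU):

def amdahl(p, s):
    # p: fraction of frame time that scales with the widened resource
    # s: scaling factor of that resource (e.g. 1.5x the EUs)
    return 1 / ((1 - p) + p / s)

print(f"{amdahl(0.9, 1.5):.2f}x")  # ~1.43x if 90% of the workload scales
print(f"{amdahl(0.5, 1.5):.2f}x")  # 1.20x if only half of it does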

 

I started senior year this past Monday and didn't have time to reply to his drivel, as I was setting up capstone and research advisement meetings along with TA office hours, on top of homework and other planning. And seeing that his reply lacks any substance, I'm not going to, because he didn't defeat the points in mine.

 

And yes, NoteBookCheck is a terrible source with no consistency. Only amateurs, fanboys, or trolls use it with any degree of seriousness when attempting to support their points.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


We already have a 24% improvement between GT2 generations, Broadwell vs. Skylake... -snip-

Again, the only GT2 Broadwell SKUs that exist are mobile, and the only GT2 Skylake chips you have to compare them to are desktop. Any sane person who follows the scientific method will tell you that there are too many variables in this situation and a severe lack of controls. Your numbers can't be properly tested and are therefore impossible to replicate; you pulled them out of thin air. That is why you won't take me head on: you already know what will become of it. Once you show me, or anyone else on this forum, where that 24% (or 20%, for that matter) comes from, then we (I) will be more inclined to believe you. Until then, enjoy the ride.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Again, the only GT2 Broadwell SKUs that exist are mobile... -snip-

I miss the days of desktop motherboards for mobile CPUs.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Again, the only GT2 Broadwell SKUs that exist are mobile... -snip-

Same clock speed, same exact configuration, and with no thermal throttling, they'll run at exactly the same performance levels. Intel's iGPUs have always been less reactive to RAM speed differences too, which is quite convenient.

 

Nope, no variables at all, thanks to Apple's thermal designs. GT2 Broadwell at 1050 MHz vs. GT2 Skylake at 1050 MHz with zero throttling is a 24% difference. You have no reason to suspect that won't carry forward under Amdahl scaling, and you've provided no evidence to the contrary, which you owe at this point if you want anyone to take you seriously.

 

I've already told you. Go check the 3DMark databases if you like: Windows 8.1 dual-boot on the non-Retina Broadwell MacBook Air. No throttling, same clock speeds, no variables. Bon voyage, junior.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Same clock speed, same exact configuration, and with no thermal throttling, they'll run at exactly the same performance levels... -snip-

So you have benchmarks of someone dialing the Skylake desktop chips down to 3.3 GHz and testing the iGPU performance between the two? If so, why not use that as evidence, rather than just saying you've seen it?

 

Show me someone dialing the CPUs down to match each other's clock speeds; we will factor in the IPC improvement and then compare iGPU against iGPU. That will settle this once and for all. BTW, I have not once questioned the MacBook's throttling, so you do not need to keep bringing it up. I only questioned the difference in CPU frequencies skewing the results of any benchmark performed on them. Once you provide that evidence, I'll be happy.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


So you have benchmarks of someone dialing the Skylake desktop chips down to 3.3 GHz... -snip-

You don't need to at 4K. The performance difference and variance caused by such a change in the CPU clocks would be in the margin of error and you know it, because even big iron dGPUs are still the bottleneck at 4K. You're chasing a red herring, and no one here is falling for it.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You don't need to at 4K. The performance difference and variance caused by such a change in the CPU clocks would be in the margin of error and you know it, because even big iron dGPUs are still the bottleneck at 4K. You're chasing a red herring, and no one here is falling for it.

Show me a single person who has benched the Intel HD 530 at 4K. Go ahead, I'll wait. I've been searching non-stop for that ever since this argument started, and it has yet to turn up, simply because nobody seems to have done it. The same can be said of the Intel HD 5600 (the Broadwell GT2 iGPU you compared it to). Give me that piece of evidence. It's your only chance at winning this.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Show me a single person who has benched the Intel HD 530 at 4K. Go ahead, I'll wait. I've been searching non-stop for that ever since this argument started, and it has yet to turn up, simply because nobody seems to have done it. The same can be said of the Intel HD 5600 (the Broadwell GT2 iGPU you compared it to). Give me that piece of evidence. It's your only chance at winning this.

http://www.3dmark.com/search?_ga=1.159418683.786654399.1440966265#/?mode=advanced&url=/proxycon/ajax/search/gpu/fs/R/1045/500000?minScore=0&gpuName=Intel HD Graphics 530

 

http://www.3dmark.com/search?_ga=1.159418683.786654399.1440966265#/?mode=advanced&url=/proxycon/ajax/search/gpu/fs/R/981/500000?minScore=0&gpuName=Intel HD Graphics 5300 Mobile

 

http://www.3dmark.com/search?_ga=1.159418683.786654399.1440966265#/?mode=advanced&url=/proxycon/ajax/search/gpu/fs/R/1014/500000?minScore=0&gpuName=Intel HD Graphics 5500

 

Just because the database isn't updated to verify the new chips or drivers doesn't mean people haven't tried, genius. Also, it looks like that gap has gotten a lot wider: almost a 70% delta now.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Why do you keep mocking me with broken links? You did this in the other thread too, lol.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Why do you keep mocking me with broken links? You did this in the other thread too, lol.

They work properly if you uncheck the "Show Only Valid Results" thing. :D

Why is the God of Hyperdeath SO...DARN...CUTE!?

 

Also, if anyone has their mind corrupted by an anthropomorphic black latex bat, please let me know. I would like to join you.


So it has the same number of stream cores, a 200 MHz lower clock speed, and doesn't need an AIO to cool? Either Fiji's voltage scales horribly, or this chip is very highly binned.

Just remember: Random people on the internet ALWAYS know more than professionals, when someone's lying, AND can predict the future.

i7 9700K (5.2Ghz @1.2V); MSI Z390 Gaming Edge AC; Corsair Vengeance RGB Pro 16GB 3200 CAS 16; H100i RGB Platinum; Samsung 970 Evo 1TB; Samsung 850 Evo 500GB; WD Black 3 TB; Phanteks 350x; Corsair RM19750w.

 

Laptop: Dell XPS 15 4K 9750H GTX 1650 16GB Ram 256GB SSD



So it has the same number of stream cores, a 200 MHz lower clock speed, and doesn't need an AIO to cool? Either Fiji's voltage scales horribly,

Probably some binning, and also, for any architecture there is an optimal point for maximum efficiency. But the Fury X doesn't "need" an AIO cooler either; some people have unlocked a regular Fury to Fury X and it was fine on the air cooler. Shipping with the AIO was probably a play to differentiate the product, and it lets them win all the temperature and noise tests.
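The efficiency-point argument in rough numbers: dynamic power scales with voltage squared times frequency, so a modest clock cut that permits a voltage drop saves power superlinearly (a Python sketch; the voltages are illustrative guesses, not AMD's actual Fiji values):

def rel_power(v_new, f_new, v0=1.2, f0=1050):
    # dynamic power ~ C * V^2 * f, normalized to a Fury X-like operating point
    return (v_new / v0) ** 2 * (f_new / f0)

print(f"{rel_power(1.05, 850):.2f}x")  # ~0.62x the power for ~0.81x the clock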