There are certainly better ways of putting it than causing unrest in the forum. AMD is up there next to Haswell in core performance, yet people still reference Bulldozer.

While I agree with the first part, I'm not so sure about your second claim, as it doesn't cover their current product range.

 

Well, Bulldozer and its derivatives are the architecture used across the majority of AMD's product lines, desktop (Kaveri/Piledriver) and mobile alike. There really aren't many laptops featuring AMD's Kaveri flagships like the FX-7600P, so we're stuck with Richland-based parts like the 5757M, and the Carrizo APUs I've seen are ULV and don't compete with Intel's best efforts in CPU performance. Intel is still pushing the power-consumption envelope to where AMD can't reach them without a die shrink or a heavy performance hit.


Does LTT read my mind O.o 


4K CPU benchmarking? WTF, @Slick? This is really unprofessional and makes me question whether you guys should be informing the public about anything. Can you please consult your community more often with videos like these, instead of just using them as cash cows for clicks? Many people would have told you that 4K CPU benchmarking makes no sense, as does testing a game that can max out a 970 even on a 4670K running at 800 MHz (Tomb Raider 2013).

 

I mean, thinking that enabling only two cores on a 5960X is totally the same as a G3258 (disregarding the 20 MB vs 3 MB of cache and the fact that it's Haswell-E) already made me scratch my head, but I at least understood that it was easier and faster to test that way (instead of swapping CPUs), and it can still prove the point you were trying to make if you don't look at absolute values. But this just makes no sense, and any tech-savvy person who's been toying around with computers their whole life, like you guys have, should never make these rookie mistakes.

 

Either you're pandering to your "real-world benchmark"-demanding audience, or you've got no clue. I don't like either scenario.


Slick loves his 2600K at 5 GHz... good.

 

Honestly, Sandy Bridge is still perfectly capable of handling today's tasks, and that generation overclocks like a beast compared to the newer ones.

 

Let me put it this way: I've got a 3930K with a modest 4.6 GHz OC on it. I could go higher, but I see no point; I gain no benefit in the games I play by doing so. Still, after Ivy Bridge, Haswell, and likely even Broadwell have come out, I see no reason to go faster than my Sandy Bridge 3930K at 4.6 GHz.

 

So no need to upgrade yet, Luke.

 

The IPC increases of each generation since Sandy Bridge have "almost" been neutralised by the lower average OC you can get on the newer generations.

A Sandy Bridge at 5 GHz? Piece of cake.

A Haswell at 5 GHz, even a 4790K? Not so much. It's possible, but much harder to achieve for the average user on an average cooler.

 

A realistic comparison of OCs between a 2600K and a 4790K would be 5 GHz vs 4.6 GHz on the same decent air cooling. That difference in clocks would likely negate any IPC performance improvement between the generations.
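To put rough numbers on that, here's a back-of-the-envelope sketch in C. The ~15% Haswell-over-Sandy-Bridge IPC uplift is my own ballpark assumption, not a measured figure:

```c
#include <stdio.h>

int main(void) {
    /* Assumed relative IPC: Sandy Bridge = 1.00, Haswell ~ 1.15.
       Illustrative ballpark figures only, not measurements. */
    double ipc_sandy   = 1.00, ipc_haswell = 1.15;
    double clk_2600k   = 5.0,  clk_4790k   = 4.6; /* GHz, typical good-air OCs */

    double perf_2600k = ipc_sandy   * clk_2600k;
    double perf_4790k = ipc_haswell * clk_4790k;

    printf("2600K @ 5.0 GHz: %.2f units\n", perf_2600k);  /* 5.00 */
    printf("4790K @ 4.6 GHz: %.2f units\n", perf_4790k);  /* 5.29 */
    printf("Haswell edge:    %.1f%%\n",
           100.0 * (perf_4790k / perf_2600k - 1.0));      /* ~5.8% */
    return 0;
}
```

Under those assumptions the overclock closes most, though not quite all, of the IPC gap, which is roughly the point.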


At higher target frame rates, though, the Q6600 will suffer badly.


I think something is wrong here... 32 FPS on a 5960X/980? 

 

I expected the Q6600 to be shite, but there is NO way they got 32 FPS out of that setup.


I think something is wrong here... 32 FPS on a 5960X/980? 

 

I expected the Q6600 to be shite, but there is NO way they got 32 FPS out of that setup.

It was at 4K with four cores, no HT, and a clock nerf to 2.4 GHz. I'm not a big fan of 4K CPU benching, but I assume it's because 4K is becoming a standard.


4K CPU benchmarking? WTF, @Slick? This is really unprofessional and makes me question whether you guys should be informing the public about anything. Can you please consult your community more often with videos like these, instead of just using them as cash cows for clicks? Many people would have told you that 4K CPU benchmarking makes no sense, as does testing a game that can max out a 970 even on a 4670K running at 800 MHz (Tomb Raider 2013).

 

I mean, thinking that enabling only two cores on a 5960X is totally the same as a G3258 (disregarding the 20 MB vs 3 MB of cache and the fact that it's Haswell-E) already made me scratch my head, but I at least understood that it was easier and faster to test that way (instead of swapping CPUs), and it can still prove the point you were trying to make if you don't look at absolute values. But this just makes no sense, and any tech-savvy person who's been toying around with computers their whole life, like you guys have, should never make these rookie mistakes.

 

Either you're pandering to your "real-world benchmark"-demanding audience, or you've got no clue. I don't like either scenario.

 

Give this man a prize!

4K CPU benchmarking makes NO sense, lol; the games are GPU-bound.

According to this benchmark, the Pentium K is as good as a 4790K. LET'S ALL BUY PENTIUMS; who needs a 4790K?

Ridiculous..

[Attached image: Intel Pentium G3258 (Haswell 20th Anniversary) benchmark chart]


No offense to the LMG team, but this was one of those videos that was a complete fail.

 

Let me elaborate:

Benchmarks and games don't do this video justice. Even with the same clocks and core count, Luke forgot one important thing:

Instruction set extensions! The 5960X supports AVX/AVX2 and SSE4.x, while the Q6600 tops out at SSSE3.

 

It is like comparing apples to oranges. 

 

The video was supposed to be about whether "all MHz" are created equal, which technically they are: a megahertz is just a million cycles per second.

 

And it doesn't stop at instruction set extensions. What about microarchitectural changes, and how instructions are piped through the scheduler?

You guys missed out on some big points. 

 

Also, there was a fallacy in the benchmarking. If you truly wanted to compare MHz to MHz, you should have benchmarked using the same instruction set extensions on both chips, i.e. SSE3. I'd be curious to see what the difference in performance is when both chips run the same instruction-set code path.
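For anyone who wants to check this at home, here's a minimal sketch (GCC/Clang on x86; generic feature-detection code of my own, not anything from the video) that prints which extensions a chip actually exposes, so you could pick a common code path such as SSE3 before benchmarking:

```c
#include <stdio.h>

int main(void) {
    __builtin_cpu_init(); /* initialise the compiler's CPU feature data */
    /* A Q6600 (Kentsfield) should report yes only up to ssse3;
       a 5960X (Haswell-E) adds sse4.x, avx and avx2. */
    printf("sse3   : %s\n", __builtin_cpu_supports("sse3")   ? "yes" : "no");
    printf("ssse3  : %s\n", __builtin_cpu_supports("ssse3")  ? "yes" : "no");
    printf("sse4.1 : %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
    printf("sse4.2 : %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    printf("avx    : %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    printf("avx2   : %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    return 0;
}
```

Everything past ssse3 should come back "no" on the Q6600, which is exactly why a binary built for AVX2 can't be compared fairly across the two chips.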

 

Sorry about the rant; I just wanted to express my opinion that this video wasn't "up to snuff" (IMHO).

 

A bit more research, or even just raising these points in the video, would have gone a long way.


While I agree with the first part, I'm not so sure about your second claim, as it doesn't cover their current product range.

 

Well, Bulldozer and its derivatives are the architecture used across the majority of AMD's product lines, desktop (Kaveri/Piledriver) and mobile alike. There really aren't many laptops featuring AMD's Kaveri flagships like the FX-7600P, so we're stuck with Richland-based parts like the 5757M, and the Carrizo APUs I've seen are ULV and don't compete with Intel's best efforts in CPU performance. Intel is still pushing the power-consumption envelope to where AMD can't reach them without a die shrink or a heavy performance hit.

Technically they do compete, but only with first-generation mobile i3/i5/i7 parts.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

No offense to the LMG team, but this was one of those videos that was a complete fail.

 

Let me elaborate:

Benchmarks and games don't do this video justice. Even with the same clocks and core count, Luke forgot one important thing:

Instruction set extensions! The 5960X supports AVX/AVX2 and SSE4.x, while the Q6600 tops out at SSSE3.

 

It is like comparing apples to oranges. 

 

The video was supposed to be about whether "all MHz" are created equal, which technically they are: a megahertz is just a million cycles per second.

 

And it doesn't stop at instruction set extensions. What about microarchitectural changes, and how instructions are piped through the scheduler?

You guys missed out on some big points. 

 

Also, there was a fallacy in the benchmarking. If you truly wanted to compare MHz to MHz, you should have benchmarked using the same instruction set extensions on both chips, i.e. SSE3. I'd be curious to see what the difference in performance is when both chips run the same instruction-set code path.

 

Sorry about the rant; I just wanted to express my opinion that this video wasn't "up to snuff" (IMHO).

 

A bit more research, or even just raising these points in the video, would have gone a long way.

There actually is that point: what about instruction sets? I know from experience that if you're missing one, performance can suffer big time. Pentium IIIs lack SSE2; with it, they'd actually still be fast enough to suit low-end computers, judging by my dual PIII doing everything my main rig can do.

Edit: I'll upload my own benchmarks with a QX6850 and an i5 4440. I might even cut the i5 back to two cores and compare it to a Pentium E6500K.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

The lack of a control for the memory is an obvious methodology oversight.

 

 

DDR2 memory will be significantly slower than DDR4. Cinebench, for example, shows that it scales with RAM speed:

 

https://youtu.be/-RE92gW6rD4?t=85

 

Now if we consider that DDR4 is a significant jump from DDR2...

 

[Attached chart: Crucial DDR4 memory performance scaling]

 

I think you could certainly account for much of the score difference in Cinebench and WinRAR through the memory alone. The way this could be mostly controlled for would be dropping the speed of the newer system's memory.
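To actually put a number on the memory side, something like this STREAM-style triad would do. This is a rough sketch of my own for Linux/POSIX, not LMG's methodology; run it on both the DDR2 and DDR4 boxes and compare:

```c
/* gcc -O2 -o triad triad.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1L << 24) /* 16M doubles (~128 MB) per array: far larger than any L3 cache */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i]; /* triad: two loads + one store per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* Three arrays' worth of bytes moved; print a value so the loop isn't optimised away. */
    printf("~%.2f GB/s (check: %.1f)\n", 3.0 * N * sizeof(double) / 1e9 / secs, a[N / 2]);
    free(a); free(b); free(c);
    return 0;
}
```

A DDR2-800 Core 2 board and a DDR4-2133 X99 board will land in very different places here, and that difference was never separated from the CPUs themselves.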

 

Doing this test at 4K was also silly, as others have pointed out. That's going to minimize any difference between the two CPUs.


The lack of a control for the memory is an obvious methodology oversight.

 

 

DDR2 memory will be significantly slower than DDR4. Cinebench, for example, shows that it scales with RAM speed:

 

https://youtu.be/-RE92gW6rD4?t=85

 

Now if we consider that DDR4 is a significant jump from DDR2...

 

[Attached chart: Crucial DDR4 memory performance scaling]

 

I think you could certainly account for much of the score difference in Cinebench and WinRAR through the memory alone. The way this could be mostly controlled for would be dropping the speed of the newer system's memory.

 

Doing this test at 4K was also silly, as others have pointed out. That's going to minimize any difference between the two CPUs.

At what speed and latency does DDR2 match DDR3? I'm serious about doing this myself and putting the results up (a GTX 970 will be the card).

 

Edit: DDR3-800 vs DDR2-1066?

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

At what speed and latency does DDR2 match DDR3? I'm serious about doing this myself and putting the results up (a GTX 970 will be the card).

 

Edit: DDR3-800 vs DDR2-1066?

 

I would say the fastest DDR2 RAM you can find vs the slowest DDR3 RAM is about the best you can do.


I would say the fastest DDR2 RAM you can find vs the slowest DDR3 RAM is about the best you can do.

Hmm, it will have to do then; I can't get my DDR3 kit to run slower than 800 MHz, and I've heard that at a low latency such as 4-4-4-12, DDR2-800 is as fast as DDR2-1066 at a higher latency.
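FWIW, you can sanity-check kit pairings with the first-word latency formula: latency in ns = CL × 2000 / data rate in MT/s (the 2000 appears because the I/O clock in MHz is half the transfer rate). A quick sketch:

```c
#include <stdio.h>

/* First-word latency: ns = CL * 2000 / data rate (MT/s) */
static double latency_ns(int cl, int mts) { return cl * 2000.0 / mts; }

int main(void) {
    printf("DDR2-800  CL4: %4.1f ns\n", latency_ns(4, 800));  /* 10.0 */
    printf("DDR2-1066 CL5: %4.1f ns\n", latency_ns(5, 1066)); /*  9.4 */
    printf("DDR3-800  CL5: %4.1f ns\n", latency_ns(5, 800));  /* 12.5 */
    printf("DDR3-800  CL6: %4.1f ns\n", latency_ns(6, 800));  /* 15.0 */
    return 0;
}
```

By that measure, DDR2-800 at CL4 really is in the same ballpark as DDR2-1066 at CL5, while DDR3-800 at its typical CL5-6 actually has worse first-word latency and only wins on bandwidth.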

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

Thanks for the cinebench thread plug :D

Keep your 2600K; it can still keep up with Haswell single-thread performance when overclocked.


No, no they're not.

 

A comparison between Intel's latest CPUs and AMD's latest CPUs would be nice too.


The lack of a control for the memory is an obvious methodology oversight.

 

 

DDR2 memory will be significantly slower than DDR4. Cinebench, for example, shows that it scales with RAM speed:

 

https://youtu.be/-RE92gW6rD4?t=85

 

Now if we consider that DDR4 is a significant jump from DDR2...

 

 

 

I think you could certainly account for much of the score difference in Cinebench and WinRAR through the memory alone. The way this could be mostly controlled for would be dropping the speed of the newer system's memory.

 

Doing this test at 4K was also silly, as others have pointed out. That's going to minimize any difference between the two CPUs.

 

It's simple: faster RAM benefits bandwidth-intensive applications like file compression or image-editing software. Otherwise it won't have much of an effect, because of cache locality.
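A rough way to see that for yourself (a toy sketch of my own for Linux/POSIX, not a claim about any specific application): read the same 64 MB buffer once sequentially and once with a cache-hostile stride, and only the second run really exposes the memory subsystem.

```c
/* gcc -O2 -o locality locality.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1L << 24) /* 16M ints = 64 MB, well past any CPU cache */

static double sweep(const int *buf, long stride) {
    struct timespec t0, t1;
    volatile long sum = 0; /* volatile so the reads aren't optimised out */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < stride; s++)        /* every element is still read exactly once */
        for (long i = s; i < N; i += stride)
            sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sum;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    int *buf = malloc(N * sizeof *buf);
    if (!buf) return 1;
    for (long i = 0; i < N; i++) buf[i] = 1;

    printf("sequential (stride 1):       %.3f s\n", sweep(buf, 1));
    printf("cache-hostile (stride 4096): %.3f s\n", sweep(buf, 4096));
    free(buf);
    return 0;
}
```

The work is identical; only the access pattern changes. Workloads that walk memory like the first loop barely notice what RAM is underneath, which is roughly the point about cache locality.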


It's simple: faster RAM benefits bandwidth-intensive applications like file compression or image-editing software. Otherwise it won't have much of an effect, because of cache locality.

You'd be surprised.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

Surprise me, then.

I will; as I said, I'm going to redo this myself with more accurate setups.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

I will; as I said, I'm going to redo this myself with more accurate setups.

 

Cool. I'll be waiting.


Was the Q6600 running on 8 GB of DDR2 or DDR3? I just upgraded from a Xeon X3363 with 8 GB of DDR2 @ 800 MHz to an i7 4790K. My GPU is a GTX 750 Ti, and I migrated it over to the new system. On my old system, I could play Guild Wars 2 on low/medium at 1080p with 30 FPS being my max. On my new system, with the same GPU, I can play with everything on high at 45-70 FPS. I never realized how much my system was holding back my GPU until after the upgrade.


Was the Q6600 running on 8 GB of DDR2 or DDR3? I just upgraded from a Xeon X3363 with 8 GB of DDR2 @ 800 MHz to an i7 4790K. My GPU is a GTX 750 Ti, and I migrated it over to the new system. On my old system, I could play Guild Wars 2 on low/medium at 1080p with 30 FPS being my max. On my new system, with the same GPU, I can play with everything on high at 45-70 FPS. I never realized how much my system was holding back my GPU until after the upgrade.

I think it was DDR3.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites
