
Testing Cinebench R15

I debated whether to post this in the hardware or software section. Originally my intent was to investigate the hardware factors that affect Cinebench scores, but after starting, it became apparent things were the other way around: I was using the hardware to test the software.

 

Anyway, following are the test results so far.

 

Test system:

CPU: i7-6700K fixed at 4 GHz (turbo off), HT on or off, 2 or 4 cores.

RAM: 4 modules running at either 2133 15-15-15-35 or 3000 16-18-18-38, to represent standard RAM and mid-range performance RAM.

Mobo: MSI Gaming Pro, BIOS 1.9. EIST on, C-states off.

GPU: integrated

OS: Windows 7 64-bit

 

Cinebench was run 3 times for each configuration and the best score used.

 

Results (CPU as cores/threads, RAM speed, CPU score):

4C8T, 3000, 732

4C8T, 2133, 727

2C4T, 3000, 291

2C4T, 2133, 291

4C4T, 3000, 434

 

So, running 4 cores 8 threads, RAM at 3000 was 0.7% faster than at 2133, so pretty insignificant. Chopping down to 2 cores 4 threads, I'd expect the score to halve, but it was worse than that: a ratio of about 2.5. Similarly, I'm seeing 69% higher performance with HT on compared to off for 4 cores. This is wrong...
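For reference, here's the arithmetic behind those figures as a quick Python sketch (the variable names are just mine, for illustration):

```python
# Quick arithmetic behind the percentages above (scores from the list).
score_4c8t_3000 = 732
score_4c8t_2133 = 727
score_2c4t_3000 = 291
score_4c4t_3000 = 434

# RAM 3000 vs 2133 at 4C8T: ~0.7% faster
print(f"{score_4c8t_3000 / score_4c8t_2133 - 1:+.1%}")
# 4C8T vs 2C4T: ~2.52x, not the 2x you'd expect from halving cores
print(f"{score_4c8t_3000 / score_2c4t_3000:.2f}x")
# HT on vs off at 4 cores: ~69% higher, which is clearly wrong
print(f"{score_4c8t_3000 / score_4c4t_3000 - 1:+.1%}")
```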

 

I did note that, after starting Cinebench and doing nothing, Cinebench was taking up a fair bit of CPU. If this isn't released, it suggests it's eating into the power available to the benchmark. I'm wondering if the iGPU is a factor in this...

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


or you could post this in the Cinebench thread?

 

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


3 minutes ago, DXMember said:

or you could post this in the Cinebench thread?

I'm not submitting scores in that way, as screenshots would be redundant and pointlessly time-consuming. I want to spend the time looking at how things affect the score.

 

Having thought about it a little more, something isn't right, as this doesn't match other testing I did on another system. For now I can only suspect the iGPU is involved in some way, so I'll try to dig out a GPU and see if that affects results.



18 minutes ago, porina said:

I'm not submitting scores in that way, as screenshots would be redundant and pointlessly time-consuming. I want to spend the time looking at how things affect the score.

 

Having thought about it a little more, something isn't right, as this doesn't match other testing I did on another system. For now I can only suspect the iGPU is involved in some way, so I'll try to dig out a GPU and see if that affects results.

that's exactly why you have to post it in that thread

basically I manage to get about 3-5% better scores consistently by tweaking Windows services and device drivers

 

more cores and more threads are better, and we already know how the CPU affects performance,

here's a great article to learn how memory affects performance: https://www.akkadia.org/drepper/cpumemory.pdf


 


5 minutes ago, DXMember said:

here's a great article to learn how memory affects performance: https://www.akkadia.org/drepper/cpumemory.pdf

Looks like an interesting document; I'll look forward to spending time on that later. I have already done extensive testing on the RAM performance of Prime95, and that is very memory-sensitive. Not done there yet either! Cinebench doesn't seem to be as influenced.

 

Anyway, it does look like the iGPU was affecting things in some way. I have dug out and fitted an ancient 7300 GT, updated drivers, and the mysterious CPU usage has gone away. Time to re-run the results...

 

I'm still not convinced about getting involved in the main Cinebench thread at this stage. The requirement to post screenshots makes it a non-starter given that I want to test myriad configurations, and finding useful historical data in there doesn't look likely either. Perhaps once I have something more to go on.

 

I'm sure I'm retracing old ground here, and that this is something others have done before, but I want hard numbers to back up any conclusions.



7 minutes ago, porina said:

Looks like an interesting document; I'll look forward to spending time on that later. I have already done extensive testing on the RAM performance of Prime95, and that is very memory-sensitive. Not done there yet either! Cinebench doesn't seem to be as influenced.

 

Anyway, it does look like the iGPU was affecting things in some way. I have dug out and fitted an ancient 7300 GT, updated drivers, and the mysterious CPU usage has gone away. Time to re-run the results...

 

I'm still not convinced about getting involved in the main Cinebench thread at this stage. The requirement to post screenshots makes it a non-starter given that I want to test myriad configurations, and finding useful historical data in there doesn't look likely either. Perhaps once I have something more to go on.

 

I'm sure I'm retracing old ground here, and that this is something others have done before, but I want hard numbers to back up any conclusions.

P95 has multiple settings; some don't use RAM at all, some are heavy on RAM

 

you should post there, or at least post a link to this thread in there, so people can see your test results and we can all benefit and tweak for higher scores


 


2 minutes ago, DXMember said:

P95 has multiple settings; some don't use RAM at all, some are heavy on RAM

 

Been there, done that. I have a good model of how Prime95-like tasks are affected by the balance between total CPU power and general RAM bandwidth. Real Prime95 tasks use 4096K FFTs, which are very memory-influenced. I'm not totally done yet, as I'm still missing a good understanding of why dual-rank RAM can often have a far greater impact on performance than simple bandwidth measures suggest, and I know it isn't significantly affected by latency. If you can add to that, it would be worth resurrecting that thread.

 

Anyway, this thread is about Cinebench, so back to testing that...



If you find that my hardware could be interesting for your results, I'm OK with running some tests.


 


Re-run results (cores/threads, RAM speed and channels/ranks, CPU score):

4C8T, 3000 2ch 2R, 891

4C8T, 2133 2ch 2R, 887

4C8T, 2133 1ch 1R, 878

2C4T, 3000 2ch 2R, 449

2C4T, 2133 2ch 2R, 448

4C4T, 3000 2ch 2R, 684

 

Ok, these results look better.

 

For 4C8T, 3000 RAM is 0.5% faster than 2133 dual channel dual rank, and 1.5% faster than 2133 single channel single rank, so RAM speed really has minimal impact here. Maybe more cores and/or much higher CPU clocks will start demanding more bandwidth and show a greater difference.

 

Comparing 2C4T to 4C8T at 3000 RAM speed, the latter gives around 98% more performance, so good scaling here. The minor shortfall is probably a manifestation of the RAM influence again, as halving the cores doubles the bandwidth available per core.

 

Comparing hyper-threading, i.e. 4C4T to 4C8T, gives about a 30% boost. Not bad.
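Same arithmetic as before, this time over the full re-run set; a quick Python sketch (the labels are just mine, not anything Cinebench outputs):

```python
# Percent differences for the re-run results above.
scores = {
    "4C8T 3000 2ch 2R": 891,
    "4C8T 2133 2ch 2R": 887,
    "4C8T 2133 1ch 1R": 878,
    "2C4T 3000 2ch 2R": 449,
    "2C4T 2133 2ch 2R": 448,
    "4C4T 3000 2ch 2R": 684,
}

def gain(a: str, b: str) -> float:
    """Fractional gain of configuration a over configuration b."""
    return scores[a] / scores[b] - 1

print(f"{gain('4C8T 3000 2ch 2R', '4C8T 2133 2ch 2R'):+.1%}")  # +0.5%, RAM speed
print(f"{gain('4C8T 3000 2ch 2R', '4C8T 2133 1ch 1R'):+.1%}")  # +1.5%, vs 1ch 1R
print(f"{gain('4C8T 3000 2ch 2R', '2C4T 3000 2ch 2R'):+.1%}")  # +98%, core scaling
print(f"{gain('4C8T 3000 2ch 2R', '4C4T 3000 2ch 2R'):+.1%}")  # +30%, HT boost
```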

 

I'm sure regular Cinebenchers already know this, but the general conclusion would be: more total CPU power (cores × clock) helps first. RAM has a minor influence, but faster is still better.

 

Is it worth me running a similarly clocked 6600K? The difference there would be having only 6MB of L3 cache compared to 8MB on the i7, if we disregard the HT bonus. Actually, I could also repeat this on other architectures and see what the generational scaling is. Back in a bit :)



OK, lots of testing done. The following is a summary of the observations:

 

Hyper-threading - how much does it help? On Skylake, I saw a 31% boost for on vs off with 4 cores, or 28% if you normalise to 1 core. Haswell was similar, with 33% and 28% respectively, although I'm using an i3 and an i5 here. For a mobile Sandy Bridge i7, 29% and 19% respectively.

 

Skylake cache - putting aside HT, does having 8MB of L3 cache on the i7 help over 6MB on the i5? Nope. Results were practically identical.

 

Generational differences - based on single-thread results, which are least limited by other factors, Skylake is about 7.4% faster than Broadwell, Broadwell is about 3.9% faster than Haswell, and Haswell is about 8.5% faster than Sandy Bridge.

 

Still with single-thread results, they do seem to normalise well to the following points per GHz of CPU clock:

Skylake 43.9 (56.1 with HT)

Broadwell 40.9

Haswell 39.3 (50.3 with HT)

Sandy Bridge 36.3 (43.2 with HT)

This could also be multiplied up to predict where multi-core scores end up, but that won't take into consideration less-than-perfect scaling.
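To illustrate multiplying the per-GHz figures up, here's a minimal Python sketch (the predict helper is mine, purely illustrative); it assumes perfect scaling, so it should come out slightly high against real multi-core runs:

```python
# Single-thread points per GHz from the list above (no HT, with HT where tested).
PER_GHZ = {
    "Skylake":      (43.9, 56.1),
    "Broadwell":    (40.9, None),
    "Haswell":      (39.3, 50.3),
    "Sandy Bridge": (36.3, 43.2),
}

def predict(arch: str, cores: int, ghz: float, ht: bool = False) -> float:
    """Predicted multi-core score: per-GHz rate * cores * clock (perfect scaling)."""
    no_ht, with_ht = PER_GHZ[arch]
    return (with_ht if ht else no_ht) * cores * ghz

# 6700K at 4.0 GHz, 4C8T: predicted ~898 vs the 891 measured earlier (~0.7% high).
print(f"{predict('Skylake', cores=4, ghz=4.0, ht=True):.0f}")

# The "normalised to 1 core" HT boosts fall out of the same numbers:
for arch, (no_ht, with_ht) in PER_GHZ.items():
    if with_ht is not None:
        print(f"{arch}: {with_ht / no_ht - 1:.0%}")  # ~28%, ~28%, ~19%
```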

 

On two systems (Skylake, Broadwell) with Windows 7 and Intel integrated graphics, I saw Cinebench use up a ton of CPU even when the benchmark wasn't running. This crippled results, and the effect disappeared after I installed a separate GPU. However, a Sandy Bridge system didn't show this, nor did a Haswell Win10 system. I didn't dig into this further.

 

Test systems:

i7-6700K 4.0 GHz, 3000 dual channel dual rank

i7-6700K 4.2 GHz, 3200 dual channel dual rank

i5-6600K 3.6-4.0 GHz, 3000 dual channel single rank

i5-5675C 3.5 GHz, 1600 dual channel dual rank

i5-4570S 3.2-3.5 GHz, 1600 dual channel dual rank

i3-4150T 3.0 GHz, 1600 dual channel single rank

i7-2620M 3.2-3.4 GHz, probably 1333 dual channel but can't be sure

 

Next step: I will go into the mega benchmark thread and see how the above observations fit with other submitted results.


