Intel Skylake Core i7-6700K Gaming Performance Benchmarks

HKZeroFive

I'll likely upgrade my 2500k to a Skylake CPU.

I've been itching to upgrade and this'll also allow me to build one of my friends a nice PC with all the spare parts I have.


I'll likely upgrade my 2500k to a Skylake CPU.

I've been itching to upgrade and this'll also allow me to build one of my friends a nice PC with all the spare parts I have.

Even if the CPU itself isn't that large of a performance difference, the motherboard features should be quite a nice gain in my eyes. M.2, USB 3.1 (some boards have it), and the additional PCIe lanes should prove quite useful.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


More interested in the integrated graphics...

Looking forward to the day when a CPU's integrated graphics are as powerful as the latest x60 Nvidia and x70 AMD video cards.

That's the day I can get rid of the tower entirely and just mount a small gaming-capable ZBOX on the back of a huge 21:9 monitor with all the adaptive sync and such.


Well, we know Skylake is now /disappoint.

Iris Pro is good, but at this point Intel will be a hard sell for budget builds because DDR4 prices are still a bit higher than DDR3. Which means the overall cost for Intel, despite the excellent iGPU, will be a tad high, at least for now.

By Christmas, Intel will surpass AMD for budget builds thanks to lower DDR4 prices (faster adoption and more volume sales mean a faster decline in price).

 

IT ALL RIDES ON ZEN. AMD, the ball is in your court.

 

 

Looking at the rankings list...

 
1.  29607 KB/s
2.  25042 KB/s
3.  24214 KB/s
4.  22349 KB/s
5.  22065 KB/s
6.  16557 KB/s
7.  15028 KB/s
8.  13886 KB/s
9.  10443 KB/s
10. 10417 KB/s
 
 
Wait... rank 9 is an FX-8320? Yes, it's not a world rank, but look at those speeds (and look at that OC... WTF). I knew 4.5 to 4.6GHz was common (my own FX happily did 4.52GHz), 4.7GHz was uncommon but not unheard of, and 4.8GHz and above was pretty rare... but a 5GHz stable OC, ON AIR?!? What is this sorcery?
 

Those benchmarks seem fishy. Why is the i7-5820K getting beaten in Far Cry 4?

It's the lower clock. It falls behind in most games because its six cores are underutilized while it runs at a lower clock speed.

Also, that Photoshop benchmark, rofl. Beating everything by miles.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistix Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


I still think your money would be better spent on X99 rather than Z170 if you're going for flexibility. (Also, X99 mobos seem to look better.)

As for the Photoshop benchmark, I'm not sure how they tested it. Did they overclock it but not the others, or what's the deal with that?

If they're comparing at stock speeds, that's not quite fair, considering the 5820K can go to at least 4.5GHz.

a Moo Floof connoisseur and curator.

:x@handymanshandle x @pinksnowbirdie || Jake x Brendan :x
Youtube Audio Normalization
 

 

 


that's very disappointing. :(

Recovering Apple addict

 

ASUS Zephyrus G14 2022

Spoiler

CPU: AMD Ryzen 9 6900HS GPU: AMD Radeon 680M / RX 6700S RAM: 16GB DDR5 

 


I still think your money would be better spent on X99 rather than Z170 if you're going for flexibility. (Also, X99 mobos seem to look better.)

As for the Photoshop benchmark, I'm not sure how they tested it. Did they overclock it but not the others, or what's the deal with that?

Remember, Skylake has this thing called eDRAM and also newer instruction sets, so perhaps Photoshop makes very good use of one of them.

As for overclocking, all of the CPUs are at stock if you read the article.


that's very disappointing. :(

Not to be a downer, but I don't think many people realize how terribly underutilized CPUs are by most programs.

Lazy devs.


Typical Intel, giving us crap for an upgrade... unless it can reach the rumored overclocks of 5.2GHz and up, I'm not upgrading for a long time.

CPU: i7-4770K | GPU: MSI reference R9 290X, liquid-cooled with H55 and HG10 A1 | Motherboard: Z97X Gaming 5 | RAM: G.Skill Sniper 8GB


I still think your money would be better spent on X99 rather than Z170 if you're going for flexibility. (Also, X99 mobos seem to look better.)

As for the Photoshop benchmark, I'm not sure how they tested it. Did they overclock it but not the others, or what's the deal with that?

If they're comparing at stock speeds, that's not quite fair, considering the 5820K can go to at least 4.5GHz.

Most people do not overclock... but an argument could also be made that it's not fair to compare a 4-core vs a 6-core, a $300 vs a $450 CPU...

Once you start overclocking, nothing is fair, since not everyone will get the same results.


Those benchmarks seem fishy. Why is the i7-5820K getting beaten in Far Cry 4?

The single-core performance of higher-core-count CPUs usually isn't as good as a 4790K's. It has more cores, but games rarely use more than two threads, so really good single-core performance matters more for gaming.
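The core-count tradeoff above can be framed with Amdahl's law (a textbook formula, not something from the article): if only a fraction p of a game's per-frame work can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n). A quick illustrative sketch:

```python
def amdahl_speedup(p, n):
    """Overall speedup from n cores when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A game whose work is only 30% parallel barely benefits from extra cores:
for cores in (2, 4, 6):
    print(cores, round(amdahl_speedup(0.3, cores), 2))  # roughly 1.18, 1.29, 1.33
```

With p that low, a six-core part gains almost nothing over a quad-core, so a higher-clocked quad like the 4790K comes out ahead in games; the fraction 0.3 is purely illustrative.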

Cpu: Ryzen 2700 @ 4.0Ghz | Motherboard: Hero VI x370 | Gpu: EVGA RTX 2080 | Cooler: Custom Water loop | Ram: 16GB Trident Z 3000MHz

PSU: RM650x + Braided cables | Case:  painted Corsair c70 | Monitor: MSI 1440p 144hz VA | Drives: 500GB 850 Evo (OS)

Laptop: 2014 Razer blade 14" Desktop: http://imgur.com/AQZh2sj , http://imgur.com/ukAXerd

 


Most people do not overclock... but an argument could also be made that it's not fair to compare a 4-core vs a 6-core, a $300 vs a $450 CPU...

Once you start overclocking, nothing is fair, since not everyone will get the same results.

Ah, true.

Either way, I think the 5820K is still a good option even if it hasn't been replaced by Broadwell-E or Skylake-E by mid-2016.


People still expect CPUs to get exponentially faster every year... Ha, not until main memory gets SRAM-class speed and cache misses become less costly because of it.


Not to be a downer, but I don't think many people realize how terribly underutilized CPUs are by most programs.

Lazy devs.

Not really underutilized, more that programmers just don't care about the performance of their programs. Not many devs look at the disassembly produced by the compiler and compare it to the instructions that would actually need to execute for a given piece of code. Not many people think about maximally utilizing the cache, or about writing their algorithms in a way that best fits performance instead of whatever programming paradigm they mentally subscribe to. The rise of higher-level, interpreted, garbage-collected languages that run in virtual machines doesn't help this in any way.

Sadly, very few devs care about the performance of their software, because they would rather have the user's time spent executing their slow software than their own time spent writing better code. In a world where CPUs got exponentially faster each year, data workloads didn't grow as much, and resources such as power weren't constrained on mobile devices, this would be less of a problem. Sadly, none of those things are true: CPUs have stagnated in the past few years, memory workloads keep getting bigger, and the fastest-growing part of the industry (mobile) has severe resource constraints.
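The cache point is easy to demonstrate with a quick sketch (an illustrative NumPy example, not anything from the thread): summing the same row-major matrix along its contiguous rows versus its strided columns does identical arithmetic, yet the access pattern alone changes the speed, because row traversal uses whole cache lines while column traversal mostly wastes them.

```python
import time
import numpy as np

# Row-major matrix (NumPy's default 'C' order): each row is contiguous in RAM.
a = np.random.rand(2000, 2000)

def sum_by_rows(m):
    # Sequential, cache-friendly access: every fetched cache line is fully used.
    return sum(float(row.sum()) for row in m)

def sum_by_cols(m):
    # Strided access: each column walks across rows with a large stride,
    # so most of every fetched cache line goes unused.
    return sum(float(m[:, j].sum()) for j in range(m.shape[1]))

t0 = time.perf_counter(); r1 = sum_by_rows(a); t1 = time.perf_counter()
r2 = sum_by_cols(a); t2 = time.perf_counter()
print(f"rows {t1 - t0:.3f}s  cols {t2 - t1:.3f}s")  # same result, different speed
```

The two results agree to floating-point tolerance; only the memory-access order differs, which is exactly the kind of performance detail the post says most devs never think about.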


Not really underutilized, more that programmers just don't care about the performance of their programs. Not many devs look at the disassembly produced by the compiler and compare it to the instructions that would actually need to execute for a given piece of code. Not many people think about maximally utilizing the cache, or about writing their algorithms in a way that best fits performance instead of whatever programming paradigm they mentally subscribe to. The rise of higher-level, interpreted, garbage-collected languages that run in virtual machines doesn't help this in any way. Sadly, very few devs care about the performance of their software, because they would rather have the user's time spent executing their slow software than their own time spent writing better code. In a world where CPUs got exponentially faster each year, data workloads didn't grow as much, and resources such as power weren't constrained on mobile devices, this would be less of a problem. Sadly, none of those things are true: CPUs have stagnated in the past few years, memory workloads keep getting bigger, and the fastest-growing part of the industry (mobile) has severe resource constraints.

Underutilized is perfectly valid IMO, given that almost no consumer programs exist that even take advantage of two or more cores (gaming is a massive example of this issue), and devs are perfectly happy to use old, inefficient instruction sets instead. If consumer programs actually leveraged the full suite of instruction sets and cores available to them, I guarantee there would be massively increased incentives for, say, Intel to start producing 6-8 core consumer chips or to actually bring AVX-512 forward.

Eventually, however, we will get (and we're actually approaching this fairly rapidly) to the point where the only way to increase performance will be pushing clock speeds and voltages up (although silicon isn't likely to be the medium that allows it), which will be a very interesting time. GHz wars 2.0.


Why are we trusting shady Chinese websites and their "benchmarks" again?

CPU: i5 4670k | Motherboard: MSI B85I | Stock cooler | RAM: 8gb DDR3 RAM 1600mhz | GPU: EVGA GTX 770 Superclocked w/ACX cooling | Storage: 1TB Western Digital Caviar Black | Case: Fractal Design Define R4 w/ Window


Why are we trusting shady Chinese websites and their "benchmarks" again?

PCOnline has leaked quite a few accurate benchmarks before, one of them being the 290X before it was released. I've also added some images of the CPU itself from their website. It wouldn't make sense for them to have the actual CPU and then put out dodgy benchmarks.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.

Spoiler

CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


Ah true.

Either way I think if the 5820K hasn't been updated to Broadwell-E or Skylake-E by mid 2016 it's still a good option.

If I were building from scratch today, I would agree with you there. My previous LGA 1366 build served me very well for 6 years: an i7 920 at 3.8GHz with SpeedStep and all the power-saving features enabled. Not bad, IMO.

The system still works; it's just sitting in a closet enjoying retirement :-)


PCOnline has leaked quite a few accurate benchmarks before, one of them being the 290X before it was released. I've also added some images of the CPU itself from their website. It wouldn't make sense if they have the actual CPU and just put out dodgy benchmarks.

 

At least it's not TechBang - Paul from Newegg, 2015



They use WinRAR as a benchmark? lol

Data compression is serious business. Haven't you watched Silicon Valley?


They use WinRAR as a benchmark? lol

WinRAR is a very good overall performance indicator that almost everyone uses.

Plus, it actually makes use of hyper-threading, which makes it one of the few applications where you can see a great performance increase (up to 40-50%) when comparing a 4690K vs a 4790K at the same clock, and one that everyone can relate to.
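That kind of scaling is easy to sketch. WinRAR's internals aren't shown here, so the example below uses Python's zlib as a stand-in block compressor; in CPython, zlib releases the GIL while compressing large buffers, so a thread pool can genuinely overlap work the way a hyper-threading-aware archiver does. The chunk size and worker counts are illustrative.

```python
import os
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

# Illustrative payload: 1 MB of incompressible noise plus 3 MB of zeros.
data = os.urandom(1 * 1024 * 1024) + bytes(3 * 1024 * 1024)
CHUNK = 256 * 1024
chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def compress_all(workers):
    # Compress each chunk independently, like a block-based archiver.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: zlib.compress(c, 6), chunks))

for w in (1, 4):
    t0 = time.perf_counter()
    out = compress_all(w)
    print(f"{w} worker(s): {time.perf_counter() - t0:.3f}s")

# Compression is lossless regardless of how many threads did the work.
assert b"".join(zlib.decompress(c) for c in out) == data
```

On a CPU with spare hardware threads, the 4-worker run typically finishes faster than the 1-worker run, which is the same effect the benchmark exploits when comparing an i5 against a hyper-threaded i7.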


*sprinkles some salt over the body*

Savage

4690K // 212 EVO // Z97-PRO // Vengeance 16GB // GTX 770 GTX 970 // MX100 128GB // Toshiba 1TB // Air 540 // HX650

Logitech G502 RGB // Corsair K65 RGB (MX Red)


