
Intel Preparing Multiple Hexacore Coffee Lake CPUs

34 minutes ago, Zodiark1593 said:

With fast RAM, Skylake stretches a bit further. Haswell didn't seem to care too much about RAM so long as it wasn't ridiculously slow. 

That's because DDR4 can maintain higher clocks without sacrificing timings like there's no tomorrow. Then there's the iGPU and its resources being able to help with certain tasks. That's why the 5775C can punch above its weight and outpace a 6700K/7700K at the same or slightly lower clocks in certain tasks (128 MB of eDRAM used as L4 cache).

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


2 hours ago, Drak3 said:

That's because DDR4 can maintain higher clocks without sacrificing timings like there's no tomorrow. Then there's the iGPU and its resources being able to help with certain tasks. That's why the 5775C can punch above its weight and outpace a 6700K/7700K at the same or slightly lower clocks in certain tasks (128 MB of eDRAM used as L4 cache).

I'd like to get my hands on a 5775C myself, perhaps more so than a 4790K, though the expense is quite high. The uniqueness factor is almost worth the premium. 

My eyes see the past…

My camera lens sees the present…


3 hours ago, Zodiark1593 said:

With fast RAM, Skylake stretches a bit further. Haswell didn't seem to care too much about RAM so long as it wasn't ridiculously slow. 

That's not really true either. Both Haswell and Broadwell scaled just as well with memory overclocking as Skylake and Kaby do. I don't get why people act like this is some sort of recent thing that is exclusive to newer architectures.

4 hours ago, PCGuy_5960 said:

Skylake's IPC is 10% better than Broadwell's, so it's not impossible. I believe we will see a 5-10% IPC increase with Coffee Lake :D

It's not 10%. It's lucky to be 7%. Considering Coffee Lake is a refinement, just like Devil's Canyon was to Haswell, I am once again not expecting an IPC gain. It's still 14nm, it's still based on Skylake, and at best they've improved efficiency a little.

 

We normally only see IPC gains when Intel brags heavily about it. They've yet to brag about IPC at all for Coffee Lake.

5 hours ago, PCGuy_5960 said:

We can't be 100% sure, though... Anyway, I made this based on the leaks:

[Attached image: 8700Kcb.png]

The 8700K will be a very fast CPU :D

This is some heavy extrapolation to assume 5.3GHz AND an IPC boost. Someone is setting themselves up for disappointment early on, lol.
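For perspective, that kind of chart is just arithmetic on assumed inputs. Here is a minimal sketch of the back-of-envelope math behind such a projection; every clock, IPC, and scaling figure below is an assumption, and the baseline score is a placeholder rather than a real measurement:

```python
# Hypothetical projection math, not a benchmark. Plug in a real baseline run to use it.

def project_score(baseline_score, baseline_clock_ghz, target_clock_ghz,
                  ipc_gain=0.0, core_scaling=1.0):
    """Scale a baseline multi-core score by clock ratio, an assumed IPC gain,
    and a core-count scaling factor (1.5 would be perfect 4c -> 6c scaling)."""
    return baseline_score * (target_clock_ghz / baseline_clock_ghz) * (1.0 + ipc_gain) * core_scaling

baseline = 1000.0  # placeholder score for a 7700K at its 4.5 GHz all-core boost
optimistic   = project_score(baseline, 4.5, 5.3, ipc_gain=0.05, core_scaling=1.5)
conservative = project_score(baseline, 4.5, 5.0, ipc_gain=0.00, core_scaling=1.4)
print(f"optimistic:   {optimistic:.0f}")    # ~1855 with these assumptions
print(f"conservative: {conservative:.0f}")  # ~1556 with these assumptions
```

The gap between those two lines is exactly why assuming both 5.3GHz and an IPC gain bakes a lot of optimism into one bar.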

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


6 minutes ago, MageTank said:

This is some heavy extrapolation to assume 5.3GHz AND an IPC boost. Someone is setting themselves up for disappointment early on, lol.

Well, it is 14nm++, so 5.3GHz may be possible. :D And I hope that we will get a 5% IPC improvement.

CPU: Intel Core i7-5820K | Motherboard: AsRock X99 Extreme4 | Graphics Card: Gigabyte GTX 1080 G1 Gaming | RAM: 16GB G.Skill Ripjaws4 2133MHz | Storage: 1 x Samsung 860 EVO 1TB | 1 x WD Green 2TB | 1 x WD Blue 500GB | PSU: Corsair RM750x | Case: Phanteks Enthoo Pro (White) | Cooling: Arctic Freezer i32

 

Mice: Logitech G Pro X Superlight (main), Logitech G Pro Wireless, Razer Viper Ultimate, Zowie S1 Divina Blue, Zowie FK1-B Divina Blue, Logitech G Pro (3366 sensor), Glorious Model O, Razer Viper Mini, Logitech G305, Logitech G502, Logitech G402


Just now, PCGuy_5960 said:

Well, it is 14nm++, so 5.3GHz may be possible. :D And I hope that we will get a 5% IPC improvement.

You have to understand that as the core count increases, it becomes increasingly difficult to increase the core clock. To expect better overclocking than what Kaby Lake averaged, while also adding 50% more cores, seems like a bit much. I would imagine 5-5.1GHz. Maybe 5.2GHz from a golden bin under the best custom loops, but even that might be pushing it.

 

If you are interested in patterns, Intel did say "15% over Kaby" in Sysmark, just like they did last year with Kaby over Skylake:

 

[Image: Intel-8th-Generation-Core-i7-8000-Series slide]

 

However, this data is always deceptive. It's derived from mobile processors, with a clear difference in core clocks AND turbo behavior. The reason Kaby was 15% faster than Skylake was a 7% difference in core clocks AND superior Speed Shift. The fact that they are making these claims again tells me we are probably seeing the exact same thing: another clock speed difference and a slight alteration to turbo functionality to improve responsiveness.

 

I don't really expect an IPC boost until Cannon Lake, when we get the 10nm shrink. Even then, I am not expecting that big of a difference. We've reached a point where it's become increasingly difficult to pull IPC out of thin air (not to mention we all have a twisted idea of what IPC actually means). Either way, if we get 6 cores and 12 threads that can do similar clock speeds to Kaby, and hopefully have it be compatible with current chipsets, that would be plenty for me. If this requires new chipsets and new motherboards, it's going to fail pretty spectacularly in my eyes. After all, this marks the 4th 14nm architecture Intel has pushed out, and it's getting rather stale at this point. If people are going to have to buy a new board/chipset for 6 cores/12 threads, they may as well invest in the X299 platform at that point.


17 hours ago, Jito463 said:

The thing is, if Intel did that (which I doubt they would), it could spark a real price war between AMD and Intel.  The margins AMD has using their modular system in Ryzen - coupled with the 80%+ of fully working Zen dies - could allow AMD to drop prices considerably and still make a profit.  Intel is still using a monolithic design, which works for them, but is more expensive to produce.  There's no doubt that AMD could continually drop prices lower than Intel would be willing - or able - to do.

That will probably change as EUV fabs start coming online, though. One of the reasons for poor yields in Intel's process is that they have to use extreme multi-patterning, because the 193nm ArF light source can't resolve 10nm-class features on its own. That adds a lot of cost. Once EUV is functioning at scale, designing and manufacturing chips with much smaller features should become considerably cheaper.

Corsair 600T | Intel Core i7-4770K @ 4.5GHz | Samsung SSD Evo 970 1TB | MS Windows 10 | Samsung CF791 34" | 16GB 1600 MHz Kingston DDR3 HyperX | ASUS Formula VI | Corsair H110  Corsair AX1200i | ASUS Strix Vega 56 8GB Internet http://beta.speedtest.net/result/4365368180


1 hour ago, MageTank said:

You have to understand that as the core count increases, it becomes increasingly difficult to increase the core clock. To expect better overclocking than what Kaby Lake averaged, while also adding 50% more cores, seems like a bit much. I would imagine 5-5.1GHz. Maybe 5.2GHz from a golden bin under the best custom loops, but even that might be pushing it.

 

If you are interested in patterns, Intel did say "15% over Kaby" in Sysmark, just like they did last year with Kaby over Skylake:

 

[Image: Intel-8th-Generation-Core-i7-8000-Series slide]

 

However, this data is always deceptive. It's derived from mobile processors, with a clear difference in core clocks AND turbo behavior. The reason Kaby was 15% faster than Skylake was a 7% difference in core clocks AND superior Speed Shift. The fact that they are making these claims again tells me we are probably seeing the exact same thing: another clock speed difference and a slight alteration to turbo functionality to improve responsiveness.

 

I don't really expect an IPC boost until Cannon Lake, when we get the 10nm shrink. Even then, I am not expecting that big of a difference. We've reached a point where it's become increasingly difficult to pull IPC out of thin air (not to mention we all have a twisted idea of what IPC actually means). Either way, if we get 6 cores and 12 threads that can do similar clock speeds to Kaby, and hopefully have it be compatible with current chipsets, that would be plenty for me. If this requires new chipsets and new motherboards, it's going to fail pretty spectacularly in my eyes. After all, this marks the 4th 14nm architecture Intel has pushed out, and it's getting rather stale at this point. If people are going to have to buy a new board/chipset for 6 cores/12 threads, they may as well invest in the X299 platform at that point.

Apparently it is compatible with Z270. There's still no word on Z170, but I'd assume it will work with a BIOS update, just like any other refinement.

i7 6700K @ Stock (Yes I know) ~~~ Corsair H80i GT ~~~ GIGABYTE G1 Gaming Z170X Gaming 7 ~~~ G. Skill Ripjaws V 2x8GB DDR4-2800 ~~~ EVGA ACX 3.0 GTX 1080 SC @ 2GHz ~~~ EVGA P2 850W 80+ Platinum ~~~ Samsung 850 EVO 500GB ~~~ Crucial MX200 250GB ~~~ Crucial M500 240GB ~~~ Phanteks Enthoo Luxe


13 minutes ago, Noirgheos said:

Apparently it is compatible with Z270. There's still no word on Z170, but I'd assume it will work with a BIOS update, just like any other refinement.

Any source on that? The only thing I've seen was a Geekbench result that showed a Z270 chipset, but DMI strings can easily be edited. Not only that, it wouldn't surprise me if Intel tested on Z270 but artificially locked it to a 300-series chipset. They are heavy on artificial locks.


Just now, MageTank said:

Any source on that? The only thing I've seen was a Geekbench result that showed a Z270 chipset, but DMI strings can easily be edited. Not only that, it wouldn't surprise me if Intel tested on Z270 but artificially locked it to a 300-series chipset. They are heavy on artificial locks.

 

That's it. They were right about Ryzen a while back.


1 hour ago, MageTank said:

You have to understand that as the core count increases, it becomes increasingly difficult to increase the core clock. To expect better overclocking than what Kaby Lake averaged, while also adding 50% more cores, seems like a bit much. I would imagine 5-5.1GHz. Maybe 5.2GHz from a golden bin under the best custom loops, but even that might be pushing it.

 

If you are interested in patterns, Intel did say "15% over Kaby" in Sysmark, just like they did last year with Kaby over Skylake:

 

[Image: Intel-8th-Generation-Core-i7-8000-Series slide]

 

However, this data is always deceptive. It's derived from mobile processors, with a clear difference in core clocks AND turbo behavior. The reason Kaby was 15% faster than Skylake was a 7% difference in core clocks AND superior Speed Shift. The fact that they are making these claims again tells me we are probably seeing the exact same thing: another clock speed difference and a slight alteration to turbo functionality to improve responsiveness.

 

I don't really expect an IPC boost until Cannon Lake, when we get the 10nm shrink. Even then, I am not expecting that big of a difference. We've reached a point where it's become increasingly difficult to pull IPC out of thin air (not to mention we all have a twisted idea of what IPC actually means). Either way, if we get 6 cores and 12 threads that can do similar clock speeds to Kaby, and hopefully have it be compatible with current chipsets, that would be plenty for me. If this requires new chipsets and new motherboards, it's going to fail pretty spectacularly in my eyes. After all, this marks the 4th 14nm architecture Intel has pushed out, and it's getting rather stale at this point. If people are going to have to buy a new board/chipset for 6 cores/12 threads, they may as well invest in the X299 platform at that point.

AVX-512 should be a good boost in itself for encoding and rendering applications. Unless someone wants to try running ray-traced effects with it, however (good luck), gaming will probably see jack squat.


Just now, Zodiark1593 said:

AVX-512 should be a good boost in itself for encoding and rendering applications. Unless someone wants to try running ray-traced effects with it, however (good luck), gaming will probably see jack squat.

You should see almost double the throughput, given AVX-512 lets you do twice the operations per clock. It's essentially double the vector width. You just need enough memory bandwidth to keep it fed at that point, which quad channel (and the amazing IMC those chips have) should easily provide.
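A rough sketch of the arithmetic behind that, with the caveat that the core count, clocks, FMA-unit count, and memory speed below are illustrative assumptions rather than measured figures:

```python
# "Double the width" in numbers: AVX2 (256-bit) vs AVX-512 (512-bit) vectors.
lanes_avx2   = 256 // 32   # 8 single-precision floats per instruction
lanes_avx512 = 512 // 32   # 16 per instruction -> 2x the work per clock, all else equal

# Assumed chip: 6 cores, 3.5 GHz, 2 FMA units per core (each FMA = 2 FLOPs).
cores, ghz, fma_units = 6, 3.5, 2
peak_sp_gflops = cores * ghz * fma_units * 2 * lanes_avx512        # 1344 SP GFLOP/s

# Assumed memory: quad-channel DDR4-2666, 8 bytes per transfer per channel.
dram_gbs = 4 * 2.666 * 8                                           # ~85 GB/s theoretical
print(lanes_avx2, lanes_avx512, peak_sp_gflops, round(dram_gbs, 1))

# At peak, each byte from DRAM has to cover ~16 FLOPs (1344 / 85), so sustained
# AVX-512 rates lean heavily on cache reuse; the IMC and extra channels mainly
# keep the misses from stalling everything.
```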


Intel is still being lazy if they can't get Coffee Lake clocked even as high as Skylake, so it looks like I won't be building a PC or getting a new laptop anytime soon (or at least until Zen 2 comes out). I wouldn't expect the i7 to go much above 4.5GHz judging by these base clocks. What really pisses me off is the idea of bringing the base clocks of mobile i7s down to 2GHz. My 4720HQ can do 3.675GHz (all-core load) without any BIOS modding and is just as fast, if not faster, than a 7700HQ, so why not give the option to raise the TDP a little so laptop manufacturers that build actually decent cooling systems can offer something similar to a fully clocked 8700? It astounds me how stupid it is of Intel to screw over the mobile platform like this.

8086k Winner BABY!!

 

Main rig

CPU: R7 5800x3d (-25 all core CO 102 bclk)

Board: Gigabyte B550 AD UC

Cooler: Corsair H150i AIO

Ram: 32gb HP V10 RGB 3200 C14 (3733 C14) tuned subs

GPU: EVGA XC3 RTX 3080 (+120 core +950 mem 90% PL)

Case: Thermaltake H570 TG Snow Edition

PSU: Fractal ION Plus 760w Platinum  

SSD: 1tb Teamgroup MP34  2tb Mushkin Pilot-E

Monitors: 32" Samsung Odyssey G7 (1440p 240hz), Some FHD Acer 24" VA

 

GFs System

CPU: E5 1660v3 (4.3ghz 1.2v)

Mobo: Gigabyte x99 UD3P

Cooler: Corsair H100i AIO

Ram: 32gb Crucial Ballistix 3600 C16 (3000 C14)

GPU: EVGA RTX 2060 Super 

Case: Phanteks P400A Mesh

PSU: Seasonic Focus Plus Gold 650w

SSD: Kingston NV1 2tb

Monitors: 27" Viotek GFT27DB (1440p 144hz), Some 24" BENQ 1080p IPS

 

 

 


17 minutes ago, Taf the Ghost said:

Is Coffee Lake getting a new AVX512 module?

Considering Intel artificially crippled the AVX performance of the 6-8c Skylake-X CPUs, I highly doubt that. If you look at the reviews of the 6-8c Skylake-X CPUs, their AVX performance makes zero sense relative to the rest of the lineup. It's clear Intel tried to differentiate the product stack with artificial limitations.


37 minutes ago, TheDankKoosh said:

What really pisses me off is the idea of bringing the base clocks of mobile i7s down to 2GHz. My 4720HQ can do 3.675GHz (all-core load) without any BIOS modding and is just as fast, if not faster, than a 7700HQ, so why not give the option to raise the TDP a little so laptop manufacturers that build actually decent cooling systems can offer something similar to a fully clocked 8700?

Because that covers like 2 models that get more reviews than sales. 

The vast majority of laptop OEMs will end up putting those inside paper-thin devices, so you may as well just report the actual clocks they're going to run at instead of nominally inflating them only to have them throttle everywhere.

Battery life sells more laptops than Cinebench scores.


So when do you guys think they will release it? Late August, September?


2 hours ago, SpaceGhostC2C said:

Because that covers like 2 models that get more reviews than sales. 

The vast majority of laptop OEMs will end up putting those inside paper-thin devices, so you may as well just report the actual clocks they're going to run at instead of nominally inflating them only to have them throttle everywhere.

Battery life sells more laptops than Cinebench scores.

There are high-powered devices with good battery life, like the AW 15/17, that have good enough cooling to support high-powered CPUs (7820HK), and after a repaste you can have it running at 4.4GHz with no throttling. The AW 15 configured with the 7820HK and a 1070 can get around 7 hours of battery life as long as it doesn't have G-Sync.

So I don't really see the reasoning behind those criticisms. Also, I don't use my laptop only for gaming, but for both GPU- and CPU-intensive tasks like rendering and 3D modeling, and that is why I totally disagree with Intel's direction in the mobile space.


28 minutes ago, TheDankKoosh said:

There are high-powered devices with good battery life, like the AW 15/17, that have good enough cooling to support high-powered CPUs (7820HK)

And they are niche products.

28 minutes ago, TheDankKoosh said:

So I don't really see the reasoning behind those criticisms. Also, I don't use my laptop only for gaming, but for both GPU- and CPU-intensive tasks like rendering and 3D modeling, and that is why I totally disagree with Intel's direction in the mobile space.

Exactly, that is the thing: you disagree with these mobile processors because you extrapolate from your own use case. Intel, however, doesn't make CPUs for you, but for the whole market. Consumers like you exist, but you are not representative of the average laptop buyer...

However, I also think you are assuming too much: just because we have rumors about low-clock mobile CPUs doesn't mean the usual HQ versions won't exist as well. But clearly, what matters for dominating the laptop market is making the CPUs that OEMs are going to put in all the MacBook wannabes.


40 minutes ago, TheDankKoosh said:

There are high-powered devices with good battery life, like the AW 15/17, that have good enough cooling to support high-powered CPUs (7820HK), and after a repaste you can have it running at 4.4GHz with no throttling. The AW 15 configured with the 7820HK and a 1070 can get around 7 hours of battery life as long as it doesn't have G-Sync.

So I don't really see the reasoning behind those criticisms. Also, I don't use my laptop only for gaming, but for both GPU- and CPU-intensive tasks like rendering and 3D modeling, and that is why I totally disagree with Intel's direction in the mobile space.

Their direction is perfectly fine. Yes, the base clock is lower, but the advanced Speed Shift from Kaby Lake allows them to better control power efficiency. They can boost to their max clock speeds in an instant, and with the way it's now integrated with Windows at the hardware level, it operates better than your standard boost tables. Whichever core is fastest at any given time is the core that is utilized for your single-threaded applications. When more than one core is being utilized, the max multi-core boost from your boost table is used. If absolutely every thread is being utilized, all of those threads are boosted (assuming power/thermal limitations are being met).
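If you want to poke at Speed Shift (HWP) yourself, the knobs are visible from userspace. A minimal sketch, assuming a Linux box with the intel_pstate driver in active/HWP mode (the sysfs files below simply won't exist on other setups; on Windows the equivalent lives in the power plan's processor settings):

```python
# Read the Speed Shift / HWP-related knobs exposed by the Linux intel_pstate driver.
# Assumes intel_pstate in active mode on an HWP-capable CPU; prints "n/a" otherwise.
from pathlib import Path

def read(path):
    p = Path(path)
    return p.read_text().strip() if p.exists() else "n/a"

cpu0 = "/sys/devices/system/cpu/cpu0/cpufreq"
print("scaling driver:", read(f"{cpu0}/scaling_driver"))                 # expect "intel_pstate"
print("EPP hint      :", read(f"{cpu0}/energy_performance_preference"))  # e.g. "balance_performance"
print("current freq  :", read(f"{cpu0}/scaling_cur_freq"), "kHz")

# With HWP, the hardware chooses the operating point itself within the min/max range,
# steered only by the EPP hint -- which is why ramp-up feels near-instant compared to
# the old OS-driven P-state requests.
```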

 

Also, I've been working with laptops for quite some time, and I highly doubt your 4720HQ is just as fast as a 7700HQ. Even ignoring the ~7% IPC difference, we are talking about completely different turbo boost efficiency.

 

As long as Intel sticks with this instantaneous Speed Shift tech for mobile CPUs, it's the best way to handle the power efficiency problem. You claim that the AWs achieve 7 hours of battery life with a 1070 (without G-Sync)? You understand that is because of Optimus using the iGPU over the dGPU, right? You also understand the processor's various power states (and again, Kaby's Speed Shift) take care of that entirely, right? Change your voltage from adaptive to static, apply a static clock speed, and I promise you that efficiency goes out the window.

 

If you want something to complain about in the mobile world, go complain to OEMs about their terrible BIOS practices. Artificially locking out features with hardware whitelists, forcing G-Sync not to work on hardware (both panel and GPU) that is perfectly rated for it, just because the BIOS doesn't match up? That's where the real nonsense lies. If not for the work of Premamod and Svet, the mobile enthusiast world would be on its head entirely.


26 minutes ago, MageTank said:

Their direction is perfectly fine. Yes, the base clock is lower, but the advanced Speed Shift from Kaby Lake allows them to better control power efficiency. They can boost to their max clock speeds in an instant, and with the way it's now integrated with Windows at the hardware level, it operates better than your standard boost tables. Whichever core is fastest at any given time is the core that is utilized for your single-threaded applications. When more than one core is being utilized, the max multi-core boost from your boost table is used. If absolutely every thread is being utilized, all of those threads are boosted (assuming power/thermal limitations are being met).

 

Also, I've been working with laptops for quite some time, and I highly doubt your 4720HQ is just as fast as a 7700HQ. Even ignoring the ~7% IPC difference, we are talking about completely different turbo boost efficiency.

 

As long as Intel sticks with this instantaneous Speed Shift tech for mobile CPUs, it's the best way to handle the power efficiency problem. You claim that the AWs achieve 7 hours of battery life with a 1070 (without G-Sync)? You understand that is because of Optimus using the iGPU over the dGPU, right? You also understand the processor's various power states (and again, Kaby's Speed Shift) take care of that entirely, right? Change your voltage from adaptive to static, apply a static clock speed, and I promise you that efficiency goes out the window.

 

If you want something to complain about in the mobile world, go complain to OEMs about their terrible BIOS practices. Artificially locking out features with hardware whitelists, forcing G-Sync not to work on hardware (both panel and GPU) that is perfectly rated for it, just because the BIOS doesn't match up? That's where the real nonsense lies. If not for the work of Premamod and Svet, the mobile enthusiast world would be on its head entirely.

The base clock of 2GHz is Intel's way of saying "more cores, less clock speed" is the way to go, and I really don't agree with that, since plenty of tasks don't scale well across more, slower cores. With that base clock in mind, you then have to think about what the all-core boost will be compared to what already exists; if this doesn't have at least the boost of a 6700HQ (3.1GHz all-core), then it should be dismissed as trash. No amount of efficiency will make up for that lack of performance.

 

Another thing: my 4720HQ is a little faster than a 7700HQ even after accounting for the small IPC difference. All it took was an undervolt, a 3.5GHz turbo bin, and a 105MHz reference clock. While it isn't as efficient as the 7700HQ, at least I have the option to raise the turbo speed and reference clock, and even to downgrade microcode if I want to get more life out of the machine. Skylake/Kaby (the x700HQ parts specifically) don't even let you touch turbo, so that makes the new "8700HQ" pretty shitty considering the low-as-hell base clock.

 

Lastly, I don't care about OEMs limiting G-Sync compatibility, because if I want a G-Sync notebook then I'll just buy one. What I care about is them not holding back what the hardware is capable of through their own means (Razer and Apple power throttling come to mind). I know that many still do that, but that is why I don't buy their hardware. The reason I'm on a G751JY is that it's a machine with no power or thermal throttling that will just work, but I wish NVIDIA had allowed more OC control with mobile Maxwell so that I wouldn't have to go through the trouble of fixing it myself. Same with Intel: they just want you to have a machine that is constantly declining, without any way to give it that extra year or so of life. Don't get me wrong, I love what the custom BIOS makers and beyond do for us as a community; I just wish it wasn't needed in the first place.


3 hours ago, MageTank said:

Considering Intel artificially crippled the AVX performance of the 6-8c Skylake-X CPUs, I highly doubt that. If you look at the reviews of the 6-8c Skylake-X CPUs, their AVX performance makes zero sense relative to the rest of the lineup. It's clear Intel tried to differentiate the product stack with artificial limitations.

It was pointed out over at Overclockers, through testing, that the 6-8 core parts do in fact support full AVX-512 throughput. @done12many2 & @TahoeDust can confirm this.

 

Spoiler

Alright. It looks like the 7800X has been confirmed to have the full-throughput AVX512. So presumably, the 7820X will have it too:

Source: http://www.pcgameshardware.de/Skylake-X-Codename-266252/News/Core-i7-7800X-AVX512-Durchsatz-1232713/

This goes against all the pre-launch reviews about the 7800X and 7820X not having the full AVX512.

 

__________

 

Correct. The guy confirmed with me that it was running at 3.5 GHz under AVX512.

The benchmark shows: 677.76 GFlops

Theoretical limit (half-throughput AVX512): (1 FMA/cycle) * (2 Flops/FMA) * (8 DP/inst for AVX512) * (6 cores) * (3.5 GHz) = 336 GFlops
Theoretical limit (full-throughput AVX512): (2 FMA/cycle) * (2 Flops/FMA) * (8 DP/inst for AVX512) * (6 cores) * (3.5 GHz) = 672 GFlops
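Reproducing the quoted limits (the only inputs are the figures already given above: 6 cores at 3.5GHz, 2 FLOPs per FMA, 8 doubles per 512-bit vector):

```python
# Verify the theoretical AVX-512 ceilings quoted above for a 6-core part at 3.5 GHz.
def peak_gflops(fma_per_cycle, cores=6, ghz=3.5, dp_lanes=8):
    return fma_per_cycle * 2 * dp_lanes * cores * ghz   # 2 FLOPs per FMA

half = peak_gflops(1)       # 336.0 GFLOPS (single FMA port)
full = peak_gflops(2)       # 672.0 GFLOPS (both FMA ports)
measured = 677.76
print(half, full, round(measured / full, 3))   # the run sits at ~1.009x the full-throughput ceiling,
                                               # i.e. both FMA units must be active
```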

 

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


33 minutes ago, TheDankKoosh said:

The base clock of 2GHz is Intel's way of saying "more cores, less clock speed" is the way to go, and I really don't agree with that, since plenty of tasks don't scale well across more, slower cores. With that base clock in mind, you then have to think about what the all-core boost will be compared to what already exists; if this doesn't have at least the boost of a 6700HQ (3.1GHz all-core), then it should be dismissed as trash. No amount of efficiency will make up for that lack of performance.

 

Another thing: my 4720HQ is a little faster than a 7700HQ even after accounting for the small IPC difference. All it took was an undervolt, a 3.5GHz turbo bin, and a 105MHz reference clock. While it isn't as efficient as the 7700HQ, at least I have the option to raise the turbo speed and reference clock, and even to downgrade microcode if I want to get more life out of the machine. Skylake/Kaby (the x700HQ parts specifically) don't even let you touch turbo, so that makes the new "8700HQ" pretty shitty considering the low-as-hell base clock.

 

A lower base clock won't impact performance at all as long as the boost clock can hold up under power/thermal demands. That's essentially why they are pushing Speed Shift 2.0 as hard as they are. It's the same on X299, a non-mobile platform.

 

I still don't see how a 35x multiplier on a 105MHz reference clock (which, by the way, is nearly impossible to maintain on Haswell with how heavily PCIe and the rest of your SA is tied to the BCLK, but let's roll with it) trumps the 3400MHz all-core boost of the 7700HQ, given the huge strides in boost efficiency Skylake and Kaby made relative to Haswell. Anyone who has actually used Haswell Mobile and Kaby Mobile will attest they are worlds apart in responsiveness. Not only that, some OEMs actually allow you to run the 7700HQ at an all-core turbo of 3.8GHz with a pre-configured power limit. It's entirely dependent on how they choose to implement the advanced turbo features. You can even do this on desktop CPUs (such as the 6600T that I used previously). I am pretty sure a 7700HQ at 3.8GHz > a 4720HQ at 3.675GHz.
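Putting rough numbers on that comparison (treating the ~7% IPC figure thrown around in this thread as a flat scaling factor, which is an oversimplification):

```python
# Clock comparison with an assumed flat ~7% IPC advantage for Skylake/Kaby over Haswell.
mult, bclk = 35, 105                 # 4720HQ: 35x multiplier on a 105 MHz reference clock
haswell_mhz = mult * bclk            # 3675 MHz effective
kaby_all_core = {"stock 7700HQ": 3400, "raised-PL 7700HQ": 3800}
ipc_gain = 0.07                      # assumption, not a measurement

for name, mhz in kaby_all_core.items():
    print(f"{name}: {mhz} MHz ~= {round(mhz * (1 + ipc_gain))} 'Haswell-equivalent' MHz "
          f"vs {haswell_mhz} for the overclocked 4720HQ")
# stock lands at ~3638 (roughly a wash), the raised power limit at ~4066 (clearly ahead)
```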

 

43 minutes ago, TheDankKoosh said:

Lastly, I don't care about OEMs limiting G-Sync compatibility, because if I want a G-Sync notebook then I'll just buy one. What I care about is them not holding back what the hardware is capable of through their own means (Razer and Apple power throttling come to mind). I know that many still do that, but that is why I don't buy their hardware. The reason I'm on a G751JY is that it's a machine with no power or thermal throttling that will just work, but I wish NVIDIA had allowed more OC control with mobile Maxwell so that I wouldn't have to go through the trouble of fixing it myself. Same with Intel: they just want you to have a machine that is constantly declining, without any way to give it that extra year or so of life. Don't get me wrong, I love what the custom BIOS makers and beyond do for us as a community; I just wish it wasn't needed in the first place.

You are contradicting yourself. That's not the point. You can buy a notebook that has a G-Sync compatible GPU and a G-Sync compatible panel inside of it. Let's say you are using AUO's 15.6-inch 1080p 60Hz G-Sync panel, and you want to upgrade to AUO's 15.6-inch 1440p 120Hz G-Sync panel. The panel itself will work fine, but G-Sync will be disabled because the panel is not whitelisted in Nvidia's driver or your vBIOS. This is hardware that you paid for, able to use G-Sync, but it can't, because they decided you don't have the right to upgrade your panel as you see fit and retain a feature you potentially want to use. It's no different than using an arbitrary power limit (the thing you are annoyed with Razer and Apple for doing, which literally every OEM, Dell included, does). It's controlling what you can and can't do with the hardware you paid for.

 

Intel going with a lower base clock means absolutely nothing. It's still going to perform just as well once it's boosted. Blame the vendors for using crippled power limits and subpar cooling solutions in conjunction with these CPUs. You want an overclockable laptop? Buy an overclockable laptop.


@TheDankKoosh What Mage is saying is that on Kaby Lake, Skylake-X, and future Intel CPUs, the base clock will be a compromise: lower performance in exchange for seriously lower power draw and thermals when high performance is not needed.

Then there is the boost clock table, with a high all-core boost that gives strong multi-core performance when needed (and when thermal and power headroom allow for it), plus progressively higher boost bins as fewer cores are loaded, topping out at the single-core boost.

It's a method of power saving and lowering thermals. Crucial on ULV processors, very nice to have on Kaby, and necessary for Sky X on underpowered (in terms of cooling) configs.

 

If Intel decided to make a highly power-efficient version of the 7700K by reducing the base clock to 2.1GHz while keeping the 7700K's boost table, it'd still be the same chip giving the same performance under the same heavy loads; it would only idle lower.
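To make the base-clock-versus-boost-table idea concrete, here is a toy sketch; the frequencies and bins are made-up example values, not any real chip's specification:

```python
# Toy model: a chip advertises a low base clock but carries a per-core-count boost table.
boost_table_mhz = {1: 4500, 2: 4400, 4: 4200, 6: 4000}   # active cores -> boost bin (example values)
base_clock_mhz = 2100                                     # the guaranteed floor the TDP is rated for

def target_clock(active_cores, within_power_and_thermal_limits):
    if not within_power_and_thermal_limits:
        return base_clock_mhz
    # use the bin defined for this many (or more) active cores
    eligible = [mhz for cores, mhz in boost_table_mhz.items() if cores >= active_cores]
    return max(eligible) if eligible else base_clock_mhz

print(target_clock(1, True))    # 4500 -> single-core boost
print(target_clock(6, True))    # 4000 -> all-core boost, same as a "high base" chip under load
print(target_clock(6, False))   # 2100 -> only falls back to base when limits are actually hit
```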


6 minutes ago, MageTank said:

A lower base clock won't impact performance at all as long as the boost clock can hold up under power/thermal demands. That's essentially why they are pushing Speed Shift 2.0 as hard as they are. It's the same on X299, a non-mobile platform.

 

I still don't see how a 35x multiplier on a 105MHz reference clock (which, by the way, is nearly impossible to maintain on Haswell with how heavily PCIe and the rest of your SA is tied to the BCLK, but let's roll with it) trumps the 3400MHz all-core boost of the 7700HQ, given the huge strides in boost efficiency Skylake and Kaby made relative to Haswell. Anyone who has actually used Haswell Mobile and Kaby Mobile will attest they are worlds apart in responsiveness. Not only that, some OEMs actually allow you to run the 7700HQ at an all-core turbo of 3.8GHz with a pre-configured power limit. It's entirely dependent on how they choose to implement the advanced turbo features. You can even do this on desktop CPUs (such as the 6600T that I used previously). I am pretty sure a 7700HQ at 3.8GHz > a 4720HQ at 3.675GHz.

 

Intel going with a lower base clock means absolutely nothing. It's still going to perform just as well once it's boosted. Blame the vendors for using crippled power limits and subpar cooling solutions in conjunction with these CPUs. You want an overclockable laptop? Buy an overclockable laptop.

First, the 8700HQ will be worse at holding power and thermal limits, since Intel has done next to nothing on 14nm to truly improve full-load efficiency other than adding a few hundred MHz to the same CPU since Skylake came out, and that will be entirely negated because I guarantee the 8700HQ won't even have the 3.1GHz all-core max that the 6700HQ had. Speed Shift is for switching from low to high loads quickly, and I can deal with a 50ms delay for my 4720HQ to ramp up to full speed and stay there.

 

Secondly, I have used both platforms and I don't find a tangible real-world difference between them. That might be because I'm a power user, and most others aren't.

 

Third, what I'm trying to say is that the low base clock is most likely indicative of a low boost as well, and that isn't good, since I highly doubt they have made any tangible IPC gain. Speed Shift cannot make clock speed appear out of thin air, so if there is no IPC gain, there is actually a decrease in performance except in programs that use the extra cores and threads.


3 minutes ago, Drak3 said:

@TheDankKoosh What Mage is saying is that on Kaby Lake, Skylake-X, and future Intel CPUs, the base clock will be a compromise: lower performance in exchange for seriously lower power draw and thermals when high performance is not needed.

Then there is the boost clock table, with a high all-core boost that gives strong multi-core performance when needed (and when thermal and power headroom allow for it), plus progressively higher boost bins as fewer cores are loaded, topping out at the single-core boost.

It's a method of power saving and lowering thermals. Crucial on ULV processors, very nice to have on Kaby, and necessary for Sky X on underpowered (in terms of cooling) configs.

 

If Intel decided to make a highly power-efficient version of the 7700K by reducing the base clock to 2.1GHz while keeping the 7700K's boost table, it'd still be the same chip giving the same performance under the same heavy loads; it would only idle lower.

I understand what Speed Shift is for, but considering that the 7700HQ still has a fairly high base of 2.8GHz, what is the actual point of bringing the base clock down that far if the chips have the same TDP? It makes it seem like they are drastically lowering boost speeds as well.

