
7900X reviewed!

PCGuy_5960

On the X299 power use/heat issues, one point I keep finding in reviews is that boards implemented the Turbo Boost Max 3.0 functions before Intel had finalized them; the final code only seemed to hit the production-level BIOS going into the weekend.

 

Power use might simply be better in a week or two, once certain details are sorted out in the BIOS. So that's something to be mindful of. But since a class of forum members wants high all-core turbos, enjoy the custom water loops!


2 minutes ago, Taf the Ghost said:

On the X299 power use/heat issues, one point I keep finding in reviews is that boards implemented the Turbo Boost Max 3.0 functions before Intel had finalized them; the final code only seemed to hit the production-level BIOS going into the weekend.

 

Power use might simply be better in a week or two, once certain details are sorted out in the BIOS. So that's something to be mindful of. But since a class of forum members wants high all-core turbos, enjoy the custom water loops!

A BIOS update is not going to fix the TIM.


3 minutes ago, The Benjamins said:

A BIOS update is not going to fix the TIM.

It's not going to change issues above 4.5 GHz all-core, but some of the systems are pretty much starting at a large base-clock OC rather than at actual "stock". So baseline power usage might actually drop for normal users in a few BIOS revisions, simply because the system is automatically all-core OC'ing when it really shouldn't be.
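If you want to check what your own board is actually running at "stock", read the clocks directly rather than trusting the BIOS screen. A minimal sketch, assuming Linux with the standard cpufreq sysfs nodes; run it while an all-core load is active:

#include <stdio.h>

/* Print each core's current clock so a silent all-core OC stands out.
   Reads the standard Linux cpufreq sysfs nodes; values are in kHz. */
int main(void) {
    char path[128];
    for (int cpu = 0; ; cpu++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                          /* ran out of cores */
        long khz = 0;
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu%-3d %.2f GHz\n", cpu, khz / 1e6);
        fclose(f);
    }
    return 0;
}

If every core reports well above the rated all-core turbo under load, the board is "enhancing" you.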


Just now, Taf the Ghost said:

It's not going to change issues above 4.5 GHz all-core, but some of the systems are pretty much starting at a large base-clock OC rather than at actual "stock". So baseline power usage might actually drop for normal users in a few BIOS revisions, simply because the system is automatically all-core OC'ing when it really shouldn't be.

Which makes you wonder how many of the reviews that label their results as stock are actually running truly stock settings.


4 minutes ago, leadeater said:

Which makes you wonder how many of the reviews that label their results as stock are actually running truly stock settings.

That can be very deceptive. This had better not be something done on purpose.


1 minute ago, The Benjamins said:

That can be very deceptive. This had better not be something done on purpose.

 

6 minutes ago, leadeater said:

Which makes you wonder how many of the reviews that label their results as stock are actually running truly stock settings.

 

Honestly, for some I think it's just an error, since they were in such a rush to get reviews out. Linus already had to sort out his price-to-performance graph as well.

I do think the EU reviews will be significantly more in-depth and better, since none of them have received anything from Intel. So when they do get CPUs and motherboards, they'll have the time to do things thoroughly.

I'm really looking forward to Computerbase.


1 minute ago, leadeater said:

Which makes you wonder how many of the reviews that label their results as stock are actually running truly stock settings.

OC3D specifically mentioned that one of the boards started itself out at 4.0 GHz all-core on the 8-core i7-7820X. Stock is 3.6 GHz on that. (He also mentioned he got a bootable 5 GHz on his dual-rad water cooling, but it wasn't completely stable in all programs.) These really are, at least through the i9-7900X, Skylake cores with a changed cache system. That means they get the same potential clocks, but you're also getting the same level of power draw.

 

Though I've also noticed that the review guides are pointing reviewers to a few games that might respond to the new cache system better. DX12 testing being pushed... hmm. Also, IPC isn't actually uplifted in Skylake-X. It's a sideways upgrade at best, though for server-like stuff that likes the larger L2 cache, it definitely performs better.
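The cache restructuring (1 MB L2, a smaller non-inclusive L3, mesh instead of ring) shows up even in a crude pointer-chase test. A rough sketch, not a rigorous benchmark (no core pinning, no huge pages); latency per hop steps up as the working set falls out of L2 and then L3:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define HOPS 20000000UL

int main(void) {
    srand(42);
    for (size_t kb = 64; kb <= 65536; kb *= 2) {
        size_t n = kb * 1024 / sizeof(size_t);
        size_t *next = malloc(n * sizeof *next);
        if (!next) return 1;
        for (size_t i = 0; i < n; i++)
            next[i] = i;
        /* Sattolo's algorithm builds one big cycle, so the chase visits
           every element and the prefetcher can't guess the next hop. */
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }
        volatile size_t p = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (unsigned long h = 0; h < HOPS; h++)
            p = next[p];                    /* each hop is a dependent load */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%6zu KB  %5.1f ns/hop\n", kb, ns / HOPS);
        free(next);
    }
    return 0;
}

Run the same binary on Broadwell-E and Skylake-X and the step points move with the new cache sizes.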


6 minutes ago, Taf the Ghost said:

OC3D specifically mentioned that one of the boards started itself out at 4.0 GHz all-core on the 8-core i7-7820X. Stock is 3.6 GHz on that. (He also mentioned he got a bootable 5 GHz on his dual-rad water cooling, but it wasn't completely stable in all programs.) These really are, at least through the i9-7900X, Skylake cores with a changed cache system. That means they get the same potential clocks, but you're also getting the same level of power draw.

 

Though I've also noticed that the review guides are pointing reviewers to a few games that might respond to the new cache system better. DX12 testing being pushed... hmm. Also, IPC isn't actually uplifted in Skylake-X. It's a sideways upgrade at best, though for server-like stuff that likes the larger L2 cache, it definitely performs better.

Or in some cases for gaming, the IPC is much lower due to the mesh cache. So it has higher clocks, but performs worse than Broadwell-E. Whereas in normal CPU workloads, the cache and inter-core latency aren't an issue.


PCPer

https://www.pcper.com/reviews/Processors/Intel-Core-i9-7900X-10-core-Skylake-X-Processor-Review/1080p-Gaming-Performance-a

Quote

First, the 7900X is never faster than the 6950X in the gaming tests we ran, although it is able to match performance in Ashes, Deus Ex: Human Revolution, Ghost Recon Wildlands, and GTA V (mostly). However, take a look at Civ 6, Far Cry Primal, or even Hitman and Rise of the Tomb Raider: these show the new 7900X as slower than the 6950X in gaming, even with its substantial clock speed advantages. How? Remember back to this graph I showed you on thread-to-thread communication latency?

 

 

 


7 minutes ago, Taf the Ghost said:

OC3D specifically mentioned that one of the boards started itself out at 4.0 GHz all-core on the 8-core i7-7820X. Stock is 3.6 GHz on that.

Yeah, I watched that video, which is why it came to mind when the subject was brought up. If results have been skewed, then the 7900X might not be as good at stock as we've been shown, but on the flip side the overclock results are better, so... keep waiting and see how things go, I guess.


3 minutes ago, Valentyn said:

Or in some cases for gaming, the IPC is much lower due to the mesh cache. So it has higher clocks, but performs worse than Broadwell-E. Whereas in normal CPU workloads, the cache and inter-core latency aren't an issue.


PCPer

https://www.pcper.com/reviews/Processors/Intel-Core-i9-7900X-10-core-Skylake-X-Processor-Review/1080p-Gaming-Performance-a

 

 

 

 

Yup, saw that. Though I think it's a little less about core-to-core latency and more about the way some game engines clearly thrash through the L3 cache to get around other issues. This is the reason Ryzen saw some massive uplift in a few games after optimization. Some part of the way the Ring Bus works just isn't very friendly to other cache schemes.

 

It's something that's been noticeable since Ryzen dropped. In all other areas, the performance uplift from Sandy Bridge is actually pretty massive, but the "average" over gaming benchmarks at 1080p is quite small. The biggest issue is that modern game engines are built in a way that leans on some part of the Ring Bus to break through a bottleneck. I don't know the technicals well enough to sort out the "where", but it's pretty clear from all of the benchmark charts. When games aren't responding to IPC or clock speed uplifts in a consistent manner, there's something else going on.


6 minutes ago, leadeater said:

Yeah, I watched that video, which is why it came to mind when the subject was brought up. If results have been skewed, then the 7900X might not be as good at stock as we've been shown, but on the flip side the overclock results are better, so... keep waiting and see how things go, I guess.

It might also explain a few outlier Cinebench R15 results. (I'm not sure Cinebench is a great benchmarking tool, but it's nicely consistent across passes and CPUs, so you can notice trends.) But the main thing is, well, the BIOSes were still getting updated through the weekend. X299 was probably supposed to come out in the window Coffee Lake is going to end up filling, so we're still getting some of the rushed aspects. Not as bad as Ryzen, but still a tad early.

 

Core-to-core latency is going to be a really big issue when Threadripper and the i9-7920X (the 12-core part) drop.
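Core-to-core latency is easy to put a number on with a two-thread ping-pong through a shared cache line. A sketch only (build with gcc -O2 -pthread on Linux; pick which two cores you measure from outside, e.g. taskset -c 0,9 ./a.out):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000
static atomic_int flag;                     /* bounces between the two cores */

static void *responder(void *arg) {
    (void)arg;
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                               /* spin until pinged */
        atomic_store_explicit(&flag, 0, memory_order_release);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, responder, NULL);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
            ;                               /* spin until answered */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns per round trip\n", ns / ROUNDS);
    return 0;
}

On a mesh (or across Threadripper's two dies) the round trip varies a lot with which two cores you pick; on a ring it's much more uniform.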


5 hours ago, tom_w141 said:

This is what I was trying to get at, and it's the reason the average HEDT user shouldn't go for X299. @done12many2 will actually use features like AVX-512, but 99% of people won't; therefore, as I said in a previous post, it's a pointless bragging right that nets them no gains. Let's be honest here: the majority of you are going to game and stream/content create, so when I show you how underwhelming the 7820X is for gaming (a 5-15% gain at 1080p, a resolution enthusiasts don't play at anyway), it is to prevent you wasting your money.

 

I'll stop talking about AVX when common software stops using it. AVX is used quite a bit more than the average person thinks.

 

Skylake-X is HEDT. This platform is going to be used by consumers and prosumers alike. With that said, your exaggerated "99% of people won't" is way off. As I've done before and I'll do again, here are a few of the things that use AVX (a minimal example of what AVX actually buys you follows the list). They might be relevant to more than the 1% you claim.

 

Adobe Photoshop

Adobe Premiere

Adobe Lightroom

Adobe After Effects

Microsoft Excel

WinRAR

7-Zip

x265 video encoding

x264 video encoding (not as much as x265)
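For anyone wondering what "uses AVX" actually means: one 256-bit instruction operates on eight floats at once instead of one at a time. A minimal sketch with AVX2/FMA intrinsics (build with gcc -O2 -mavx2 -mfma):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8] = {0};

    __m256 va = _mm256_loadu_ps(a);         /* load 8 floats in one go */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_loadu_ps(c);

    vc = _mm256_fmadd_ps(va, vb, vc);       /* c = a*b + c across all 8 lanes */
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}

Video encoders and the Adobe filters do exactly this kind of thing across whole frames, which is why the list above is so AVX-heavy.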

 

Quote

Linus already has a video showing how great the R7 is at the above use case.

 

Linus also showed us how to fuck up a delid, so I'm not sure of your point? I'm not sure that too many people outside of the less experienced treat him as any kind of final say.

 

JayzTwoCents really liked his R7 content creation setup. Out of curiosity, he compared video editing on a 5960X (a 2+ year old Intel 8-core) and a Ryzen 1800X, both clocked at 4 GHz. Content creation is one of Ryzen's strong points, and for the money, it did pretty well.

 

End result: at the same clock speed, the 5960X was roughly 9% faster. When he pushed the Intel chip to 4.5 GHz, that rose to roughly 15% faster in the video workload he was using. Obviously, he would have pushed the Ryzen 1800X to 4.5 GHz as well, but he didn't have that option.

 

I know, I know. But the 5960X cost more money. Yes it did, but people who bought it 2.5 years ago either needed or wanted it. With that said, one could argue that they've seen a pretty good return on their money, considering that they had the opportunity to enjoy that kind of performance roughly 2 years ahead of Ryzen R7 owners.

 

Value is relative.  

 

 

 

 

Quote

It's your money, guys, but the performance is meh (for the majority use case), the cost is high (triple the R7 1700 platform), power draw is insane, and heat is high (inb4 "I'm delidding it ofc noob": I'm saying you shouldn't have to delid. Stop defending Intel, for god's sake, and just demand better quality).

 

Bud, the thing you continue to miss is that without value factored in, the performance of a 7900X or 7820X compared to an R7 chip, especially when overclocked, is far from "meh". The Intel versions simply beat the AMD versions every time. That's why Intel continues to suck the money out of the people that know.

 

Everybody here already understands that Intel's chips are a terrible value.  You know we all know that.  The thing that you're trying to do is get everyone to believe a fairy tale that you can buy the cheaper part and still outperform the more expensive part.  You can't.  With that said, we know, it's not a great value.  

 


5 minutes ago, done12many2 said:

Adobe Photoshop

Adobe Premiere

Adobe Lightroom

Adobe After Effects

Microsoft Excel

WinRAR

7-Zip

x265 video encoding

x264 video encoding (not as much as x265)

And CAD ;)


2 minutes ago, done12many2 said:

 

I'll stop talking about AVX when common software stops using it. AVX is used quite a bit more than the average person thinks.

 

Skylake-X is HEDT. This platform is going to be used by consumers and prosumers alike. With that said, your exaggerated "99% of people won't" is way off. As I've done before and I'll do again, here are a few of the things that use AVX. They might be relevant to more than the 1% you claim.

 

Adobe Photoshop

Adobe Premiere

Adobe Lightroom

Adobe After Effects

Microsoft Excel

WinRAR

7-Zip

x265 video encoding

x264 video encoding (not as much as x265)

 

 

Linus also showed us how to fuck up a delid, so I'm not sure of your point? I'm not sure that too many people outside of the less experienced treat him as any kind of final say.

 

JayzTwoCents really liked his R7 content creation setup. Out of curiosity, he compared video editing on a 5960X (a 2+ year old Intel 8-core) and a Ryzen 1800X, both clocked at 4 GHz.

 

End result: at the same clock speed, the 5960X was roughly 9% faster. When he pushed the Intel chip to 4.5 GHz, that rose to roughly 15% faster in the video workload he was using. Obviously, he would have pushed the Ryzen 1800X to 4.5 GHz as well, but he didn't have that option.

 

I know, I know. But the 5960X cost more money. Yes it did, but people who bought it 2.5 years ago either needed or wanted it. With that said, one could argue that they've seen a pretty good return on their money, considering that they had the opportunity to enjoy that kind of performance roughly 2 years ahead of Ryzen R7 owners.

Value is relative.  

 

 

 

 

 

Bud, the thing you continue to miss is that without value factored in, the performance of a 7900X or 7820X compared to an R7 chip, especially when overclocked, is far from "meh". The Intel versions simply beat the AMD versions every time. That's why Intel continues to suck the money out of the people that know.

 

Everybody here already understands that Intel's chips are a terrible value.  You know we all know that.  The thing that you're trying to do is get everyone to believe a fairy tale that you can buy the cheaper part and still outperform the more expensive part.  You can't.  With that said, we know, it's not a great value.  

 

Yes, they use AVX, but Ryzen supports AVX, just not AVX-512.

 

The $999 7900X competes against the $849 16-core Threadripper, not the $280 R7, so I'm not sure why that's relevant. Those gains are meh for gaming, which is certainly what a lot of HEDT users still do, even though the mainstream 7700K would do it better. For it to be above meh, for me the performance uplift would have to be >30% minimum, which it just isn't. 10-20% for that price is madness (but I'm not going to focus on that much here, as value is subjective). Btw, please quote anywhere I have said the R7 will outperform the more expensive part and I will correct it immediately.
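Checking which AVX levels a chip actually reports is trivial, by the way. A quick sketch with the GCC/Clang cpuid helpers (x86-only; __get_cpuid_count and the bit_* masks need a reasonably recent compiler):

#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned a, b, c, d;

    __get_cpuid(1, &a, &b, &c, &d);          /* leaf 1: AVX flag is in ECX */
    printf("AVX:      %s\n", (c & bit_AVX) ? "yes" : "no");

    __get_cpuid_count(7, 0, &a, &b, &c, &d); /* leaf 7: AVX2/AVX-512 in EBX */
    printf("AVX2:     %s\n", (b & bit_AVX2) ? "yes" : "no");
    printf("AVX-512F: %s\n", (b & bit_AVX512F) ? "yes" : "no");
    return 0;
}

Ryzen prints yes/yes/no; Skylake-X prints yes/yes/yes.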


2 minutes ago, tom_w141 said:

Yes they use AVX but Ryzen "supports" AVX just not AVX512.

THIS IS RYZEN'S TRUE AVX PERFORMANCE:

[attached chart: Ryzen AVX benchmark results]


6 minutes ago, PCGuy_5960 said:

And CAD ;)

 

Oh, there's tons more, but I didn't see the need as the argument is silly.  

 

I don't need it, therefore nobody else does, right?

 

5 minutes ago, tom_w141 said:

Yes, they use AVX, but Ryzen supports AVX, just not AVX-512.

 

Yes, but AVX and AVX2 throughput is intentionally halved in Ryzen (Zen executes 256-bit ops as two 128-bit halves). That's a fact. Then add the lack of AVX-512, which at this point is no big deal.
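That halving is measurable: with enough independent FMA chains to be throughput-bound rather than latency-bound, a 256-bit loop roughly doubles the GFLOP/s over a 128-bit loop on Skylake, but gains far less on Zen 1, because each 256-bit op occupies both 128-bit FMA pipes. A rough probe, sketch only (gcc -O2 -mavx2 -mfma):

#include <immintrin.h>
#include <stdio.h>
#include <time.h>

#define ITERS 50000000UL    /* 8 FMAs per iteration in each loop */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    /* Eight independent accumulators so the chains don't serialize and
       we measure FMA throughput rather than FMA latency. */
    __m128 m = _mm_set1_ps(0.999999f), s = _mm_set1_ps(1.0f);
    __m128 a0=s,a1=s,a2=s,a3=s,a4=s,a5=s,a6=s,a7=s;
    double t = now();
    for (unsigned long i = 0; i < ITERS; i++) {
        a0=_mm_fmadd_ps(a0,m,m); a1=_mm_fmadd_ps(a1,m,m);
        a2=_mm_fmadd_ps(a2,m,m); a3=_mm_fmadd_ps(a3,m,m);
        a4=_mm_fmadd_ps(a4,m,m); a5=_mm_fmadd_ps(a5,m,m);
        a6=_mm_fmadd_ps(a6,m,m); a7=_mm_fmadd_ps(a7,m,m);
    }
    double t128 = now() - t;

    __m256 w = _mm256_set1_ps(0.999999f), z = _mm256_set1_ps(1.0f);
    __m256 b0=z,b1=z,b2=z,b3=z,b4=z,b5=z,b6=z,b7=z;
    t = now();
    for (unsigned long i = 0; i < ITERS; i++) {
        b0=_mm256_fmadd_ps(b0,w,w); b1=_mm256_fmadd_ps(b1,w,w);
        b2=_mm256_fmadd_ps(b2,w,w); b3=_mm256_fmadd_ps(b3,w,w);
        b4=_mm256_fmadd_ps(b4,w,w); b5=_mm256_fmadd_ps(b5,w,w);
        b6=_mm256_fmadd_ps(b6,w,w); b7=_mm256_fmadd_ps(b7,w,w);
    }
    double t256 = now() - t;

    /* Print a value derived from every accumulator so none are dead code. */
    __m128 sa = _mm_add_ps(_mm_add_ps(a0,a1), _mm_add_ps(a2,a3));
    sa = _mm_add_ps(sa, _mm_add_ps(_mm_add_ps(a4,a5), _mm_add_ps(a6,a7)));
    __m256 sb = _mm256_add_ps(_mm256_add_ps(b0,b1), _mm256_add_ps(b2,b3));
    sb = _mm256_add_ps(sb, _mm256_add_ps(_mm256_add_ps(b4,b5), _mm256_add_ps(b6,b7)));
    printf("sink %g %g\n", _mm_cvtss_f32(sa), _mm256_cvtss_f32(sb));
    printf("128-bit: %.1f GFLOP/s\n", ITERS * 8 * 4 * 2 / t128 / 1e9);
    printf("256-bit: %.1f GFLOP/s\n", ITERS * 8 * 8 * 2 / t256 / 1e9);
    return 0;
}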

 

11 minutes ago, done12many2 said:

Bud, the thing you continue to miss is that without value factored in, the performance of a 7900X or 7820X compared to an R7 chip, especially when overclocked, is far from "meh". The Intel versions simply beat the AMD versions every time. That's why Intel continues to suck the money out of the people that know.

 

Everybody here already understands that Intel's chips are a terrible value.  You know we all know that.  The thing that you're trying to do is get everyone to believe a fairy tale that you can buy the cheaper part and still outperform the more expensive part.  You can't.  With that said, we know, it's not a great value.  

 

 

Quote

The $999 7900X competes against the $849 16-core Threadripper, not the $280 R7, so I'm not sure why that's relevant. Those gains are meh for gaming, which is certainly what a lot of HEDT users still do, even though the mainstream 7700K would do it better. For it to be above meh, for me the performance uplift would have to be >30% minimum, which it just isn't. 10-20% for that price is madness (but I'm not going to focus on that much here, as value is subjective). Btw, please quote anywhere I have said the R7 will outperform the more expensive part and I will correct it immediately.

 

Damn man, I even BOLDED and enlarged the "without value factored in" and you still based your argument on value.  This is going beyond irrational. 

 

Which chip competes against another based on price is YOUR way of thinking.  Not everyone's.  

 

 


2 minutes ago, done12many2 said:

 

Oh, there's tons more, but I didn't see the need as the argument is silly.  

 

I don't need it, therefore nobody else does, right?

 

 

Yes, but AVX and AVX2 throughput is intentionally halved in Ryzen (Zen executes 256-bit ops as two 128-bit halves). That's a fact. Then add the lack of AVX-512, which at this point is no big deal.

 

 

 

Damn man, I even BOLDED and enlarged the "without value factored in" and you still based your argument on value.  This is going beyond irrational. 

 

Which chip competes against another based on price is YOUR way of thinking.  Not everyone's.  

 

 

Fine, base it on cores if it suits your argument better. Why are you comparing a 10-core to an 8-core? There is literally no reason these CPUs should compete.


Just now, tom_w141 said:

1 graph tells no story.

 

Here is plenty

That graph shows pure AVX performance with nothing else taken into account, so that's how Ryzen truly performs when using AVX (and just AVX).


19 minutes ago, done12many2 said:

Bud, the thing you continue to miss is that without value factored in, the performance of a 7900X or 7820X compared to an R7 chip, especially when overclocked, is far from "meh". The Intel versions simply beat the AMD versions every time. That's why Intel continues to suck the money out of the people that know.

 

Everybody here already understands that Intel's chips are a terrible value.  You know we all know that.  The thing that you're trying to do is get everyone to believe a fairy tale that you can buy the cheaper part and still outperform the more expensive part.  You can't.  With that said, we know, it's not a great value.  

 

 

6 minutes ago, tom_w141 said:

Fine, base it on cores if it suits your argument better. Why are you comparing a 10-core to an 8-core?

 

Please go back and read.

 

Man you really are trying too hard lately.  The Tom I first started debating with was on his "A" game, but somebody else is logged into your account right now.

 

Either one, 8-core or 10-core, will outperform an R7.


4 minutes ago, tom_w141 said:

Why are you comparing a 10-core to an 8-core?

Alright then, do you want to compare Cinebench results? @done12many2, post your 5960X at 4.9 GHz score, and @tom_w141, post your 1700 score... And run y-cruncher (it's AVX-intensive).


4 minutes ago, done12many2 said:

 

 

Please go back and read.

 

Man you really are trying too hard lately.  The Tom I first started debating with was on his "A" game, but somebody else is logged into your account right now.

 

Either one, 8-core or 10-core, will outperform an R7.

I know you said "or" :) I just can't fathom why a 10-core should even weigh in here. 8 vs 8 I get it; 10 vs 8 I don't.


Just now, tom_w141 said:

I know you said "or" :) I just can't fathom why a 10-core should even weigh in here. 8 vs 8 I get it; 10 vs 8 I don't.

 

So how about we just forget the whole Ryzen vs Skylake-X thing because they weren't even meant to compete with one another?  

 

When Threadripper is here, we'll get back to arguing.  :D

