Puffing

Help me make a 9900k rig

Recommended Posts

Posted · Original Poster (OP)
9 minutes ago, jerubedo said:

For Premiere Pro the 9600K beats the 7700K overall. Here's the data on the 7700K vs the 8600K. Note that the 9600K is slightly faster at stock and can generally OC about 100 MHz higher than the 8600K:

 

[attached image: 7700K vs 8600K Premiere Pro benchmark chart]

I saw that the Ryzen 2700X was better; is this false?

Posted · Original Poster (OP)
18 hours ago, Firewrath9 said:

$1200?

oof, what do you do? a 9700K or a 2700X might be a better fit (it probably will be)

My confusion starts with the benchmarks made by Puget Systems and Gamers Nexus... Puget says that the Ryzen 2700X has an edge but GN says that the Intel i7 processors have a huge advantage... this is where I need input.

3 hours ago, Puffing said:

My confusion starts with the benchmarks made by Puget Systems and Gamers Nexus... Puget says that the Ryzen 2700X has an edge but GN says that the Intel i7 processors have a huge advantage... this is where I need input.

For Premiere and Adobe apps in general, Intel is better due to single-threaded performance. The 9700K and 9900K are very similar; Puget says that as well, and there's Intel Quick Sync too.


Beta!  X570 VRM + Features list (PM me if you want to help) 

 

Will prob be making an X570 "tier" list when that is completed.

Some Custom Loop Questions, please help answer them.

 

PC (Main)

 

CPU: i5-8400 CPU Cooler: Cryorig M9 Plus   Motherboard: Gigabyte B360M DS3H | RAM: Crucial Ballistix Sport 2x8 DDR4-2400

 Boot/OS SSD: Inland 480GB SSD | Video Card: RX 570 4GB Strix OC | Case: Fractal Design Meshify C White TG (11/10) PSU: EVGA SuperNOVA G3 750

Monitor: Sceptre 24" 1080p 75hz Webcam: Logitech C920s

 

NAS:

Synology DS418J w/ 4x WD Red Pro 6TB RAID 10 (used 7.3/12TB)

 

Phone/Tablet:

iPhone XR 64GB iPad Mini 4 128GB

 

Laptops:

Dell XPS 15 9570 i7-8750H + 1050 ti + 20GB ram (16+4) + 1TB EX920 SSD

 

My old computers:

Athlon XP + 2GB DDR1 (!) + R9600 Pro 256MB Core 2 Quad Q8400 @ 3.4ghz + GTX 275 | i5-3570k @ 4.4ghz + GTX 


Latest Puget Premiere Pro CPU benchmarks: https://www.pugetsystems.com/labs/articles/Premiere-Pro-CC-2019-CPU-Roundup-Intel-vs-AMD-vs-Mac-1320/. It rates the 2700X and i7-9700K as roughly equivalent.

 

This build is about $100 over budget ($1200 + $200), and is a bit light on storage. But it squeezes in an i9-9900K, an RTX 2070, and 32GB of memory.

 

PCPartPicker Part List

CPU: Intel - Core i9-9900K 3.6 GHz 8-Core Processor  ($494.89 @ B&H) 
CPU Cooler: be quiet! - Dark Rock Pro 4 50.5 CFM CPU Cooler  ($88.09 @ Amazon) 
Motherboard: Gigabyte - Z390 AORUS PRO ATX LGA1151 Motherboard  ($169.98 @ Amazon) 
Memory: Corsair - Vengeance LPX 32 GB (2 x 16 GB) DDR4-3000 Memory  ($164.99 @ Newegg) 
Storage: Crucial - MX500 1 TB 2.5" Solid State Drive  ($119.99 @ Adorama) 
Video Card: Gigabyte - GeForce RTX 2070 8 GB WINDFORCE Video Card  ($454.99 @ Newegg) 
Case: be quiet! - Dark Base 700 ATX Mid Tower Case  (Purchased For $0.00) 
Power Supply: Corsair - RMx (2018) 850 W 80+ Gold Certified Fully Modular ATX Power Supply  (Purchased For $0.00) 
Total: $1492.93
Generated by PCPartPicker 2019-05-22 14:34 EDT-0400

 

 


80+ ratings certify electrical efficiency. Not quality.

 

4 hours ago, Puffing said:

I saw that the Ryzen 2700X was better; is this false?

Yes, that's true: the 2700X is better for Premiere Pro by roughly 25%. As far as mainstream processors go, the order they fall in for Premiere Pro is:

 

1) i9 9900K

2) R7 2700X

3) i7 9700K

4) R7 2700

5) R5 2600X

6) i7 8700K

7) R5 2600

8) i5 9600K

9) i7 8700

10) i5 8600K

11) i5 9400F

12) i5 8400

 

For gaming:

 

1) i9 9900K

2) i7 9700K

3) i7 8700K

4) i5 9600K

5) i5 8600K

6) i7 8700 

7) i5 9400F

8) R7 2700X

9) i5 8400

10) R7 2700

11) R5 2600X

12) R5 2600

 

So, now you just have to figure out what the split in your load is going to be and choose which one will best fit your overall needs. Will it be an 80/20 split of editing to gaming? In that case I'd say the 2700X (the 9900K is just too expensive for your budget, but it would be the best of both worlds). Will it be more of a 50/50 split? Then maybe the 9700K. A 30/70 split? I'd say the 9600K then, in order to fit in a better GPU.
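That recommendation logic can be sketched as a simple chooser. The thresholds below are my reading of the splits named in the post (80/20, 50/50, 30/70), not exact rules given by the author:

```python
# Sketch of the workload-split heuristic described above.
# Thresholds are assumptions inferred from the post's examples.

def recommend_cpu(editing_share: float) -> str:
    """editing_share: fraction of workload that is video editing (0.0-1.0)."""
    if editing_share >= 0.65:   # e.g. an 80/20 editing:gaming split
        return "R7 2700X"
    if editing_share >= 0.40:   # e.g. a 50/50 split
        return "i7 9700K"
    return "i5 9600K"           # e.g. a 30/70 split, freeing budget for the GPU

print(recommend_cpu(0.8))  # R7 2700X
print(recommend_cpu(0.5))  # i7 9700K
print(recommend_cpu(0.3))  # i5 9600K
```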

11 hours ago, jerubedo said:

 

It tied in 0.1% lows: 61 vs 61. In 1% lows, yes, the 9600K did 9 FPS worse: 72 FPS for the 9600K and 81 FPS for the 9700K. But it still performed second best in that regard, functionally tied with the 2700X for 1% lows. It's also WAY cheaper than the 9700K. It's by no means a bad experience or a bottleneck. The word bottleneck should only be used to describe a situation in which one part limits another part by a factor of 20% or more.

 

As for watching a video, I just tried this on my 9400F rig, and the lows for me in a fight went from a 1% low of 92 to a 1% low of 90. Averages went from 118 to 115. So no, not a huge difference. But also, why would you watch a video while gaming?

 

That's a very skewed definition of a bottleneck, but I guess to each his own. For me it was a bottleneck that made me upgrade from an 8600k to a 9900k (mostly Path of Exile and Monster Hunter: World; The Division is even worse). I just don't want OP to make the same mistakes I did; a new CPU that hits 100% load and chokes out of the box is no fun.

 

Also, some people do watch videos while gaming, not just me: Twitch, YouTube, Netflix, etc.

 


9900k 1.36v 5.2 83C 185w 1.24v 4.9 60C 135w 1.05v 4.5 90w 50C (doing some testing for hot days) all-2avx cinebench/blender temps. avx voltages in prime. ll D15 ll Z390 taichi ult 1.60 bios fixed LLC voltage gaps ll gskill 2x8gb cl16 ddr4000 bdie 1.42v ll 4x samsung 970 evo 256gb (asus hyper pcie m2 raid card) ll 2x samsung 860 evo 500gb raid 0 (<-- i should've gone with 1tb nvme, mistake of the build) ll EVGA 2080 ti XC (duo fan skinny) 1995//7600 power limited 79C max, stock voltage (really bad ocer) ll Corsair graphite 780T ll EVGA G2 1300w ll Windows 10 Pro ll NEC PA272w (movie, work mon) 2k60 14bit lut ll Predator X27 4k144 hdr (rgb98)


double post edit



Posted · Original Poster (OP)
20 minutes ago, jerubedo said:

Yes, that's true: the 2700X is better for Premiere Pro by roughly 25%. As far as mainstream processors go, the order they fall in for Premiere Pro is:

 

1) i9 9900K

2) R7 2700X

3) i7 9700K

4) R7 2700

5) R5 2600X

6) i7 8700K

7) R5 2600

8) i5 9600K

9) i7 8700

10) i5 8600K

11) i5 9400F

12) i5 8400

 

For gaming:

 

1) i9 9900K

2) i7 9700K

3) i7 8700K

4) i5 9600K

5) i5 8600K

6) i7 8700 

7) i5 9400F

8) R7 2700X

9) i5 8400

10) R7 2700

11) R5 2600X

12) R5 2600

 

So, now you just have to figure out what the split in your load is going to be and choose which one will best fit your overall needs. Will it be an 80/20 split of editing to gaming? In that case I'd say the 2700X (the 9900K is just too expensive for your budget, but it would be the best of both worlds). Will it be more of a 50/50 split? Then maybe the 9700K. A 30/70 split? I'd say the 9600K then, in order to fit in a better GPU.

Then why did they just say Intel was better? If this list is true, it would be between the 2700X and the 9700K

30 minutes ago, xg32 said:

That's a very skewed definition of a bottleneck, but I guess to each his own. For me it was a bottleneck that made me upgrade from an 8600k to a 9900k (mostly Path of Exile and Monster Hunter: World; The Division is even worse). I just don't want OP to make the same mistakes I did; a new CPU that hits 100% load and chokes out of the box is no fun.

 

Also, some people do watch videos while gaming, not just me: Twitch, YouTube, Netflix, etc.

The CPU usage is not a constant 100%. There are spikes to 100%, yes, but mostly it stays in the 80s and sometimes the 90s. But so does the 9700K; as seen in that video, its highest spike was 94%. These all just point to a pretty poorly optimized engine. All of the CPUs in that video are providing a very similar experience. When there are dips, they ALL dip, and you can feel it on all of them. The 9700K just happens to dip a little less, but it still causes a hitch regardless.

 

Also, did you know that for hyperthreaded CPUs, when they are at 55% load across all threads in a gaming load (65% for a productivity workload), the cores are actually operating at 100%? The reason is resource contention. Hyperthreading provides a maximum gain of ~30% in real-world scenarios, but only in productivity loads; in gaming it's usually no better than about 10%. So when all threads are at 55%, that's the additional 10% performance per core (2 threads per core, so 2 x 5 = 10). When a hyperthreaded CPU shows 100% usage across all cores, it's not actually performing at 100% (if it were, that would mean the performance is doubled, which it never is). In that scenario, the CPU is working at 55% efficiency (for gaming loads) or 65% efficiency (for productivity loads). So there's no difference between a 9900K at 55% overall load in gaming and one at 100% overall load in gaming. In productivity there's no difference between a 65% overall load and a 100% overall load.

 

So, if your new 9900K is showing 55% across all threads, it's actually at 100% on all cores. There will always be exceptions to that guideline: some games may benefit 15% from hyperthreading, in which case the saturation point would be when all threads are at ~58%.

 

Why does the load show up this way? Because Windows has no way to know at what efficiency a multi-threaded load is running on the CPU, it treats each thread as a core and measures the load placed on each thread. Therefore any software measuring CPU usage (which gets its data from Windows) will report load in that manner.
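The thread-load arithmetic in this post can be sketched in a few lines. This models the post's stated assumptions (a fixed hyperthreading gain per workload type), not a measured property of any CPU:

```python
# Model of the claim above: with 2 threads per core and a given
# hyperthreading gain, the reported per-thread load at which the
# physical cores are actually saturated is (1 + gain) / 2.
# The gain figures are the post's assumptions, not measurements.

def saturation_threshold(ht_gain: float) -> float:
    """Reported per-thread load (0.0-1.0) at which cores hit 100%."""
    return (1.0 + ht_gain) / 2.0

# Gaming (~10% HT gain): cores saturate when threads read ~55%
print(saturation_threshold(0.10))  # 0.55
# Productivity (~30% HT gain): ~65%
print(saturation_threshold(0.30))  # 0.65
# The ~15%-gain exception mentioned above lands at ~58%
print(saturation_threshold(0.15))  # 0.575
```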

3 minutes ago, Puffing said:

Then why did they just say intel was better?

For which load? It is true that Intel is better overall. After all, the 9900K is at the top of both lists. For gaming it's obvious that Intel is the better choice. But for Premiere Pro, the 2700X is the second best as seen here:

 

[attached image: Premiere Pro CPU benchmark chart]

Posted · Original Poster (OP)
Just now, jerubedo said:

For which load? It is true that Intel is better overall. After all, the 9900K is at the top of both lists. For gaming it's obvious that Intel is the better choice. But for Premiere Pro, the 2700X is the second best as seen here:

 

[attached image: Premiere Pro CPU benchmark chart]

Hmm ok well thanks

4 hours ago, Puffing said:

My confusion starts with the benchmarks made by Puget Systems and Gamers Nexus... Puget says that the Ryzen 2700X has an edge but GN says that the Intel i7 processors have a huge advantage... this is where I need input.

I assume you're talking about this article:

 

https://www.gamersnexus.net/guides/3310-adobe-premiere-benchmarks-rendering-8700k-gpu-vs-ryzen

 

If so, it's because he was using Quick Sync. But there are issues with doing that, as pointed out by Puget:

 

"The second issue with hardware accelerated encoding is that it is not the same quality as using the normal "Software only" mode. In fact, in our testing we found that the target bitrate was not always being met when using hardware acceleration. And even when it was, the quality was still lower than "Software only" mode. The odd thing was that when the bitrate was being matched, the time to export was identical in both modes - so we only saw a performance gain when the final exported file was at a lower actual bitrate."

 

Source: https://www.pugetsystems.com/labs/articles/Video-H-264-Hardware-Acceleration-in-Adobe-Media-Encoder---Good-or-Bad-1211/

 

Puget is really the go-to for information on Adobe products.

18 minutes ago, jerubedo said:

I assume you're talking about this article:

 

https://www.gamersnexus.net/guides/3310-adobe-premiere-benchmarks-rendering-8700k-gpu-vs-ryzen

 

If so, it's because he was using Quick Sync. But there are issues with doing that, as pointed out by Puget:

 

"The second issue with hardware accelerated encoding is that it is not the same quality as using the normal "Software only" mode. In fact, in our testing we found that the target bitrate was not always being met when using hardware acceleration. And even when it was, the quality was still lower than "Software only" mode. The odd thing was that when the bitrate was being matched, the time to export was identical in both modes - so we only saw a performance gain when the final exported file was at a lower actual bitrate."

 

Source: https://www.pugetsystems.com/labs/articles/Video-H-264-Hardware-Acceleration-in-Adobe-Media-Encoder---Good-or-Bad-1211/

 

Puget is really the go-to for information on Adobe products.

But in Photoshop, Intel wins...



1 minute ago, Firewrath9 said:

But in Photoshop, Intel wins...

Yep, that's 100% true. I don't think he listed that as a use case, though; he only said Premiere Pro. @Puffing, will you also be using other Adobe products? That will affect the recommendations.

1 hour ago, jerubedo said:

The CPU usage is not a constant 100%. There are spikes to 100%, yes, but mostly it stays in the 80s and sometimes the 90s. But so does the 9700K; as seen in that video, its highest spike was 94%. These all just point to a pretty poorly optimized engine. All of the CPUs in that video are providing a very similar experience. When there are dips, they ALL dip, and you can feel it on all of them. The 9700K just happens to dip a little less, but it still causes a hitch regardless.

 

Also, did you know that for hyperthreaded CPUs, when they are at 55% load across all threads in a gaming load (65% for a productivity workload), the cores are actually operating at 100%? The reason is resource contention. Hyperthreading provides a maximum gain of ~30% in real-world scenarios, but only in productivity loads; in gaming it's usually no better than about 10%. So when all threads are at 55%, that's the additional 10% performance per core (2 threads per core, so 2 x 5 = 10). When a hyperthreaded CPU shows 100% usage across all cores, it's not actually performing at 100% (if it were, that would mean the performance is doubled, which it never is). In that scenario, the CPU is working at 55% efficiency (for gaming loads) or 65% efficiency (for productivity loads). So there's no difference between a 9900K at 55% overall load in gaming and one at 100% overall load in gaming. In productivity there's no difference between a 65% overall load and a 100% overall load.

 

So, if your new 9900K is showing 55% across all threads, it's actually at 100% on all cores. There will always be exceptions to that guideline: some games may benefit 15% from hyperthreading, in which case the saturation point would be when all threads are at ~58%.

 

Why does the load show up this way? Because Windows has no way to know at what efficiency a multi-threaded load is running on the CPU, it treats each thread as a core and measures the load placed on each thread. Therefore any software measuring CPU usage (which gets its data from Windows) will report load in that manner.

Yes, I do know that the 9900k rarely shows full load when it actually is maxed on a few cores, which is why I said that even a 9900k showed bottlenecking in Division 2 (multiple techtubers observed this when testing the 9900k with a 2080 Ti in multiple games). If it shows 100% at all, it's a bottleneck on dips; it doesn't have to be all the time. But it just seems like you are not willing to accept the fact that 6c/6t bottlenecks some games, and you're going off on tangents, even having your own unique definition of what bottlenecking is. Let's just agree to disagree.



11 minutes ago, xg32 said:

Yes, I do know that the 9900k rarely shows full load, but it just seems like you are not willing to accept the fact that 6/6 bottlenecks some games, and you're going off on tangents. Let's just agree to disagree.

Nothing is a tangent; it's all relevant. The 9600K isn't a bottleneck, nor is a 2600X or any of the current CPUs. Are there better choices? Sure. The 9700K and 9900K are both better choices, but for a LOT more money. You're happier seeing the 9900K at a lower "usage", but the usage isn't nearly as low as you think, as I've explained: when it shows 55%, you're actually close to 100%, and that's because of the hyperthreading. It's a false sense of security. In the video the 9600K performed about the same, period. When there were dips, they both dipped. 0.1% lows were exactly the same, so the worst of the dips were seen on both. The 9700K only managed to edge out on the 1% lows, which makes it better from a raw performance standpoint, sure, but for a ton more money. An 11% gain in a single metric in a handful of games for a 35% price increase.

 

Also, just from personal experience, I don't see much of a difference between my 9400F rig and my 9700K rig on a 1080 Ti.
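The cost/benefit arithmetic in that argument is easy to check. The FPS figures are the 1% lows quoted earlier in the thread; the prices are illustrative placeholders I've assumed for the sketch, not figures from the thread:

```python
# Rough value check for the 9600K-vs-9700K argument above.
# 1% lows (72 vs 81 FPS) come from the thread; the prices are
# assumed placeholders for illustration only.
low_9600k, low_9700k = 72.0, 81.0
price_9600k, price_9700k = 250.0, 340.0  # assumed street prices

perf_gain = (low_9700k - low_9600k) / low_9700k             # ~11%
price_increase = (price_9700k - price_9600k) / price_9600k  # 36%

print(f"{perf_gain:.1%} better 1% lows for {price_increase:.0%} more money")
```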

Just now, jerubedo said:

Nothing is a tangent; it's all relevant. The 9600K isn't a bottleneck, nor is a 2600X or any of the current CPUs. Are there better choices? Sure. The 9700K and 9900K are both better choices, but for a LOT more money. You're happier seeing the 9900K at a lower "usage", but the usage isn't nearly as low as you think, as I've explained: when it shows 55%, you're actually close to 100%, and that's because of the hyperthreading. It's a false sense of security. In the video the 9600K performed about the same, period. When there were dips, they both dipped. 0.1% lows were exactly the same, so the worst of the dips were seen on both. The 9700K only managed to edge out on the 1% lows, which makes it better from a raw performance standpoint, sure, but for a ton more money. An 11% gain in a single metric in a handful of games for a 35% price increase.

You are literally changing the definition of bottlenecking and then crossing it into a price-to-performance argument. The 9600K is a bottleneck in some games, and so is the 2600X; that's widely accepted by now.



8 minutes ago, xg32 said:

You are literally changing the definition of bottlenecking and then crossing it into a price-to-performance argument. The 9600K is a bottleneck in some games, and so is the 2600X; that's widely accepted by now.

Your definition of a bottleneck means anything that limits the performance of another part. That means there's ALWAYS a bottleneck. So sure, you buy a 9900K because it's the best you can do; now the bottleneck is on the GPU. Now what? You need a new GPU because it's bottlenecked? OK, now you get the 2080 Ti, and the 9900K is the bottleneck again. Then you move to some product that doesn't exist yet, and the GPU is the bottleneck. It goes on and on and on.

 

All I know is that the video I posted shows VERY similar performance between the 9600K and the 9700K (and GPU usage was constantly in the high 90s, AKA NOT A BOTTLENECK). And from my own experience, my 9400F system performs very similarly to my 9700K system in The Division 2. My 2700X system is worse than both, but still not a "bottleneck." On my 4770K system I DO see a bottleneck for sure: that one limits my GPU to about 75% usage most of the time.


By definition, a bottleneck means that performance is being limited by some part of the system; a CPU and a GPU can each bottleneck the same system in different parts of the same game. It doesn't matter if it is only a 3% difference: if a system's performance is being limited by a component at a given time, then it is technically a bottleneck.

Not that I disagree with your assessments when factoring in the cost vs. performance of each part, but that is really a different discussion than the simple yes-or-no question of whether something is a bottleneck.

Also, lower-threaded CPUs like the 9600K can have detrimental effects and cause frame time spikes in certain games under certain conditions when compared to higher-thread-count CPUs, and they can make for a much less pleasant experience.

45 minutes ago, nadnerbaedlig said:

By definition, a bottleneck means that performance is being limited by some part of the system; a CPU and a GPU can each bottleneck the same system in different parts of the same game. It doesn't matter if it is only a 3% difference: if a system's performance is being limited by a component at a given time, then it is technically a bottleneck.

I'm just going by what Jay and Steve have said in their various videos. They have both said at one point or another that a computing bottleneck is when a part is limited by another part by 15-20% or more. Otherwise you can just throw around the word bottleneck at anything and scare everyone with that buzzword.
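The two definitions being argued here can be stated precisely. A minimal sketch (the function names and the 20% default are mine, following the threshold attributed above to Jay and Steve):

```python
# Two competing definitions of "bottleneck" from this exchange.

def is_bottleneck_literal(limited_fps: float, potential_fps: float) -> bool:
    """Literal definition: any limitation at all counts."""
    return limited_fps < potential_fps

def is_bottleneck_practical(limited_fps: float, potential_fps: float,
                            threshold: float = 0.20) -> bool:
    """Threshold definition: only a 15-20%+ loss counts (20% used here)."""
    return (potential_fps - limited_fps) / potential_fps >= threshold

# 9600K vs 9700K 1% lows from earlier in the thread: 72 vs 81 FPS
print(is_bottleneck_literal(72, 81))    # True  (~11% slower)
print(is_bottleneck_practical(72, 81))  # False (under the 20% bar)
```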

 

45 minutes ago, nadnerbaedlig said:

Also, lower-threaded CPUs like the 9600K can have detrimental effects and cause frame time spikes in certain games under certain conditions when compared to higher-thread-count CPUs, and they can make for a much less pleasant experience.

That's true of 4c/4t parts, and even then it's only in some games; across the board they are still largely fine. We have not seen any "bad" lows with 6c/6t. We eventually will, but there's no telling how far in the future that will be. It took nearly 10 years for 4c/4t to start having issues, and we only just made the shift to 6- and 8-core parts.

 

Here's some data on averages and lows from Tom's:

 

https://www.tomshardware.com/reviews/intel-core-i5-9600k-coffee-lake-cpu,5922-4.html

 

Notice how the 9600K pretty much matches the 9700K and 9900K in 1% lows in most games. The one exception was Hitman 2, where it fell behind by just under 10% on 1% lows, but again, in both cases the drop from 135 FPS to 83 or 75 FPS would be felt equally on the 9900K and the 9600K.


You are talking about the severity of a bottleneck; the term bottleneck describes the underlying limiting behavior. I am simply pointing you to the definition of the word. I wouldn't say that a 3% bottleneck is something to worry about, but it fits the definition nonetheless, just the same as a 15% bottleneck, which obviously would be a greater cause for concern.

Showing a couple of games where a 6-core CPU doesn't substantially limit performance does not refute my conditional statement. I would say that for most people and most games a 9600K would be fine, but that is a subjective assessment which an individual would need to make based on their own use case. It does not mean that in certain games you would not have problems.

https://www.gamersnexus.net/hwreviews/3407-intel-i5-9600k-cpu-review-vs-2700-2600-8700k

Please direct your attention to the data on Far Cry 5 for one example of what I am talking about. I have noticed similar behavior in other games in my own testing of a 9600K vs. higher-thread-count CPUs.

The linked article is also a good illustration of why 1% lows, as you quoted, do not tell the whole story of end-user performance. Frametime plots are generally more reliable, but the way they are presented in the Tom's articles is too condensed to assess easily.
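For reference, here is how a "1% low" is typically derived from the same frametime data those plots show. This follows a common convention (average FPS over the slowest 1% of frames), not necessarily the exact pipeline any one outlet uses:

```python
# Compute an average-FPS figure and a 1% low from frametimes (ms).
# Convention assumed: 1% low = average FPS over the slowest 1% of
# frames; some outlets use the 99th-percentile frametime instead.

def one_percent_low(frametimes_ms: list) -> float:
    """Average FPS over the slowest 1% of frames."""
    worst_first = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst_first) // 100)
    slowest = worst_first[:n]
    return 1000.0 / (sum(slowest) / len(slowest))

# 99 smooth ~7 ms frames plus one 50 ms hitch: the average barely
# notices the spike, but the 1% low exposes it.
frames = [7.0] * 99 + [50.0]
avg_fps = 1000.0 / (sum(frames) / len(frames))
print(round(avg_fps))                  # 135
print(round(one_percent_low(frames)))  # 20
```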

27 minutes ago, nadnerbaedlig said:

Please direct your attention to the data on Far Cry 5 for one example of what I am talking about. I have noticed similar behavior in other games in my own testing of a 9600K vs. higher-thread-count CPUs.

That data has been discussed to death. Steve himself said in his video review that his Far Cry results needed further investigation. Other tech sites have proven that data incorrect, as seen here:

 

[attached images: Far Cry 5 1920x1080 DX11 Ultra FPS benchmark charts]

 

As you can see the 1% lows here are far higher than Steve's data using the same GPU. 

 

To further prove that his data was flawed for that run, here's an FCAT chart of the 9700K/1080 Ti for the Far Cry 5 benchmark:

 

[attached image: FCAT frametime chart, 9700K + 1080 Ti, Far Cry 5]

 

vs. his for the 9700K:

 

[attached image: GN 9700K Far Cry 5 frametime chart]

 

He did the same benchmark run as me. My chart does NOT have those spikes on the same 9700K.

 

Also note this chart:

 

[attached image: GN 9600K Far Cry 5 frametime chart, all runs]

 

The 9600K only spiked on 2 of the 4 runs. That's pretty inconsistent and likely points to a problem on the test bench.

 

Here's where Steve says that the data needs more runs to figure out what's going on and to see if it's really a pattern:

 

 

27 minutes ago, nadnerbaedlig said:

You are talking about the severity of a bottleneck; the term bottleneck describes the underlying limiting behavior. I am simply pointing you to the definition of the word. I wouldn't say that a 3% bottleneck is something to worry about, but it fits the definition nonetheless, just the same as a 15% bottleneck, which obviously would be a greater cause for concern.


Yes, you're just focusing on the general definition of the word bottleneck, which has applications in various real-world domains: science, human productivity, traffic, etc. In computing it's generally accepted that you'd only call something a "bottleneck" past a certain level of bottlenecking. What that number should be is highly subjective, but I'd say 15% or 20% is fair.

3 hours ago, Puffing said:

Then why did they just say Intel was better? If this list is true, it would be between the 2700X and the 9700K

It doesn't matter. When all is said and done, the two CPUs will provide almost identical real-world performance in the mix you have described.



 


Again, I said that this happens in certain situations in certain games, which is supported by the linked article. It also stipulates that 4 test passes were performed, not 1 as you say.

Showing data which does not show the same performance loss does not "prove data incorrect"; it simply shows more data gathered under different conditions.

It seems to me you are arguing just to argue, as no amount of data showing situations where game performance "pretty much matches" proves my aforementioned statement incorrect.

For example, frametime performance can be detrimentally affected on a lower-threaded CPU when background applications are running in the OS. This is a plausible scenario in which performance is worse.

Logically, some games use more than 6 threads; therefore a processor with only 6 threads will perform worse than one able to utilize more than that.

You can argue about the degree to which these things matter and whether that performance difference is worth the cost difference, but it does not take away from the fact that performance is lower.

 

 

9 minutes ago, nadnerbaedlig said:

Again, I said that this happens in certain situations in certain games, which is supported by the linked article. It also stipulates that 4 test passes were performed, not 1 as you say.

Showing data which does not show the same performance loss does not "prove data incorrect"; it simply shows more data gathered under different conditions.

It seems to me you are arguing just to argue, as no amount of data showing situations where game performance "pretty much matches" proves my aforementioned statement incorrect.

For example, frametime performance can be detrimentally affected on a lower-threaded CPU when background applications are running in the OS. This is a plausible scenario in which performance is worse.

 

Look again at all the data I put in the post above: it's a lot. I also pointed out on the FCAT chart that only 2 of the 4 runs exhibited the spiking, which points to inconsistency and a problem on the test bench. I also showed another Far Cry 5 FCAT chart on the 9700K vs. his chart on the 9700K, which, again, does not have those spikes; that first chart was built from 5 runs, by the way. Finally, I posted the video version of that review, in which Steve says that the data is inconclusive and needs further testing to establish whether or not it is a pattern. The other sites' lows also show that something was wrong, because they all use the same scene: the in-game benchmark.

 

9 minutes ago, nadnerbaedlig said:

Logically, some games use more than 6 threads; therefore a processor with only 6 threads will perform worse than one able to utilize more than that.

You can argue about the degree to which these things matter and whether that performance difference is worth the cost difference, but it does not take away from the fact that performance is lower.

Like Assassin's Creed Origins, which uses 12 threads, and yet:

 

[attached image: Assassin's Creed Origins CPU benchmark chart]

 

Despite the 6 thread deficit, the 8600K ties in both average and 1% lows.

