Clanscorpia

AMD New Horizon


8 hours ago, porina said:

AMD want to give the impression that Ryzen 3.4 GHz = 6900k stock, and will probably undercut it by some not insignificant amount on launch and trumpet that. Of course, the full statement should include the phrase "in this Blender test." This then swings around to another question, how does the AMD Blender test actually behave? I tried to answer that with limited testing. I put a longer post in another thread but the summary would be:

 

  • RAM speed doesn't seem to make a significant difference (<1%; 6700k at 4.2 GHz, 2666 vs 2133 dual-channel dual-rank. I didn't try crippling the RAM further)
  • 6MB to 8MB L3 cache doesn't seem to make a difference (<1%; 6600k vs 6700k with HT off)
  • HT gives a significant boost, about 52% faster (or, takes about 66% of the time compared to HT off)
  • Haswell to Broadwell, and Broadwell to Skylake, each gave about a 3% IPC boost. Caution: limited test samples; I'll see if I can use the other linked sheet to verify this.
  • Pentium G4400 IPC is 4% lower than the 6600k. The G4400 is Skylake without AVX extensions, suggesting those are not used, as I'd expect a much bigger difference if they were.

So although this Blender test does use all available threads, it doesn't seem to be a particularly heavy stress.
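
A quick arithmetic check on the HT numbers quoted above, since "52% faster" and "takes 66% of the time" are the same claim stated two ways. A minimal Python sketch; the render times are made up for illustration, only the ratio matters:

    ht_off_s = 300.0   # hypothetical render time with HT off, in seconds
    ht_on_s = 197.4    # hypothetical render time with HT on, in seconds

    speedup = ht_off_s / ht_on_s - 1.0   # ~0.52, i.e. about 52% faster
    time_ratio = ht_on_s / ht_off_s      # ~0.66, i.e. about 66% of the HT-off time

    print(f"{speedup:.0%} faster with HT, {time_ratio:.0%} of the HT-off time")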

I actually ran a slew of tests myself, with HT and various RAM speeds. Blender was one of the tests I used.

On 12/11/2016 at 0:27 AM, MageTank said:

The gauntlet is complete! I'll let you guys extrapolate the results, as I am too tired to do so myself. I'll go over whatever you guys conclude in the morning. Without further ado, the results:

 

Stock i7 6700k HT Off 2133MHz RAM: http://imgur.com/a/YeY9f  http://valid.x86.fr/3a9v8e

Stock i7 6700k HT Off 3200MHz RAM: http://imgur.com/a/O52eP  http://valid.x86.fr/fh77ve

Stock i7 6700k HT On 2133MHz RAM: http://imgur.com/a/jSnIm  http://valid.x86.fr/li4nvu

Stock i7 6700k HT On 3200MHz RAM: http://imgur.com/a/5jIuH  http://valid.x86.fr/kb5895

 

If there are any additional benchmarks you guys need me to do, let me know. I can't offer much experience when it comes to CPUs themselves, but anything memory-related seems to come easily to me, so feel free to exploit that if applicable.

From 2133 to 3200, RAM made no difference in the Blender tests. That's 2133 at C15-15-15-35-2 versus 3200 at C14-14-14-28-2: going from 34 GB/s and 60 ns to 46 GB/s and 46 ns. That's a drastic difference in RAM performance for it to make zero difference in Blender. I went back and tested my 3600 C14-14-14-28-2 profile with tight tertiary timings, and the result was still the same.

 

In my Blender test, HT gave me a 49.5% boost compared to running it without HT. I did use an entirely different file (the Blenchmark106 SceneV3 file).


My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

7 hours ago, Astronautical said:

Why did they crop the Intel screen so you can't see the programs running during the benchmark? Not to be against AMD, but that's suspicious to me.

You can run the benchmarks yourself lol... What would they have to gain from obfuscating data?
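
For what it's worth, timing a headless Blender render yourself takes only a few lines. A minimal Python sketch, assuming you have some scene file (here called "scene.blend", a placeholder name) and the blender binary on your PATH:

    import subprocess, time

    start = time.perf_counter()
    subprocess.run(
        ["blender", "-b", "scene.blend", "-f", "1"],  # -b = run without the GUI, -f 1 = render frame 1
        check=True,
    )
    print(f"Render took {time.perf_counter() - start:.1f} s")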

6 hours ago, porina said:

My investigation was more testing the test, rather than testing hardware. By working out how the benchmark behaves, we can start to understand where the demo result applies, and where it might not. Because Ryzen is claimed to perform well in this task, doesn't mean it will do so for all types of tasks.

They used 2 different benchmarks, one with integers and one with floating point. The CPU was also intentionally crippled. The only thing that could possibly be skewed is that this is just incredibly binned silicon for the power draw...

56 minutes ago, Brian McKee said:

They used 2 different benchmarks, one with integers and one with floating point. The CPU was also intentionally crippled. The only thing that could possibly be skewed is that this is just incredibly binned silicon for the power draw...

If anything it looks like they were trying to skew things in favor of the intel chip (at least in the benches shown).

 

Really want to see how well these things clock. I don't really see another Bulldozer happening, so now we're waiting to find out whether we have a merely competitive CPU or a genuine monster on our hands.

 

Even more interested in how a lower core-count model performs.  The 6700k comparison pretty much just showed us the power of moar coars for that workload.


SFF-ish:  Ryzen 5 1600X, Asrock AB350M Pro4, 16GB Corsair LPX 3200, Sapphire R9 Fury Nitro -75mV, 512gb Plextor Nvme m.2, 512gb Sandisk SATA m.2, Cryorig H7, stuffed into an Inwin 301 with rgb front panel mod.  LG27UD58.

 

Aging Workhorse:  Phenom II X6 1090T Black (4GHz #Yolo), 16GB Corsair XMS 1333, RX 470 Red Devil 4gb (Sold for $330 to Cryptominers), HD6850 1gb, Hilariously overkill Asus Crosshair V, 240gb Sandisk SSD Plus, 4TB's worth of mechanical drives, and a bunch of water/glycol.  Coming soon:  Bykski CPU block, whatever cheap Polaris 10 GPU I can get once miners start unloading them.

 

MintyFreshMedia:  Thinkserver TS130 with i3-3220, 4gb ecc ram, 120GB Toshiba/OCZ SSD booting Linux Mint XFCE, 2TB Hitachi Ultrastar.  In Progress:  3D printed drive mounts, 4 2TB ultrastars in RAID 5.

2 hours ago, porina said:

My investigation was more testing the test, rather than testing hardware. By working out how the benchmark behaves, we can start to understand where the demo result applies, and where it might not. Because Ryzen is claimed to perform well in this task, doesn't mean it will do so for all types of tasks.

If you mean, does it use specific features of CPUs for performance boost, then I can't say with any certainty. From my testing I can only say it doesn't seem to be taking advantage of AVX. In other forums someone had compiled a modified version of Blender with AVX support and that gives a significant time reduction. I haven't looked at that in detail.

Vectorizing the workload will naturally speed it up unless you are a shit programmer. So it's not really that "shocking" that it increases performance noticeably. Heck, even AMD FX would be notably better at gaming if games were properly vectorized.

2 hours ago, MageTank said:

In my Blender test, HT gave me a 49.5% boost compared to running it without HT. I did use an entirely different file (the Blenchmark106 SceneV3 file).

Since there is some run-to-run variation, even when picking the best of several runs, I was wondering if perhaps my best HT-off result wasn't as good as it could be. However, I saw similar scaling at both RAM speeds tested: 52% in one, 53% in the other. I don't intend to analyse this further, but thought I should mention it anyway.
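
For anyone repeating this, a minimal Python sketch of how the scaling figure can be computed from best-of-N runs (the run times below are placeholders, not measured results):

    ht_off_runs = [305.2, 301.8, 300.1]   # seconds per run with HT off (placeholder values)
    ht_on_runs = [199.0, 197.4, 198.2]    # seconds per run with HT on (placeholder values)

    best_off = min(ht_off_runs)           # best (shortest) of several runs, to reduce run-to-run noise
    best_on = min(ht_on_runs)
    print(f"HT scaling: {best_off / best_on - 1.0:.1%} faster with HT on")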


Main rig: Asus Maximus VIII Hero, i7-6700k stock, Noctua D14, G.Skill Ripjaws V 3200 2x8GB, Gigabyte GTX 1650, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, HP LP2475W 1200p wide gamut

Gaming system: Asrock Z370 Pro4, i7-8086k stock, Noctua D15, Corsair Vengeance LPX RGB 3000 2x8GB, Gigabyte RTX 2070, Fractal Edison 550W PSU, Corsair 600C, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p 144Hz G-sync

Ryzen rig: Asrock B450 ITX, R5 3600, Noctua D9L, Corsair Vengeance LPX RGB 3000 2x4GB, EVGA GTX 970, Corsair CX450M, NZXT Manta, Crucial MX300 525GB, Acer RT280K

VR rig: Asus Z170I Pro Gaming, i7-6600k stock, Silverstone TD03-E, Kingston Hyper-X 2666 2x8GB, Zotac 1070 FE, Corsair CX450M, Silverstone SG13, Samsung PM951 256GB, HTC Vive

Gaming laptop: Asus FX503VD, i5-7300HQ, 2x8GB DDR4, GTX 1050, Sandisk 256GB SSD

Total CPU heating: i7-7800X, i7-5930k, i7-5820k, 2x i7-6700k, i7-6700T, i5-6600k, i7-5775C, i5-5675C, i5-4570S, i3-8350k, i3-6100, i3-4360, i3-4150T, E5-2683v3, 2x E5-2650, E5-2667, R7 3700X, R5 3600

2 hours ago, MageTank said:

I actually ran a slew of tests myself, with HT and various RAM speeds. Blender was one of the tests I used.

 

From 2133 to 3200, RAM made no difference in the Blender tests. That's 2133 at C15-15-15-35-2 versus 3200 at C14-14-14-28-2: going from 34 GB/s and 60 ns to 46 GB/s and 46 ns. That's a drastic difference in RAM performance for it to make zero difference in Blender. I went back and tested my 3600 C14-14-14-28-2 profile with tight tertiary timings, and the result was still the same.

 

In my Blender test, HT gave me a 49.5% boost compared to running it without HT. I did use an entirely different file (the Blenchmark106 SceneV3 file).

 

Any cache testing?

 

That's a huge difference in the latency and availability of bandwidth between your 2133 and 3200 configurations.

 

I'm also interested in what you would have noticed between single-channel and dual. Seems like single-channel would definitely lead to some wasted CPU cycles.

 

26 minutes ago, Phate.exe said:

If anything it looks like they were trying to skew things in favor of the intel chip (at least in the benches shown).

 

 

What makes you think this?

 


CPU: i9 7900X  |  Motherboard: Asus ROG Rampage VI Apex  |  GPUs: 2 x EVGA GTX 1080 Ti  |  RAM: 32GB G.Skill TridentZ DDR4 3200MHz (CL14)

Storage: 2 x Samsung 960 Evo NVMe (RAID 0)  |  4 x Samsung 850 EVO (RAID 0)  |  PSU: EVGA SuperNOVA 1600 T2

Cooling: Custom Loop  5 x EK 360mm rads  |  2 x EK D5 PWM pumps  |  EK GPU blocks | Aqua Computer Cuplex Kryos NEXT CPU block

Case: Caselabs Mercury S8 w/ Pedestal

 

CPU: Threadripper 1950x  |  Motherboard: Asus ROG Zenith Extreme  |  GPU: 3 x EVGA GTX 1080 Ti  +  2 x EVGA GTX 1080  |  RAM: 32GB G.Skill TridentZ DDR4 3200MHz (CL14)

Storage:  2 x Samsung 950 Pro NVMe (RAID 0)  |  Samsung 840 Evo SSD  | PSU: Seasonic Platinum 1200w

Cooling:  Custom Loop  1 x EK XE 480mm / 1 x EK PE 360mm  |  EK D5 PWM pump  |  EK CPU & GPU blocks 

Case: Caselabs Mercury SM8


CHILDREN, I thought we settled this. No one was skewing anything towards anyone. Pls pls just let this thread die. You can't really extrapolate any Ryzen perf info from 2 benchmark runs (1 each), and now people are picking apart Blender as if it hasn't existed for years.

Everyone just be patient and wait for the final fucking product to come out.

1 minute ago, Swatson said:

CHILDREN, I thought we settled this. No one was skewing anything towards anyone. Pls pls just let this thread die. You can't really extrapolate any Ryzen perf info from 2 benchmark runs (1 each), and now people are picking apart Blender as if it hasn't existed for years.

Everyone just be patient and wait for the final fucking product to come out.

 

Very effective post.  I completely understand why you think everyone else is a child.



8 minutes ago, done12many2 said:

 

What makes you think this?

 

The fact that Turbo Boost was enabled on the Intel chip while the AMD chip was restricted to base clock only.

Just now, Brian McKee said:

The fact that Turbo Boost was enabled on the Intel chip while the AMD chip was restricted to base clock only.

 

So you consider the fact that Turbo was enabled on the 6900k to mean that it had a clock speed advantage over the Ryzen at 3.4 GHz?



1 minute ago, done12many2 said:

 

So you consider the fact that Turbo was enabled on the 6900k to mean that it had a clock speed advantage over the Ryzen at 3.4 GHz?

Um... yes? Because 3.4 GHz is the BASE clock of Ryzen, meaning it goes higher. At least I hope it would.

1 minute ago, Brian McKee said:

Um... yes? Because 3.4 GHz is the BASE clock of Ryzen, meaning it goes higher. At least I hope it would.

 

You do realize that with Intel Turbo, clock speed scales down as core utilization increases, right?  So in a test like Blender or Handbrake, a 6900k with Turbo enabled drops from 3.7 GHz to 3.4 GHz when all cores are loaded.  It was definitely a smart move for AMD to repeatedly mention (7 times during the event) the fact that the 6900k was running with Turbo "up to 3.7", despite the fact that it wasn't actually running at that during their test comparison.

 

Regardless, they've piqued my interest in the Ryzen chip. The Extended Frequency Range (XFR) stuff sounds pretty cool if it's not gimped.



2 minutes ago, done12many2 said:

 

You do realize that with Intel Turbo, clock speed scales down as core utilization increases, right?  So in a test like Blender or Handbrake, a 6900k with Turbo enabled drops from 3.7 GHz to 3.4 GHz when all cores are loaded.  It was definitely a smart move for AMD to repeatedly mention (7 times during the event) the fact that the 6900k was running with Turbo "up to 3.7", despite the fact that it wasn't actually running at that during their test comparison.

 

Regardless, they've piqued my interest in the Ryzen chip. The Extended Frequency Range (XFR) stuff sounds pretty cool if it's not gimped.

Ok, yeah, then let's run the tests at the 6900k's base clock of 3.2 GHz. I don't see your point.

1 minute ago, Brian McKee said:

Ok, yeah, then let's run the tests at the 6900k's base clock of 3.2 GHz. I don't see your point.

 

During the test, they were running at the same clock speed. I understand that you didn't realize that at first, but there's no need to now say that they should have run the Intel chip slower than the AMD chip during the testing. That just doesn't make sense.



Just now, done12many2 said:

 

During the test, they were running the same clock speed.  I understand that you didn't realize that at first, but there's no need to now say that they should have run the Intel slower than the AMD during the testing. That just doesn't make sense.

It's testing base vs base; they're not the same architecture, so same-clock testing really doesn't mean anything either. Face it, the AMD chip was crippled because it was running at base clock only vs complete out-of-the-box performance.


 

29 minutes ago, Brian McKee said:

The fact that Turbo Boost was enabled on the Intel chip while the AMD chip was restricted to base clock only.

 

2 minutes ago, Brian McKee said:

It's testing base vs base; they're not the same architecture, so same-clock testing really doesn't mean anything either.

 

I like how you can pretend that you didn't just say something and switch positions midway through a conversation. You distinctly said that the Intel chip had an advantage due to "Turbo Boost". I pointed out that it didn't, and now you're talking about something completely different. I'll leave you to this conversation.



5 minutes ago, done12many2 said:

 

 

 

I like how you can pretend that you didn't just say something and switch positions midway through a conversation. You distinctly said that the Intel chip had an advantage due to "Turbo Boost". I pointed out that it didn't, and now you're talking about something completely different. I'll leave you to this conversation.

It IS an advantage, holy shit you're thick-headed. If both products were at out-of-the-box performance, then by using your BRAIN wouldn't you assume that Ryzen would have the performance edge? If you're implying that comparing clocks between entirely different architectures means literally anything at all, then you don't know very much.

 

What we know from the benchmarks right now is that Ryzen at the very least matches the 6900k with lower TDP and without boosting up AT ALL. 

9 minutes ago, Brian McKee said:

It IS an advantage, holy shit you're thick-headed. If both products were at out-of-the-box performance, then by using your BRAIN wouldn't you assume that Ryzen would have the performance edge? If you're implying that comparing clocks between entirely different architectures means literally anything at all, then you don't know very much.

 

What we know from the benchmarks right now is that Ryzen at the very least matches the 6900k with lower TDP and without boosting up AT ALL. 

No it isn't; the more cores that come under load, the lower the boost speed, all the way down to 3.4 GHz, which is what the Ryzen sample was running at. That is, unless AMD manually tweaked the 6900K to run all cores at their full boost speed.


"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

3 minutes ago, Brian McKee said:

What we know from the benchmarks right now is that Ryzen at the very least matches the 6900k with lower TDP and without boosting up AT ALL. 

What we know from the benchmark right now is that Ryzen at 3.4 GHz matches a stock 6900k in that single Blender test, with lower rated TDP.

 

Absent better info, I'd speculate that the 6900k wouldn't need its full TDP in that test, as it doesn't seem that heavy a load compared to other stuff I run. Actually, that could be tested relatively easily. Proposal: run that Blender benchmark and observe both reported CPU power usage and actual total system power usage. For a point of comparison, something like Prime95 could be used as a heavy-load case, as from memory that runs close to rated TDP. I'm willing to bet Blender will come out much lower in power consumption. If desired, I could try that on Broadwell over the weekend (to better match the 6900k), as my only Broadwell system isn't easily accessible at the moment. I could do a Haswell or Skylake test more easily before then.
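
For the CPU-power half of that proposal, here is a rough Linux-only Python sketch using the Intel RAPL powercap counter. The sysfs path and permissions vary by system (reading it usually needs root), and total wall power would still need a meter. Start the workload in another terminal, then sample:

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 energy counter, in microjoules

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    SAMPLE_S = 30                      # sample window in seconds; counter wrap is ignored for a short window
    e0 = read_uj()
    time.sleep(SAMPLE_S)
    e1 = read_uj()
    print(f"Average package power: {(e1 - e0) / 1e6 / SAMPLE_S:.1f} W")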

 

There are videos of a separate press demo where they displayed total system power on screen. Ryzen system power was slightly lower than the 6900k system's, certainly not as large a gap as the difference in rated TDP would suggest.




 

2 hours ago, Phate.exe said:

If anything it looks like they were trying to skew things in favor of the intel chip (at least in the benches shown).

 

2 hours ago, done12many2 said:

What makes you think this?

 

1 hour ago, Brian McKee said:

The fact that Turbo Boost was enabled on the Intel chip while the AMD chip was restricted to base clock only.

 

1 hour ago, done12many2 said:

You do realize that with Intel Turbo, clock speed scales down as core utilization increases, right?  So in a test like Blender or Handbrake, a 6900k with Turbo enabled drops from 3.7 GHz to 3.4 GHz when all cores are loaded.  It was definitely a smart move for AMD to repeatedly mention (7 times during the event) the fact that the 6900k was running with Turbo "up to 3.7", despite the fact that it wasn't actually running at that during their test comparison.

 

Regardless, they've piqued my interest in the Ryzen chip. The Extended Frequency Range (XFR) stuff sounds pretty cool if it's not gimped.

 

1 hour ago, Brian McKee said:

Ok, yeah, then let's run the tests at the 6900k's base clock of 3.2 GHz. I don't see your point.

 

1 hour ago, done12many2 said:

 

During the test, they were running at the same clock speed. I understand that you didn't realize that at first, but there's no need to now say that they should have run the Intel chip slower than the AMD chip during the testing. That just doesn't make sense.

 

1 hour ago, Brian McKee said:

It's testing base vs base; they're not the same architecture, so same-clock testing really doesn't mean anything either. Face it, the AMD chip was crippled because it was running at base clock only vs complete out-of-the-box performance.

 

1 hour ago, done12many2 said:

I like how you can pretend that you didn't just say something and switch positions midway through a conversation. You distinctly said that the Intel chip had an advantage due to "Turbo Boost". I pointed out that it didn't, and now you're talking about something completely different. I'll leave you to this conversation.

 

1 hour ago, Brian McKee said:

It IS an advantage, holy shit you're thick-headed. If both products were at out-of-the-box performance, then by using your BRAIN wouldn't you assume that Ryzen would have the performance edge? If you're implying that comparing clocks between entirely different architectures means literally anything at all, then you don't know very much.

 

What we know from the benchmarks right now is that Ryzen at the very least matches the 6900k with lower TDP and without boosting up AT ALL. 

 

 

I understand why you are confused, and at this point you're all over the place, shifting focus on the topic. You said that the 6900k had a clock speed advantage due to Turbo Boost. I pointed out that they were actually running the test at the same clock speed due to the way Intel Turbo works. You then switched to a conversation about base clocks, threw out some insults, and then switched again to talking about TDP and whatever comes next as you push away from your initial statement.

 

I've been wrong plenty of times. There's just a very big difference in the way you and I handle it when it happens. No biggie.

 



2 minutes ago, Swatson said:

This is why we can't have nice things

 

The smell of that beard burning! 



9 minutes ago, Swatson said:

This is why we can't have nice things

Are you sure you aren't my subconscious? You have my initials and all.


CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 | Memory: Corsair Vengeance RGB 3200MHz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310

