
Navi/Ryzen 3000 launch Megathread

LukeSavenije
21 minutes ago, TVwazhere said:

@LukeSavenije

 

Some more videos


 

Paul's Hardware:

 

GN separately reviewed the 3900X (and will also do so for the 3700X; I'll let you decide if you want to include that or not)

 

 

 

 

After watching many of these videos, I've come to this conclusion:

 

Intel? Still the best for gaming. But (now more than ever) if you do anything else on top of gaming: if you use Photoshop or Premiere (surprisingly, Premiere works well on AMD now), if you stream, if you do CAD work (hi), if you work in a data center, if you like having money, whether to spend on other things or to save up for retirement, student loans, bills, or whatever else life makes you hand money over for, if you so much as sneeze in between gaming sessions, then my recommendation is Ryzen 3000 (especially that 3600; I agree with GN, it's the best value in a CPU by far).

Yeah, agreed.

 

Although I really wouldn't recommend Intel to anyone, even for just gaming.

 

The 9900K is barely faster than the 3700X / 3900X, and runs way hotter / uses more power. Also, looking ahead, the 3900X is only going to benefit more from its 12 cores / 24 threads as more and more applications and games start taking advantage of more cores. So if you have $500 to spend, I would still get the 3900X over the 9900K for games.

 

There's really no reason to buy Intel anymore, especially considering you don't get a boxed cooler, and you have to spend money on a Z-series motherboard and a K-series CPU for overclocking. Intel is really going to have to drop prices significantly to maintain market share at this point.


I'm really interested in the 3900X. I'm wondering if an upgrade to this from the 8700K is currently worth it for me personally. I'm going to sit it out a little and see what people are saying over the coming few weeks.

AMD Ryzen 9 5900X - Nvidia RTX 3090 FE - Corsair Vengeance Pro RGB 32GB DDR4 3200MHz - Samsung 980 Pro 250GB NVMe m.2 PCIE 4.0 - 970 Evo 1TB NVMe m.2 - T5 500GB External SSD - Asus ROG Strix B550-F Gaming (Wi-Fi 6) - Corsair H150i Pro RGB 360mm - 3 x 120mm Corsair AF120 Quiet Edition - 3 x 120mm Corsair ML120 - Corsair RM850X - Corsair Carbide 275R - Asus ROG PG279Q IPS 1440p 165hz G-Sync - Logitech G513 Linear - Logitech G502 Lightsync Wireless - Steelseries Arctic 7 Wireless


40 minutes ago, rcmaehl said:

 

 

Competitors are friends in different departments! Is the world healing itself?

I live in misery USA. my timezone is central daylight time which is either UTC -5 or -4 because the government hates everyone.

into trains? here's the model railroad thread!


21 minutes ago, maartendc said:

There's really no reason to buy Intel anymore, especially considering you don't get a boxed cooler

I really don't see this as a positive for the 3900X, considering the coolers I saw used for testing ranged from an NH-U12A and an NH-D15S (single fan) to 280mm and 360mm AIOs, and all of them were holding the CPUs roughly in the mid-80s °C. (Some were OC'd, some were not.)

 

The 3700X makes more sense, especially if you're not touching CPU frequencies or going for an all-core turbo boost.

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866mhz    GPU EVGA GTX1080Ti FTW3    Case Corsair 380T   

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


6 minutes ago, will4623 said:

Competitors are friends in different departments! Is the world healing itself?

Not the first time Nvidia has done this, actually. Since AMD (Ryzen) and AMD (Radeon) are two different branches of the company, they technically don't compete with each other, and therefore Nvidia often promotes Ryzen launches.

 

On the flip side, Intel and AMD (Radeon) have joined together on NUCs, so these collaborations are not totally "IT'S RAINING CATS AND DOGS" crazy anymore.

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866mhz    GPU EVGA GTX1080Ti FTW3    Case Corsair 380T   

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


21 minutes ago, TVwazhere said:

I really don't see this as a positive for the 3900X, considering the coolers I saw used for testing ranged from an NH-U12A and an NH-D15S (single fan) to 280mm and 360mm AIOs, and all of them were holding the CPUs roughly in the mid-80s °C. (Some were OC'd, some were not.)

 

The 3700X makes more sense, especially if you're not touching CPU frequencies or going for an all-core turbo boost.

Well, I would expect the supplied boxed cooler to at least be able to cool the CPU effectively:

 

The 3700X and 3900X look to use about as much power as a 2600 and a 2700X respectively. The AMD boxed coolers for the 2000 series were quite good (Wraith Prism, Wraith Spire, etc.). They are decent mid-range coolers: they won't get you the very best temps or the fastest OC, but they get the job done. For most people that is enough. I would expect you could effectively cool a 3900X with a Wraith Prism, just like the 2700X.

 

The reason reviewers mostly use these beefy coolers is probably that they want to keep the cooler consistent between tests and exclude variables as much as possible, so they can compare temperatures between CPUs and avoid a cooler holding back boost clocks. It doesn't mean the stock cooler isn't sufficient for normal use.


On 7/7/2019 at 10:08 AM, S w a t s o n said:

You mean an entire refresh where it was possible, but not always. And I said almost always. And when you couldn't, it was like 100MHz, 1 core vs all core. Wow, you got me???
 

Ryzen 1 was like 50/50 and was definitely capped much harder than Zen2 is here. Zen2 has the advantage. To look at Zen1/+ and say "oh but look, your all-core is 100MHz lower than single-core XFR" on a chip that only does 4GHz in the first place isn't really shutting down my argument.

I would see my R7 2700X boost to 4.4 GHz on a single core, but you couldn't get an overclock above 4.2 GHz. The CPUs are hitting the same exponential voltage wall that they did in every other generation; the only difference is that it's at 4.3 GHz instead of 4.2 GHz.


1 hour ago, maartendc said:

 

 

There's really no reason to buy Intel anymore, especially considering you don't get a boxed cooler, and you have to spend money on a Z-series motherboard and a K-series CPU for overclocking. Intel is really going to have to drop prices significantly to maintain market share at this point.

No one is taking that into consideration. Look at the i5-9600K: there is literally no point in buying that CPU anymore once you consider that you need to buy a cooler, and the fact that the majority of people on that budget will be running a mid-tier GPU. There is no point in buying a CPU that is more expensive, trades blows with the 3600, and, on top of that, sucks in multi-core workloads. If the CPU budget is $300, there is no justifiable reason, IMO, to go with anything but AMD. You get the same performance, it's cheaper, and you can go all the way up to 16 cores / 32 threads on ONE motherboard. That's just insane.

 

 


18 minutes ago, Brooksie359 said:

I would see my R7 2700X boost to 4.4 GHz on a single core, but you couldn't get an overclock above 4.2 GHz. The CPUs are hitting the same exponential voltage wall that they did in every other generation; the only difference is that it's at 4.3 GHz instead of 4.2 GHz.

I disagree that it's the same voltage/frequency wall. We can see the cores are not limited to 4.3 GHz; the 3950X is listed at 4.7 GHz, and XFR boost clocks are being hit when people actually turn on PBO.

 

Rather, we see that the socket simply cannot provide any more power, as it's hitting either a current or a power limit. The cores simply can't draw the energy needed to all switch above 4.3 GHz.

 

This could be because the heat output above 4.3 GHz all-core is simply too much in such a small package. 16 cores on Threadripper could be 4 chiplets with 4 cores enabled each, allowing better heat density over each chiplet and over the package as a whole. Or maybe AMD just put a 140W limit in place to troll $700 mobo buyers, because overclocking does nothing right now on the 3900X.
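To put rough numbers on the heat-density point, here's a back-of-envelope sketch (Python). The ~74 mm² CCD area and the ~20W IOD share are assumptions for illustration, not figures from this thread:

```python
# Rough back-of-envelope on heat density, purely illustrative.
# Assumptions (not from the thread): a Zen 2 CCD is roughly 74 mm²,
# and ~20 W of the package limit goes to the IOD.
PACKAGE_LIMIT_W = 140   # the ~140 W limit mentioned above
IOD_W = 20              # assumed IOD share
CCD_AREA_MM2 = 74       # assumed chiplet area

def heat_density(ccd_count):
    """W/mm² per chiplet if the core power is spread over ccd_count CCDs."""
    core_power = PACKAGE_LIMIT_W - IOD_W
    return (core_power / ccd_count) / CCD_AREA_MM2

print(f"2 CCDs (3900X-style): {heat_density(2):.2f} W/mm² per chiplet")
print(f"4 CCDs (hypothetical 16-core TR layout): {heat_density(4):.2f} W/mm² per chiplet")
```

Spreading the same core power over twice as many chiplets roughly halves the heat density per chiplet, which is the Threadripper argument above.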

MOAR COARS: 5GHz "Confirmed" Black Edition™ The Build
AMD 5950X 4.7/4.6GHz All Core Dynamic OC + 1900MHz FCLK | 5GHz+ PBO | ASUS X570 Dark Hero | 32 GB 3800MHz 14-15-15-30-48-1T GDM 8GBx4 |  PowerColor AMD Radeon 6900 XT Liquid Devil @ 2700MHz Core + 2130MHz Mem | 2x 480mm Rad | 8x Blacknoise Noiseblocker NB-eLoop B12-PS Black Edition 120mm PWM | Thermaltake Core P5 TG Ti + Additional 3D Printed Rad Mount

 


1 hour ago, porina said:

Statement from an AMD rep, taken from the overclockers.com review:

I wonder if EPYC will have lower-power SKUs with the same half-rate write and higher-power ones with full-rate write. I can see both fitting in rather well, but I would think it odd if there were no full-write options at all.

 

The need to lower the IOD power draw is actually important; it's very high (15W-30W), and for the 65W parts it really eats into the power budget. The 95W 3600X may be slightly less pointless this generation than previously, mostly due to the IOD power draw.

 

[Anandtech chart: 3700X per-core and IOD power breakdown]

Imagine how much lower the all-core clocks would be if the IOD were full-rate write and used 20W-25W instead of 15.83W.

 

What I find interesting here is that as the core load increases, the IOD power allocation decreases, which means something must be giving way to allow for that. Does that mean IF or IMC performance slightly decreases at maximum core load compared to lighter loads?

 

[Anandtech chart: 3900X per-core and IOD power breakdown]

 

So from the above charts, what I gather is that it requires about 15W per core to do 4.4GHz and over 17W per core to do 4.6GHz; that's a pretty big increase for 200MHz. 4.1GHz is 9.5W-10W per core, meaning that overall, increasing the core clocks by 500MHz requires a per-core power increase of 7W-8W, which is an 80% increase in power. Couple all this with games that may not leave cores unloaded like the above test does, further eating away at the power budget for the cores and lowering the attainable boost.

 

I'm going to go out on a limb and predict that the 3600X will actually do an all-core 4.4GHz, basing that on the 95W TDP being 125W PPT: with the IOD using 17W, that leaves 18W per core. Fully prepared to look like an ass and be wrong. Equally, I'm going to predict the 3800X will do 4.4GHz, with a decent chance that manual vcore and an OC get it to 4.5GHz. All assuming the die is capable of having that many cores all at those higher clocks.
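For anyone who wants to play with the budget math above, here's a minimal sketch (Python). The 125W PPT, 17W IOD and ~10/15/17W per-core costs are the figures quoted in this post and read off the charts; treat them as rough inputs rather than measured values:

```python
# Per-core power budget sketch using the figures discussed above.
# PPT and IOD numbers are the ones quoted in this post; per-core costs per
# clock are the approximate values read off the Anandtech charts.
PER_CORE_COST_W = {4.1: 10.0, 4.4: 15.0, 4.6: 17.0}  # rough W needed per core

def per_core_budget(ppt_w, iod_w, cores):
    """Power left for each core after the IOD takes its share."""
    return (ppt_w - iod_w) / cores

# 3600X prediction: 95W TDP -> 125W PPT, ~17W IOD, 6 cores
budget_3600x = per_core_budget(125, 17, 6)
print(f"3600X per-core budget: {budget_3600x:.1f} W")          # ~18 W
print(f"Enough for all-core 4.4GHz (~{PER_CORE_COST_W[4.4]} W/core)?",
      budget_3600x >= PER_CORE_COST_W[4.4])
```

(The 3800X prediction follows the same arithmetic with the 105W part's higher package limit.)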

 

Ryzen 4000 could give a decent increase in boost-clock performance just by lowering the IOD power alone. If AMD could cut it from 23.41W to 14W, the all-core boost could pick up by 100MHz-200MHz just from that.


52 minutes ago, maartendc said:

Well, I would expect the supplied boxed cooler to at least be able to cool the CPU effectively:

 

The 3700X and 3900X look to use about as much power as a 2600 and a 2700X respectively. The AMD boxed coolers for the 2000 series were quite good (Wraith Prism, Wraith Spire, etc.). They are decent mid-range coolers: they won't get you the very best temps or the fastest OC, but they get the job done. For most people that is enough. I would expect you could effectively cool a 3900X with a Wraith Prism, just like the 2700X.

The cooler that came with the 2600 was awful. They really cut the budget compared to 1st gen, as the 1600's cooler was much more substantial. With the bundled cooler, the 2600 ran so hot it rarely hit the higher turbo clocks. Replacing it with a Noctua NH-D9L air cooler, which I consider lower-mid range, allows it to mostly hit maximum boost and sustain it, at least if the weather is cooperating, as it is hot now. It was dropping a bit during some pre-testing last night, before I swap the 2600 out for a 3600.

 

I haven't actually paid attention to what cooler comes with the 3600... will find out tomorrow if I don't look it up before.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


@leadeater

That's quite a significant change. It starts boosting faster and higher. I wonder how this affects games in particular, because the 3900X doesn't seem to be boosting as high as we were hoping.


From 4.2-4.3GHz to a nearly sustained 4.5GHz?

 

That's roughly a 4-5% increase in raw speed. If that translates to similarly higher benchmarks and framerates, things aren't looking too good for Intel.


7 minutes ago, RejZoR said:

@leadeater

That's quite a significant change. It starts boosting faster and higher. I wonder how this affects games in particular, because the 3900X doesn't seem to be boosting as high as we were hoping.

Not sure; from what I can tell, the GN review was done with this newer AGESA version. GN states they were told to use BIOS version F5c, and looking at the Gigabyte website, F5e is AGESA 1.0.0.3AB. The important part is being on at least 1.0.0.3a, not 1.0.0.3.

https://www.gigabyte.com/Motherboard/X570-AORUS-MASTER-rev-10#support-dl-bios


16 hours ago, porina said:

It will be interesting to see what does happen. Does the 10nm production of mobile CPUs take some pressure off their fabs for 14nm parts? I don't know; are they still constrained? Pricing certainly has relaxed somewhat. On a parallel note, I wonder what AMD's production capacity is for their CPUs. It is still early days, but I've yet to see stock of anything other than the 3600.

 

As a wilder thought, could we even get to a situation similar to DRAM/flash, with over- and under-supply cycles causing pricing to go up and down?

I think the constraints have eased up. I'm not sure about AMD's production, but given the lack of product in shops around here, I'm predicting it's not covering the potential demand.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


So BIOSes and PBO/XFR2 really weren't ready for launch? This is great. I thought the chips weren't going to boost very far at all this gen.

My Rigs | CPU: Ryzen 9 5900X | Motherboard: ASRock X570 Taichi | CPU Cooler: NZXT Kraken X62 | GPU: AMD Radeon Powercolor 7800XT Hellhound | RAM: 32GB of G.Skill Trident Z Neo @3600MHz | PSU: EVGA SuperNova 750W G+ | Case: Fractal Design Define R6 USB-C TG | SSDs: WD BLACK SN850X 2TB, Samsung 970 EVO 1TB, Samsung 860 EVO 1TB | SSHD: Seagate FireCuda 2TB (Backup) | HDD: Seagate IronWolf 4TB (Backup of Other PCs) | Capture Card: AVerMedia Live Gamer HD 2 | Monitors: AOC G2590PX & Acer XV272U Pbmiiprzx | UPS: APC BR1500GI Back-UPS Pro | Keyboard: Razer BlackWidow Chroma V2 | Mouse: Razer Naga Pro | OS: Windows 10 Pro 64bit

First System: Dell Dimension E521 with AMD Athlon 64 X2 3800+, 3GB DDR2 RAM

 

PSU Tier List          AMD Motherboard Tier List          SSD Tier List


9 hours ago, Stroal said:

No one is taking that into consideration. Look at the i5-9600K: there is literally no point in buying that CPU anymore once you consider that you need to buy a cooler, and the fact that the majority of people on that budget will be running a mid-tier GPU. There is no point in buying a CPU that is more expensive, trades blows with the 3600, and, on top of that, sucks in multi-core workloads. If the CPU budget is $300, there is no justifiable reason, IMO, to go with anything but AMD. You get the same performance, it's cheaper, and you can go all the way up to 16 cores / 32 threads on ONE motherboard. That's just insane.

 

 

Exactly. A good point for Intel was that they used higher-tier GPUs more effectively. It doesn't make sense to get one now, plus Ryzen is catching up there too. Science Studio managed to get the 12-core to work on an ASRock B350 mobo.



9 hours ago, leadeater said:

Not sure; from what I can tell, the GN review was done with this newer AGESA version. GN states they were told to use BIOS version F5c, and looking at the Gigabyte website, F5e is AGESA 1.0.0.3AB. The important part is being on at least 1.0.0.3a, not 1.0.0.3.

https://www.gigabyte.com/Motherboard/X570-AORUS-MASTER-rev-10#support-dl-bios

 

GN just threw a video up: their 3900X was off by a tiny percentage, and their 3600 was fine. However, it seems to vary from BIOS to BIOS, motherboard to motherboard, and even CPU sample to CPU sample. Anandtech is much more affected, for example (Steve's own words), so basically the whole thing is a giant clusterfuck atm.

 

 

To reply to your prior post.

 

1. The point of the article in the reddit post (which it sounds like you managed to completely miss) is that what they were observing (and what was present in GN's 3900X frequency data from the original review) was that only certain specific cores would ever boost up to the full 4.6GHz. The rest would always fall short by some tens of MHz or more.

 

2. The point on the power draw isn't that the parts are operating out of spec, but that the differences should be much smaller if the silicon quality is at all similar between them.

 

3. On the binning thing, the two claims (almost always paired together) I've been hearing from people are: server CPUs are always selected for maximum efficiency, and the most efficient silicon is also the silicon that will clock the highest if wound up that far. The (admittedly rare) specific explanation is that the same things that make a chip power efficient are also the ones that let it clock higher before you have to raise the voltage; since voltage determines thermal load, and low thermal load (on a given cooling solution) is the key to stable higher frequencies, the chips with the best power efficiency should clock the best.


1 hour ago, Albal_156 said:

So BIOSes and PBO/XFR2 really weren't ready for launch? This is great. I thought the chips weren't going to boost very far at all this gen.

Maybe. It might be that the behavior Tech Jesus was seeing is their optimal state, or it might be that further BIOS improvements allow for more XFR/PBO headroom down the line. I figure a definitive answer on that front won't be forthcoming for at least the next month, if not until after the 3950X launch.


1 hour ago, CarlBar said:

3. On the binning thing, the two claims (almost always paired together) I've been hearing from people are: server CPUs are always selected for maximum efficiency, and the most efficient silicon is also the silicon that will clock the highest if wound up that far. The (admittedly rare) specific explanation is that the same things that make a chip power efficient are also the ones that let it clock higher before you have to raise the voltage; since voltage determines thermal load, and low thermal load (on a given cooling solution) is the key to stable higher frequencies, the chips with the best power efficiency should clock the best.

Actually, don't leaky chips overclock higher on LN2? Like the ASIC quality check in GPU-Z says.

[GPU-Z screenshot: ASIC quality reading]


 


I expect every RX 5700 XT card to have at least two fans on it. We might even need four fans lol, looking at the temps


Also, why the fuck did AMD go with a blower cooler for the Radeon? Nvidia has two big-ass fans on all of their RTX lineup; why did AMD think a small-ass blower cooler would be enough?


1 hour ago, realpetertdm said:

Also why the fuck did AMD go with a blower cooler

Haven't we been asking this for like 6 generations of GPU now?



10 hours ago, CarlBar said:

GN just threw a video up: their 3900X was off by a tiny percentage, and their 3600 was fine. However, it seems to vary from BIOS to BIOS, motherboard to motherboard, and even CPU sample to CPU sample. Anandtech is much more affected, for example (Steve's own words), so basically the whole thing is a giant clusterfuck atm.

Yeah, Anandtech having to re-run a lot of their tests sucks. It's interesting that Gigabyte was on top of it much sooner and could put out the newer AGESA compared to others. I have not watched the video yet, but I am about to.

 

10 hours ago, CarlBar said:

The point of the article in the reddit post (which it sounds like you managed to completely miss) is that what they were observing (and what was present in GN's 3900X frequency data from the original review) was that only certain specific cores would ever boost up to the full 4.6GHz. The rest would always fall short by some tens of MHz or more.

Yes, and from what Steve said, this is the correct behavior. Cores will only be able to boost to 4.6GHz in lighter workloads, as what the other cores are doing matters: a single core won't boost to 4.6GHz if too many of the others are under a high enough load. As for which cores, that's a natural trait of the CPU scheduler in the OS; cores can and do get preferred over others, and cores won't boost higher than they need to. Windows is now CCX-aware and will try to load cores fully on a single CCX before spilling to the next; in the GN graph there are 3 cores that are able to boost. Tomb Raider also happens to be one of the more CPU-demanding games and is able to use a good number of threads.
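As a toy illustration of that CCX-aware placement (not actual Windows scheduler code), assuming the 3900X's layout of four 3-core CCXs:

```python
# Toy illustration of the CCX-aware placement described above: fill all
# cores of one CCX before spilling threads to the next. The core/CCX layout
# is a simplified assumption (3900X: 4 CCXs of 3 cores each).
CCXS = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]

def place_threads(n_threads):
    """Return the core IDs a CCX-aware scheduler would prefer, in order."""
    order = [core for ccx in CCXS for core in ccx]  # fill CCX by CCX
    return order[:n_threads]

print(place_threads(4))   # [0, 1, 2, 3] -> first CCX filled, then spill over
```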

 

If there is no power budget to allow a core to boost, it's just not going to happen. You can also see that when some cores do boost, there tends to be a corresponding drop on other cores, further showing the power constraints in play. There is just not enough power to allow a 4.6GHz boost in a good number of games; what we need is a frequency plot in a game we know will not use more than 4 cores. Another good test would be to disable 4 cores on the 3900X and run it as if it were a 3800X.
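Here's a rough sketch (Python) of that power-budget constraint. The ~140W package limit, ~24W IOD and 2W idle-core cost are assumptions loosely based on numbers quoted earlier in the thread; the per-core costs at each clock are the chart figures from above:

```python
# Rough illustration of the power-budget constraint described above.
# Assumed inputs: ~140W package limit and ~24W IOD (numbers quoted earlier
# in the thread), a hypothetical 2W for each non-boosting core, and the
# approximate per-core costs read off the charts.
PACKAGE_LIMIT_W = 140
IOD_W = 24
PER_CORE_COST_W = {4.1: 10.0, 4.4: 15.0, 4.6: 17.0}

def max_boosting_cores(target_ghz, idle_cost_w=2.0, total_cores=12):
    """How many cores could hold target_ghz before the package hits its limit."""
    budget = PACKAGE_LIMIT_W - IOD_W
    boosting = 0
    while boosting < total_cores:
        needed = (boosting + 1) * PER_CORE_COST_W[target_ghz] \
                 + (total_cores - boosting - 1) * idle_cost_w
        if needed > budget:
            break
        boosting += 1
    return boosting

print("Cores that fit at 4.6GHz:", max_boosting_cores(4.6))
print("Cores that fit at 4.4GHz:", max_boosting_cores(4.4))
```

Under a heavy all-core load no core sits at the idle cost, so the budget runs out at much lower clocks, which matches the ~4.1-4.3GHz all-core behaviour being discussed.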

 

As for the actual reddit post and the source review, I give very little credibility to that. Their reasoning for not doing the game benchmarks and immediately concluding that there is a fault, without seeking a fix, isn't the most professional approach. If you believe as a reviewer that there is a fault or bug, contact your AMD rep and/or motherboard rep; the information about the revised AGESA did exist. It's one thing not to suspect there is an issue, but another to suspect one and then not get the situation clarified. Further to that, the source reviewer was using a Gigabyte motherboard; if GN could get the correct BIOS for their review, then they could have too, as both were using Gigabyte.

 

TL;DR: GN was using the correctly functioning AGESA and does not have the boost clock issue.

 

Quote

our R5 3600 review is not affected by BIOS boosting bugs and our R9 3900X review is not affected for all-core production workloads and is minimally affected for some lightly threaded games


Update: Actually, just retested GTA V on F5C again with more runs to average; this results in the BIOS change difference falling to 0.9-1.1%, which is within error. The first round might have hiccuped on a run, so that means the difference is even smaller in the one game where it looked meaningful.

P.S. That's from Steve's comment on the video; I have still not watched the video yet.

 

From that reddit:

Quote

At midnight (Wednesday) GBT HQ gave us news: according to their tests, the new AGESA code, including the NPRP BIOS (BIOS for press), replicated our results in single-core frequencies, BUT on the original BIOS (AGESA 1002, without the introduced NPRP code) turbo boost was working well.

With this information, I decided to flash the first BIOS released for the X570 AORUS MASTER board and, surprise, the boost frequencies were working as they should, with the processor even going beyond 4.65 GHz. The WHEA error problem on the PCI Express was still going on, so I kept pressing and trying to see if the problem was maybe the chipset driver.

Gigabyte had the fix available a minimum of 3 days before the review embargo lifted, so it was possible to do the benchmarks without the boost clock issue.

 

10 hours ago, CarlBar said:

The point on the power draw isn't that the parts are operating out of spec, but that the differences should be much smaller if the silicon quality is at all similar between them.

What differences? What you pointed out is 100% explained by the TDP and PPT difference of those SKUs. Obviously the 3900X is going to use much more power: it's allowed to, and it will let more cores run at high clocks, meaning that under all-thread load conditions it's going to use more power than the 3600 and 3700X.

 

On 7/8/2019 at 7:14 PM, CarlBar said:

Whilst the jump from the 3700 to the 3900 displays weird behaviour too (a 69% power spike for 50% more cores/threads, a 5.5% higher base clock and a 4.5% higher boost clock).

What is weird about a higher-TDP part using more power? The power increase is pretty much exactly the percentage difference between the PPT of each of them. I don't understand what issue you are trying to point out.

 

On 7/8/2019 at 10:30 PM, leadeater said:

From the 3600 to the 3700 the jump is waaaay lower than it should be (same base clock, 33% more cores/threads and a 4.7% boost clock jump, but only a 9.8% power usage jump).

They are both 65W TDP / 88W PPT parts; it doesn't matter how many extra cores there are or what the difference in base or boost clocks is when it comes down to actual loads and attainable clocks. If the load on that 3600 wasn't enough to make it draw 88W then it won't, and the load on the 3700X was enough for it to hit the 88W PPT limit, so it will not use more than that.

 

On 7/8/2019 at 7:14 PM, CarlBar said:

The 3600 may be the worst of the three, but the 3900 doesn't line up with it or the 3700 either. The situation doesn't change overmuch if you compare OC to OC values; at that point they're all running at the same clocks, but the 3600 way underperforms the other two whilst the 1.35V 3700 is noticeably beating the 1.34V 3900 (66% more power draw on 50% more cores at the same frequency).

Steve mentioned that his 3600 was a really bad sample, which is not surprising for a lower-end non-X part. We have no idea if that is typical of all 3600s, but it's pretty much a "pay for what you get" type of situation. The 3600 is a part that isn't supposed to be hitting higher clocks like the parts above it.

 

The 3600 is using 15W per core for 4.3GHz, the 3700X is using 12.825W per core for 4.3GHz, and the 3900X is using 14.2W per core for 4.3GHz. The power readings were taken from the EPS cable, which is total package power and so includes both the CCD and IOD power, so what I just did is flawed, but I'll carry on to explain the impact. Based on that, the 3700X is easily the best sample of the lot. However, that difference in power can be explained by 1 CCD vs 2 CCDs and any extra power overhead required for that. The 3900X requiring 0.01V less vcore should mean it's the best sample, further backing my assumption that the second CCD does carry a power overhead. You can also see that in the Anandtech core power graphs I posted, where the 3900X IOD uses more than the 3700X IOD; there's an extra IF link in use.
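For reference, here's a minimal sketch (Python) of that per-core arithmetic, including the correction being described (subtracting the IOD share before dividing). The IOD figures, especially the ~16W guess for the 3600, are assumptions based on the chart numbers quoted earlier, not measured values:

```python
# Sketch of the per-core arithmetic above, plus the correction noted:
# EPS readings are total package power, so the IOD share should be
# subtracted before dividing by core count. The assumed IOD figures are
# ~16W for the 3600 (a guess), 15.83W for the 3700X and 23.41W for the
# 3900X (the chart numbers quoted earlier in the thread).
samples = {
    #        cores, W/core naive (from the post), assumed IOD W
    "3600":  (6,  15.0,   16.0),
    "3700X": (8,  12.825, 15.83),
    "3900X": (12, 14.2,   23.41),
}

for name, (cores, naive_per_core, iod_w) in samples.items():
    package_w = naive_per_core * cores          # back out the EPS reading
    corrected = (package_w - iod_w) / cores     # per-core power, IOD excluded
    print(f"{name}: ~{package_w:.0f}W package, "
          f"{naive_per_core:.2f}W/core naive, {corrected:.2f}W/core ex-IOD")
```

Even after subtracting the IOD share, the 3700X comes out ahead, while the 3600 and 3900X land close together, consistent with the best-sample and second-CCD-overhead points above.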

 

The very long TL;DR is that if you check which AGESA a reviewer is using (i.e. GN) and it's the later one, then there is no widespread issue with boost clocks, and while there may still be some slight improvements to be had, the performance is well within where it should be and the boost clocks are functioning correctly (within margin of error, no difference). It's just not that big of a problem, or a problem at all. Edit: (in a discussion using data from an unaffected source).

