Navi/Ryzen 3000 launch Megathread

LukeSavenije
21 minutes ago, leadeater said:

@porina These may be of interest to you.

From Anandtech's writeup? I haven't looked at that in detail yet, as I'm not seeing an obvious relation to what I'm interested in. Very crudely, I'm after sustained transfer bandwidth. I just re-ran the Prime95 benchmark on the 2600 and 8086K in preparation. The 3600 will drop in where the 2600 is now, so it will be an easy comparison.

 

I forgot how hilariously bad the FPU was in Zen(+). I got an almost flat line out of it, compared to a big peak and fall on Intel depending on whether the data was in cache or not. Zen(+) was so slow it was hardly able to make use of the RAM I have in there (3400). I predict the 3600 should beat the 8086K for working sizes in the region between 12M and 32M, their cache size difference. Whether it can beat Intel <12MB depends on actual clocks and IPC. Performance of the cache is rather unimportant as the data is pipelined well, so bandwidth is very much more important than latency. If it has to go off chiplet, IF will probably bottleneck badly given the half bandwidth writes.
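For anyone who wants to play with that 12M-to-32M window, here's a minimal sketch of the idea (the L3 sizes and the crude fits-in-L3-or-not model are my own assumptions, not measurements):

```python
# Rough sketch of the "12M to 32M window" argument above: classify each
# Prime95-style working-set size by whether it fits in L3. Cache sizes and
# the simple fits-in-L3-or-not model are assumptions, not measured data.
L3_BYTES = {
    "i7-8086K (assumed 12 MB L3)": 12 * 1024**2,
    "R5 3600 (assumed 32 MB L3)": 32 * 1024**2,
}

working_sets_mb = [4, 8, 12, 16, 24, 32, 48, 64]

for cpu, l3 in L3_BYTES.items():
    print(cpu)
    for ws in working_sets_mb:
        bound = "cache-resident" if ws * 1024**2 <= l3 else "memory-bandwidth bound"
        print(f"  {ws:>3} MB working set -> {bound}")
```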

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Really impressive launch. I hope this shakes the market up as much as we all hoped so things change. Definitely interested in seeing more about overclocking... der8auer's video was very interesting.

9900K  / Noctua NH-D15S / Z390 Aorus Master / 32GB DDR4 Vengeance Pro 3200Mhz / eVGA 2080 Ti Black Ed / Morpheus II Core / Meshify C / LG 27UK650-W / PS4 Pro / XBox One X


44 minutes ago, xg32 said:

So the 3700X is a 9900K at half the power consumption (115W @ 4.3) for $330 (!!). RIP Intel confirmed. It's possible to downclock to 4.2 (95W) and just use the stock cooler.

 

All of the concerns I voiced before launch came true, though: the X570 chipset fan will be the first component to die (though enthusiasts should have no problem putting another fan on top of it).

 

Max OC on air is 4.3-? That's lower than the expected 4.6, but the IPC is also better than expected.

 

The 12-core has scheduling issues in games as expected; I'm waiting on the 16-core to build a workstation. The reviews have me hyped for the efficiency/power draw of the 16-core part.

 

And ya, Navi sucks.

It's a bit worse for gaming apparently. Intel's clock advantage looms large.

 

I picked up a 3700X anyways though, still more than $100 cheaper than a 9900k and Microcenter was offering $50 off the AMD motherboards.

AMD Ryzen 7 3700X | Thermalright Le Grand Macho RT | ASUS ROG Strix X470-F | 16GB G.Skill Trident Z RGB @3400MHz | EVGA RTX 2080S XC Ultra | EVGA GQ 650 | HP EX920 1TB / Crucial MX500 500GB / Samsung Spinpoint 1TB | Cooler Master H500M


8 minutes ago, melete said:

It's a bit worse for gaming apparently. Intel's clock advantage looms large.

 

I picked up a 3700X anyways though, still more than $100 cheaper than a 9900k and Microcenter was offering $50 off the AMD motherboards.

Somewhat disappointed by the overclocking on Zen 2 yet again hitting a wall in the 4.2GHz territory.

 

There are still several game titles out there that are very dependent on the single-threaded clock speed.

 

It's almost a no-brainer to get Ryzen over Intel at this point, though. Especially the Ryzen 7.


Is it just me, or does the RX 5700 double-kill the RTX 2060 + 2060 Super... oops, sorry, 2060 $uper?

SILVER GLINT

CPU: AMD Ryzen 7 3700X || Motherboard: Gigabyte X570 I Aorus Pro WiFi || Memory: G.Skill Trident Z Neo 3600 MHz || GPU: Sapphire Radeon RX 5700 XT || Storage: Intel 660P Series || PSU: Corsair SF600 Platinum || Case: Phanteks Evolv Shift TG Modded || Cooling: EKWB ZMT Tubing, Velocity Strike RGB, Vector RX 5700 +XT Special Edition, EK-Quantum Kinetic FLT 120 DDC, and EK Fittings || Fans: Noctua NF-F12 (2x), NF-A14, NF-A12x15


36 minutes ago, porina said:

Performance of the cache is rather unimportant as the data is pipelined well, so bandwidth is very much more important than latency. If it has to go off chiplet, IF will probably bottleneck badly given the half bandwidth writes.

The bandwidth through to memory is slightly lower for both read and write, but neither is greatly different, so even with the larger FPU load/store and bit width it sounds like Intel may still be faster for you. Intel does have 20% higher write bandwidth though; you can see that in the BW graphs at the 64MB end, where Intel is just slightly higher.

 

36 minutes ago, porina said:

Whether it can beat Intel <12MB depends on actual clocks and IPC

I don't understand the BW graphs and each data plot well enough to know. There are parts where Zen 2 is much faster, but there are also parts where Intel is much faster, and it's comparing a 12-core CPU to an 8-core one, whereas you'll be doing 6 vs 6.
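One crude way to sanity-check that core-count mismatch is to normalise the graph numbers per core before scaling to 6 vs 6. The figures below are placeholders to fill in from the graphs, and the assumption that bandwidth scales with active core count is itself shaky once the memory controller saturates:

```python
# Very crude normalisation of the point above: the published BW graphs compare
# a 12-core Zen 2 against an 8-core Intel part, but the actual comparison will
# be 6 cores vs 6 cores. The numbers below are placeholders, not measurements.
measured = {
    "Zen 2 (12C sample)": {"cores": 12, "read_gbps": 0.0},  # fill in from the graphs
    "Intel (8C sample)":  {"cores": 8,  "read_gbps": 0.0},  # fill in from the graphs
}

target_cores = 6
for name, d in measured.items():
    per_core = d["read_gbps"] / d["cores"]
    print(f"{name}: ~{per_core:.1f} GB/s per core, "
          f"~{per_core * target_cores:.1f} GB/s naive {target_cores}-core estimate")
```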


1 hour ago, PotatoCanDo! said:

Is it just me, or does the RX 5700 double-kill the RTX 2060 + 2060 Super... oops, sorry, 2060 $uper?

The RX 5700 is ahead of the RTX 2060, but against the RTX 2060 Super it depends on the game and resolution. The broken launch drivers could be hurting performance, though.


Please, some good innovation in the GPU market!

I live in misery USA. my timezone is central daylight time which is either UTC -5 or -4 because the government hates everyone.

into trains? here's the model railroad thread!


So we can still recommend the 9900K for gaming ... but is there anything else one could recommend any Intel consumer CPU for right now?


2 minutes ago, Captain Chaos said:

but is there anything else one could recommend any Intel consumer CPU for right now?

Any use case that benefits from Intel Quick Sync.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


26 minutes ago, Captain Chaos said:

So we can still recommend the 9900K for gaming ... but is there anything else one could recommend any Intel consumer CPU for right now?

Anything else that may benefit from Intel's clock speed advantage (quite a lot of software still depends on single-thread performance), and the iGPU for any professional use where a dGPU isn't needed.

Edit - I'm just saying Intel CPUs aren't useless for anything besides gaming, although I wouldn't recommend Intel over AMD unless you have some very specific use case that an Intel CPU would be better for.


Looks like their GPUs didn't really live up to the hype people were giving them. They're impressive for AMD, but overall nothing special.

🌲🌲🌲

 

 

 

◒ ◒ 


 

22 minutes ago, Captain Chaos said:

So we can still recommend the 9900K for gaming ... but is there anything else one could recommend any Intel consumer CPU for right now?

Some programs benefit from high clock speeds (like Photoshop), so that. IDK. There are probably other things, but generally I'd recommend a Ryzen 3000 now.

11 minutes ago, Blademaster91 said:

Anything else that may benefit from Intel's clock speed advantage (quite a lot of software still depends on single-thread performance), and the iGPU for any professional use where a dGPU isn't needed.

If you look at what AMD was able to do with IPC, even clock speed might not be enough to keep Intel competitive with Ryzen in single-core workloads. You'll need to see benchmarks, and if I were advising someone, I'd argue they should favor AMD in close situations.

 

AMD is one clock speed increase (500-800 MHz) away from having the kind of lead Intel had during the FX days...
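To put rough numbers on that clocks-vs-IPC trade-off, here's a back-of-the-envelope sketch; the IPC ratio and clock figures are illustrative assumptions, not benchmark results:

```python
# Back-of-the-envelope version of the clocks-vs-IPC point above. All numbers
# here are assumptions for illustration: pick your own IPC ratio and clocks
# and see where the single-thread balance lands.
def single_thread_score(ipc_relative: float, clock_ghz: float) -> float:
    """Relative single-thread throughput ~ IPC x clock (a deliberate simplification)."""
    return ipc_relative * clock_ghz

intel = single_thread_score(ipc_relative=1.00, clock_ghz=5.0)   # assumed 9900K-style OC
amd   = single_thread_score(ipc_relative=1.08, clock_ghz=4.4)   # assumed Zen 2 IPC edge

print(f"Intel: {intel:.2f}  AMD: {amd:.2f}  ratio: {amd / intel:.2%}")
print(f"AMD clock needed to match at this IPC: {intel / 1.08:.2f} GHz")
```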

 


1 hour ago, PotatoCanDo! said:

Is it just me, or does the RX 5700 double-kill the RTX 2060 + 2060 Super... oops, sorry, 2060 $uper?

According to benchmarks, the 5700 beats the normal 2060 pretty easily. The Super? No.

 

The 5700XT? Tit for tat with the 2060 Super depending on the game and overclock. The XT will be a good card for someone wanting 2070/2060 Super performance and wanting to support AMD. For anyone wanting more power, NVIDIA still holds the top end with the 2070 Super/2080/2080 Super/2080Ti.

 

Your use of $uper makes me think you weren't literally asking that question, though.


AMD seems to have superior boost technology to Intel's. Overclocking seems largely irrelevant now, and can even be counterproductive for gaming.

Does it only apply to the X SKUs? Does the boost only work with certain motherboards?


10 hours ago, leadeater said:

Well, I don't consider a 35% chance reasonable to assume. Compound that with Intel being known to have an architecture that overclocks very well and the best fab technology and optimization in the industry (the 10nm issues don't make this untrue). A 50/50 chance is one thing, still not all that safe to assume, but I'm not expecting it to be that high given what we have seen with Zen in the past.

 

10 hours ago, leadeater said:

Well, if you mean the package power limit that is set in the microcode, yes; not the socket power limit. A socket power limit implies that, no matter the SKU, that is the power limit of the socket; there's a distinction between socket and package. We know the socket and AM4+ platform families can do more than 140W. My expectation is that the 140W value was chosen because that was where the product was most stable and yielded the most viable parts. If Zen 2 were broadly capable of more than that, I would expect it to be higher, say 150W. But even 150W may not be high enough to give a meaningful all-core clock increase, even if every Zen 2 die could achieve it in the first place.

 

There is a lot of design thought and reasoning that goes into product specs and parameters; AMD isn't about to unnecessarily tie an anchor to itself just to look slightly better in the power draw graphs when everyone knows performance is king. People will, and do, close their eyes to power draw when the performance is there.

 

Zen 2 products are absolutely excellent as they are; we don't need to expect much more from them, and to do so is only setting yourself up for disappointment. Hitting high clocks is just an arbitrary goalpost with little meaning behind it. An all-core OC of 4.7GHz should yield some good performance gains, and while even I consider that a tough ask for most Zen 2 products, it's much more achievable than 5GHz. Such a huge jump in clocks in one generation is unprecedented in the last one to nearly two decades, and smaller nodes actually start to hurt clocks rather than improve them due to the resistance increase, a situation only encountered on very small nodes. I know that is a weird thing to think about, since in the past node shrinks have meant clock increases, but resistance wasn't an issue for those; we have officially tipped the scales but have found ways to balance them.

 

A typical motherboard VRM will be operating in the 90% efficiency range; below that is considered bad, and that happens when you are drawing either too little current or significantly more than optimal. 82% is 'VRM near death from overcurrent' territory.

 

The package power limit likely isn't as hard a limit as it implies, so the CPU actually drawing ~150W isn't unreasonable, even if it's only short-term or some kind of peak behaviour.

 

 

Just a couple of points to make.

 

1. Given the clock speed + core count vs power draw GN showed, it's possible it's all down to unusually low-quality silicon being used on the desktop. Everyone speculated this way back when they announced the specs last month, but it's still odd they'd need to do this on desktop, as they didn't with prior Zen generations. Of course, it could also be an IO die issue and not a CPU chiplet issue.

 

2. GN had issues with OC'ing on Navi; HUB so far don't seem to have had issues; not sure how everyone else is doing. It's possible there's a driver or BIOS issue for some reviewers with Ryzen too. I wouldn't count on it, but I wouldn't rule it out either, TBH.

 

1 hour ago, Johnny Who said:

Anyone heard any more of this?

 

 

See point 2 above; there's a fair amount of uncertainty going around. I wouldn't bet on anything, but I also wouldn't rule it out until a month or two has gone by.


This is just an amazing situation, AMD being capable of fighting back against Intel and NVIDIA at the same time lol

It feels quite bad that my R7 1700 ($310) is getting beaten up by the R5 3600 ($200); anyway, I can still wait patiently.

 

In the LTT review, it seems like there is much less point in overclocking...? Is that really true?


I'm happy to see competition at these graphics price points. I wish they were ALSO competing at the 2070 Super and 2080 price points for a full stack, but AMD does quite well at the 5700/XT price points, regardless of who drove whom to what prices. I look forward to the next Navi up the stack and how that plays out, and to seeing whether NVIDIA drops the 2070 Super in price to try and dominate all but the truly low end.

 

As for CPUs, AMD did great here. Yes, they leapfrogged by jumping down to 7nm first, but they have also been shown to have better IPC right now, despite not always having quite the same single-threaded performance due to raw Intel clock speed. At this point, it almost always makes sense to get an AMD CPU unless it truly is a gaming-only PC and you need that last 1% of frame rate while you're also pushing your 2080 Ti to the max.


GN has put up their 3900X video:

 

 

 

Frankly, everything they said and showed has just raised more questions about the whole power side of things. Honestly, I've got multiple alarm bells ringing on some of this; things shouldn't be this weird, and I have to wonder what it's all about. It's going to be interesting to see what happens when people get these on LN2.


OK, I'm gonna ask for an explanation: why is power consumption such a big concern? What problems does high power consumption translate to? I don't know the answer here.

 

 


2 minutes ago, Ravendarat said:

OK, I'm gonna ask for an explanation: why is power consumption such a big concern? What problems does high power consumption translate to? I don't know the answer here.

We're hitting 1.4V at the 4.3-4.4GHz mark. If Zen 2 can't handle more voltage, then it likely won't clock better (BIOS can affect how well chips overclock to a small extent). We can't really do much overclock testing either, because the chips are locked to not exceed certain wattage ranges.
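As a hedged illustration of why 1.4V at ~4.4GHz leaves so little headroom: dynamic power scales roughly with V² × f, so the last few hundred MHz get expensive fast. The baseline wattage and the voltage/frequency pairs below are assumptions for the arithmetic, not measured Ryzen 3000 figures:

```python
# Dynamic power model P ~ C * V^2 * f, holding capacitance constant.
# Baseline and V/f pairs are assumed values purely for illustration.
def scaled_power(base_watts: float, base_v: float, base_f: float,
                 new_v: float, new_f: float) -> float:
    """Scale a baseline package power to a new voltage/frequency point."""
    return base_watts * (new_v / base_v) ** 2 * (new_f / base_f)

base = (95.0, 1.25, 4.0)  # assumed: 95 W package power at 1.25 V, 4.0 GHz all-core

for volts, ghz in [(1.325, 4.2), (1.40, 4.4), (1.45, 4.6)]:
    watts = scaled_power(*base, new_v=volts, new_f=ghz)
    print(f"{ghz} GHz @ {volts} V -> ~{watts:.0f} W (vs {base[0]:.0f} W baseline)")
```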



9 minutes ago, Ravendarat said:

OK, I'm gonna ask for an explanation: why is power consumption such a big concern? What problems does high power consumption translate to? I don't know the answer here.

In general, higher power consumption means you're paying for more electricity to run it, and that costs you a bit more money (possibly a negligible amount) over, say, the next 2 - 4 years that you're using the GPU / CPU. Higher power consumption also means that the GPU / CPU is generating more heat, which means overclocks could become unstable or require more cooling, and also means that your GPU / CPU fans will be spinning faster and creating more noise.

 

Those things aren't problems unless they happen to be problems for your particular interests and budget.

 

Also, what @Drak3 said could be more relevant to the specific discussion of why power consumption could be a concern with AMD's new CPUs. I haven't followed that discussion, so I'm not sure.
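To put a rough number on the electricity-cost point above, here's a quick sketch; the wattage delta, usage hours, and price per kWh are all assumptions you'd swap for your own:

```python
# Putting a number on the "a bit more money over 2-4 years" point above.
# Every figure here is an assumption; plug in your own.
extra_watts = 50          # assumed extra draw of the hungrier part
hours_per_day = 4         # assumed load hours per day
price_per_kwh = 0.15      # assumed electricity price in USD

for years in (2, 3, 4):
    kwh = extra_watts / 1000 * hours_per_day * 365 * years
    print(f"{years} years: {kwh:.0f} kWh extra, ~${kwh * price_per_kwh:.0f}")
```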

You own the software that you purchase - Understanding software licenses and EULAs

 

"We’ll know our disinformation program is complete when everything the american public believes is false" - William Casey, CIA Director 1981-1987


40 minutes ago, CarlBar said:

Given the clock speed + core count vs power draw GN showed, it's possible it's all down to unusually low-quality silicon being used on the desktop.

That was mostly the 3600; he said his 3900X sample given by AMD was much better. We'll have to wait for the actual test of it, but the review samples were pre-tested and had thermal paste residue on them when he received them, which is interesting in a 'they were tested before going out' kind of way. It doesn't really matter how low the silicon quality is, as it still has to meet the product spec and function the same way as any other sample. PBO might be a different story, though; I don't know whether that is supposed to differ from sample to sample.

 

What's interesting is that the power limit is set by the motherboard, not the CPU, when using PBO. Maybe that part isn't functioning correctly, though PBO only adds up to 200MHz no matter how high the power limit is raised.

 

Quote

Several motherboard vendors have told us that overclocking headroom is extremely limited on the Ryzen 3000 processors, and that exceeding the boost clocks, or even meeting them, isn't possible for all-core overclocking.

https://www.tomshardware.com/reviews/ryzen-9-3900x-7-3700x-review,6214-4.html

 

Potential bugs aside, an architecture generally known to have clock walls, plus everyone getting similar results across motherboards, is enough to keep expectations low. If the people in the know are saying not to expect it, and current reviews show the same, then the assumption should not be that it's possible to push all cores to these more extreme ends.

 

If improved BIOSes come out that stabilize clocks a bit more, lower the vcore across the boost table, and get these products to more like 4.4-4.6GHz all-core, then great. However, I'm still dubious of this, just as it's currently dubious to expect a 9900K to achieve 5.0GHz all-core.


10 minutes ago, Ravendarat said:

OK, I'm gonna ask for an explanation: why is power consumption such a big concern? What problems does high power consumption translate to? I don't know the answer here.

Power consumption is a consideration for desktops for a few primary reasons. The first is power delivery: it's a factor in which board can actually feed the CPU, and a factor in choosing a power supply.

 

Second, power consumption directly correlates with heat output. Higher power consumption necessitates a more robust cooling solution and/or stronger fans to move more air. Air cooling may not even be an option beyond a certain point, as is often the case with overclocking.

 

Thermals are what primarily dictate how high a CPU can clock: at a certain point, its 'wall' if you will, voltage must increase so much to achieve higher clocks that it becomes infeasible or impossible to sufficiently cool the CPU. Power delivery from the board is another limitation on clocks; however, a more robust board can sidestep that.
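Tying the VRM efficiency and heat points together, here's a small sketch with assumed numbers (the package power and efficiency figures are illustrative, not measurements of any particular board): at a given package power, a less efficient VRM both pulls more from the PSU and dumps more heat into the board.

```python
# At a given package power, lower VRM efficiency means more input power and
# more heat dissipated by the VRM itself. All numbers are assumptions.
package_watts = 142.0  # assumed CPU package power under load

for efficiency in (0.90, 0.82):
    input_watts = package_watts / efficiency
    vrm_heat = input_watts - package_watts
    print(f"{efficiency:.0%} efficient VRM: {input_watts:.0f} W in, "
          f"{vrm_heat:.0f} W dissipated by the VRM itself")
```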

My eyes see the past…

My camera lens sees the present…

