
Navi/Ryzen 3000 launch Megathread

LukeSavenije
Just now, Ravendarat said:

Thanks for the answers guys, makes more sense now. I appreciate it

Thing is, Zodiark's and Delicieuxz's answers don't apply in this discussion: the 3700X draws less than the 2700X. The same coolers can be used, the same boards and sockets work just fine, and the new boards are overkill.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


3 hours ago, melete said:

It's a bit worse for gaming apparently. Intel's clock advantage looms large. 

With a $1,300 graphics card at 1080p on ultra. Unless you are a ranked player in an FPS or a fighting game (most of which are console titles anyway), the difference is negligible.


3 hours ago, Bruman said:

According to benchmarks, the 5700 beats the normal 2060 pretty easily. The Super? No.

 

The 5700XT? Tit for tat with the 2060 Super depending on the game and overclock. The XT will be a good card for someone wanting 2070/2060 Super performance and wanting to support AMD. For anyone wanting more power, NVIDIA still holds the top end with the 2070 Super/2080/2080 Super/2080Ti.

 

Your use of $uper makes me think you weren't literally asking that question though

I mean from a value point of view. If I ever aim at that range, I won't pay $50 extra for less than a 5 fps gain between the 5700 and the 2060 Super.

But someone else might want to pay that extra $50 for that (small) gain, and how small the gap is will, of course, depend on the game.

I like how Navi stacks up across the whole list.

SILVER GLINT

CPU: AMD Ryzen 7 3700X || Motherboard: Gigabyte X570 I Aorus Pro WiFi || Memory: G.Skill Trident Z Neo 3600 MHz || GPU: Sapphire Radeon RX 5700 XT || Storage: Intel 660P Series || PSU: Corsair SF600 Platinum || Case: Phanteks Evolv Shift TG Modded || Cooling: EKWB ZMT Tubing, Velocity Strike RGB, Vector RX 5700 +XT Special Edition, EK-Quantum Kinetic FLT 120 DDC, and EK Fittings || Fans: Noctua NF-F12 (2x), NF-A14, NF-A12x15


Higher fan noise potentially. And thermal throttling.

 

Anyone seen this?

First of two people to get a 3800X, apparently. Don't know where they got it from, as no Western reviewers seem to have it at all right now. I highly doubt it's real, but they said it was and were apparently going to provide proof. Plus, can you fake the information in MSI Afterburner? Those readings are also provided. Weirdly, in the benchmark below comparing it with the 9900K, the names of the games tested can be seen in English and not Korean, but in the Ryzen CPU comparison they can't.

 

Seems like they compared it to the 9900K as well. It basically trades blows with it in all the tests they did, but they didn't run them for very long. I'm very doubtful of this, but it's the only information we have on the 3800X currently, it seems. They don't show all of the box's surfaces, so it could be a 3700X instead, but it was hitting higher clocks than that.

 

https://www.yangcom.co.kr/

Seems to be these guys who have the other one, as per the Zadak MOAB build featured in a video on their YouTube channel.

https://www.youtube.com/channel/UCS7e9ieT1-t7ALELe2ubRtg

Here is the video of them showing the Cinebench scores obtained by the 3800X along with some game benchmarks.

 

My Rigs | CPU: Ryzen 9 5900X | Motherboard: ASRock X570 Taichi | CPU Cooler: NZXT Kraken X62 | GPU: AMD Radeon Powercolor 7800XT Hellhound | RAM: 32GB of G.Skill Trident Z Neo @3600MHz | PSU: EVGA SuperNova 750W G+ | Case: Fractal Design Define R6 USB-C TG | SSDs: WD BLACK SN850X 2TB, Samsung 970 EVO 1TB, Samsung 860 EVO 1TB | SSHD: Seagate FireCuda 2TB (Backup) | HDD: Seagate IronWolf 4TB (Backup of Other PCs) | Capture Card: AVerMedia Live Gamer HD 2 | Monitors: AOC G2590PX & Acer XV272U Pbmiiprzx | UPS: APC BR1500GI Back-UPS Pro | Keyboard: Razer BlackWidow Chroma V2 | Mouse: Razer Naga Pro | OS: Windows 10 Pro 64bit

First System: Dell Dimension E521 with AMD Athlon 64 X2 3800+, 3GB DDR2 RAM

 

PSU Tier List          AMD Motherboard Tier List          SSD Tier List


27 minutes ago, Drak3 said:

Thing is, Zodiark's and Delicieuxz's answers don't apply in this discussion: the 3700X draws less than the 2700X. The same coolers can be used, the same boards and sockets work just fine, and the new boards are overkill.

Think you mentioned that the CPUs have a hard wattage limit. Kind of an odd design decision. Pretty sure Nvidia is the only other one I know of that pulls these shenanigans (in their GPUs).

My eyes see the past…

My camera lens sees the present…


Just now, Zodiark1593 said:

Think you mentioned that the CPUs have a hard wattage limit. Kind of an odd design decision.

Right now, the speculation is that the soft limit is temporary (and I refer to it as hard and soft because it's supposed to be 140 W, but it's really 142 W, give or take 0.1 W).

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


10 minutes ago, leadeater said:

That was mostly the 3600; he said his 3900X sample given by AMD was much better. Will have to wait for the actual test of it, but the review samples were pre-tested and had thermal paste residue on them when he received them, which is interesting in a they-were-tested-before-going-out kind of way. It doesn't really matter how low the silicon quality is, as it still has to meet the product spec and function in the same way as any other sample. PBO might be a different story though; I don't know if there is supposed to be a difference there from product sample to product sample.

 

What's interesting is that the power limit is set by the motherboard, not the CPU, when using PBO. Maybe that part isn't functioning correctly, though PBO only adds up to 200 MHz no matter how high the power limit is raised.

 

https://www.tomshardware.com/reviews/ryzen-9-3900x-7-3700x-review,6214-4.html

 

Potential bugs aside, an architecture generally known to have clock walls, plus everyone getting similar results across motherboards, is enough to keep expectations low. If the people in the know are saying not to expect it, and current reviews show the same, then the assumption should not be that it's possible to push all cores to these more extreme ends.

 

If improved BIOSes come out that stabilize clocks a bit more, lower the vcore across the boost table, and get these products to more like 4.4-4.6 GHz all core, then great. However, I'm still dubious of this, just as it's currently dubious to expect a 9900K to achieve 5.0 GHz all core.

 

The point I'm getting at with the silicon quality thing is more that it implies there isn't enough average-or-better silicon to go around. The average quality level heavily influences how high things will clock; that's just the way binning works. Also, we know from the Computex leaks that the 16-core part can hit higher all-core clocks than this (I'm having issues finding the specific source now, grrrr). It's clearly not an architecture limitation at these frequencies, but rather that the silicon isn't good enough (or a mobo/BIOS/microcode issue; AMD is limiting OCing on the 5700 after all, and there are buggy drivers going around for Navi to boot).

 

I would also point out that GN's frequency plots for the cores on the 3900X show the same behaviour outlined in the Reddit post/article, which seems to lend credence to the claims.


1 hour ago, ravenshrike said:

With a $1,300 graphics card at 1080p on ultra. Unless you are a ranked player in an FPS or a fighting game (most of which are console titles anyway), the difference is negligible.

All fighting games run at 60 fps and no more.

CPU i7 6700 Cooling Cryorig H7 Motherboard MSI H110i Pro AC RAM Kingston HyperX Fury 16GB DDR4 2133 GPU Pulse RX 5700 XT Case Fractal Design Define Mini C Storage Trascend SSD370S 256GB + WD Black 320GB + Sandisk Ultra II 480GB + WD Blue 1TB PSU EVGA GS 550 Display Nixeus Vue24B FreeSync 144 Hz Monitor (VESA mounted) Keyboard Aorus K3 Mechanical Keyboard Mouse Logitech G402 OS Windows 10 Home 64 bit


3 hours ago, Bruman said:

According to benchmarks, the 5700 beats the normal 2060 pretty easily. The Super? No.

 

The 5700XT? Tit for tat with the 2060 Super depending on the game and overclock. The XT will be a good card for someone wanting 2070/2060 Super performance and wanting to support AMD. For anyone wanting more power, NVIDIA still holds the top end with the 2070 Super/2080/2080 Super/2080Ti.

 

Your use of $uper makes me think you weren't literally asking that question though

I don't know what reviews you've seen, but they're not the same ones I've seen. The 5700 is 98-101% of the performance of a 2060 Super, and the XT is 96-98% of the performance of the 2070 Super.


13 minutes ago, schwellmo92 said:

I don't know what reviews you've seen, but they're not the same ones I've seen. The 5700 is 98-101% of the performance of a 2060 Super, and the XT is 96-98% of the performance of the 2070 Super.

This is exactly what I've seen too. Fairly positive reviews overall, and the XT at $400 pretty much kills off both the 2060 Super (as it has a clear enough lead over it) and the 2070 Super. The XT is closer to the 2070 Super than we thought it was originally, and it's $100 less.

 

I mean... who can argue with it when it's hunting down even the 2080 Ti in Forza Horizon 4? Yes, it's a heavily AMD-favoured title, but still, that's madness considering it's a midrange chip.

And in GTA V, the Nvidia title to end all Nvidia titles... the 5700 is beating the 2060 at 4K and 1440p, and the XT is at the heels of the 2060 Super. If that isn't a victory for Navi, I don't know what is.
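For the value angle, here is a quick back-of-the-envelope using only the figures quoted in this exchange ($400 for the XT, $100 less than the 2070 Super, and roughly 96-98% of its performance). A rough sketch, not a review conclusion:

# Rough perf-per-dollar comparison from the figures quoted above (illustrative only).
xt_price = 400                    # USD, 5700 XT price mentioned above
rtx_price = xt_price + 100        # 2070 Super quoted as $100 more
xt_relative_perf = 0.97           # midpoint of the 96-98% range vs the 2070 Super

ratio = (xt_relative_perf / xt_price) / (1.0 / rtx_price)
print(f"5700 XT delivers about {ratio - 1:.0%} more performance per dollar")
# -> roughly 21% more performance per dollar at these prices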

 


16 minutes ago, schwellmo92 said:

I don't know what reviews you've seen, but they're not the same ones I've seen. The 5700 is 98-101% of the performance of a 2060 Super, and the XT is 96-98% of the performance of the 2070 Super.

And the Navi drivers are still immature.

We can still expect more gains from them.

SILVER GLINT

CPU: AMD Ryzen 7 3700X || Motherboard: Gigabyte X570 I Aorus Pro WiFi || Memory: G.Skill Trident Z Neo 3600 MHz || GPU: Sapphire Radeon RX 5700 XT || Storage: Intel 660P Series || PSU: Corsair SF600 Platinum || Case: Phanteks Evolv Shift TG Modded || Cooling: EKWB ZMT Tubing, Velocity Strike RGB, Vector RX 5700 +XT Special Edition, EK-Quantum Kinetic FLT 120 DDC, and EK Fittings || Fans: Noctua NF-F12 (2x), NF-A14, NF-A12x15


1 hour ago, CarlBar said:

The point I'm getting at with the silicon quality thing is more that it implies there isn't enough average-or-better silicon to go around. The average quality level heavily influences how high things will clock; that's just the way binning works.

Why would such a small die be yielding mostly low-quality silicon? 7nm might be new, but they do make sure it's performing well before going into mass production. Architecture plays a very large part in how a CPU clocks, more than the node does.

 

With that default 140 W package power limit, on a high-utilization workload it may not be possible to get more than 4.3 GHz, because that is a sustained 140 W. The game clocks should be higher though, if it's only a power limit in Blender etc.

 

1 hour ago, CarlBar said:

I would also point out that GN's frequency plots for the cores on the 3900X show the same behaviour outlined in the Reddit post/article, which seems to lend credence to the claims.

Steve addressed that behavior as being the power limit.


16 minutes ago, leadeater said:

Why would such a small die be yielding mostly low-quality silicon? 7nm might be new, but they do make sure it's performing well before going into mass production. Architecture plays a very large part in how a CPU clocks, more than the node does.

 

With that default 140 W package power limit, on a high-utilization workload it may not be possible to get more than 4.3 GHz, because that is a sustained 140 W. The game clocks should be higher though, if it's only a power limit in Blender etc.

 

Steve addressed that behavior as being the power limit.

 

Remember, the chiplets at least are the same as those being used by the server CPUs, eventually TR3, and probably also their console offerings. If demand for silicon in other segments has been high enough, it's quite plausible that they're having to scrape the barrel, so to speak, for Zen 2. And yes, the architecture plays a role, but it's just a fact of life with silicon chips that better-quality silicon will clock higher. If architecture could impose such hard limits, LN2 wouldn't provide any benefit either, and from der8auer's limited testing so far, it does.

 

Also, Steve said nothing about power limits at all in the relevant section (13:42 to 14:05).


21 minutes ago, CarlBar said:

Remember, the chiplets at least are the same as those being used by the server CPUs, eventually TR3, and probably also their console offerings. If demand for silicon in other segments has been high enough, it's quite plausible that they're having to scrape the barrel, so to speak, for Zen 2.

I know they are the same, but that doesn't mean most are low quality, and there is a long-persisting misconception that EPYC requires highly binned dies when that is not the case. Most EPYC SKUs run at much lower clocks, and at those clocks you do not need a highly binned die; power efficiency isn't a problem at low clocks. A high-bin die at 2.8 GHz can be at the exact same vcore and power draw as a low-bin die; binning is much more important at the leading performance end, where you are further up the efficiency curve.

 

The products that actually require highly binned dies are the high-clock ones, and those are the 3900X, 3950X, and any TR parts to come. There will be some EPYC SKUs that need better-binned dies than the rest of the product stack, but other than that a functional die would fit almost any SKU across Ryzen, TR, and EPYC, bar those highly clocked ones.

 

We'll see how it plays out soon, when Silicon Lottery is done with their testing.
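To illustrate the efficiency-curve point, here's a toy sketch with made-up voltage/frequency numbers (these are not real Zen 2 figures, just an illustration of why binning matters most at high clocks): dynamic power scales roughly with V² x f, so two dies that need nearly the same voltage at low clocks can differ a lot once the worse die needs extra voltage near the top of the curve.

# Illustrative only: hypothetical V/f points for a good bin and a mediocre bin.
# Dynamic power ~ C * V^2 * f; the constant C cancels when comparing the two dies.
vf_good = {2.8: 0.90, 4.0: 1.10, 4.4: 1.25}   # GHz -> volts (made-up numbers)
vf_poor = {2.8: 0.92, 4.0: 1.18, 4.4: 1.40}   # worse bin needs more voltage up top

for freq, v_good in vf_good.items():
    extra = (vf_poor[freq] / v_good) ** 2 - 1
    print(f"{freq} GHz: mediocre die draws ~{extra:.0%} more power")
# ~4% extra at 2.8 GHz but ~25% extra at 4.4 GHz with these numbers, which is why
# low-clocked EPYC SKUs don't need the good dies while the 3900X/3950X do.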

 

21 minutes ago, CarlBar said:

Also, Steve said nothing about power limits at all in the relevant section (13:42 to 14:05).

12:59

Quote

In a Blender all-core workload the frequency averages about 4.087 GHz across all cores. If you are wondering why the frequency isn't hitting the advertised boost of 4.6 GHz, that's because the listed boost frequencies only apply in limited-thread load scenarios, which Blender is not.

 


22 minutes ago, CarlBar said:

If architecture could impose such hard limits, LN2 wouldn't provide any benefit either, and from der8auer's limited testing so far, it does.

Refer back to page 4, where I explained the relationship between XOC, temperature, and achievable clocks.

 

 


Just checked some stores in Australia; of the main ones I go to, only one is listing any of the 5700 cards, and every single one of them is a blower cooler. Can it only be a blower cooler, or has no one actually made a "real" cooler for it?

🌲🌲🌲

 

 

 

◒ ◒ 


2 minutes ago, Arika S said:

Just checked some stores in Australia; of the main ones I go to, only one is listing any of the 5700 cards, and every single one of them is a blower cooler. Can it only be a blower cooler, or has no one actually made a "real" cooler for it?

All blowers here too, only blowers on Asus's website too. Sad day indeed. Bring on the AIB cards.


3 hours ago, leadeater said:

If improved BIOSes come out that stabilize clocks a bit more, lower the vcore across the boost table, and get these products to more like 4.4-4.6 GHz all core, then great. However, I'm still dubious of this, just as it's currently dubious to expect a 9900K to achieve 5.0 GHz all core.

Much of the review info coming out shows 4.3 GHz all core; tbh I was hoping ~4.5 GHz was possible on the 3000 series. I haven't paid much attention to how boost clocks work on Ryzen, but it looks like buying an X part and leaving it at stock boost settings, rather than overclocking, might give better results for most use cases.

 

Guess I'll see in a few weeks what people are saying. 

Silent build - You know your pc is too loud when the deaf complain. Windows 98 gaming build, smells like beige


It would be nice if anyone would test against older generations. All these reviews show Ryzen 3000 is great and all, but how does my rather ancient 5820K at 4.5 GHz on all cores stack up against a 3900X, mostly in games? I can roughly compare it to the 3600X, but mine runs at a higher all-core clock. If I weigh in the IPC difference, maybe it's about as fast as a 3600X, with the same core count?

 

I was already super hyped, but from the looks of it, I wouldn't be gaining that much in games for something that would essentially be a 1000€ upgrade (CPU + mobo + RAM). And the same ultimately applies even to the 9900K: sure, there is an improvement, but it's not really a big enough difference to justify an upgrade to the 9900K either. My curiosity and wish to own a desktop Ryzen is huge and I might do something stupid, but I'll probably save the money for a graphics card upgrade and go with an AMD CPU sometime when the Ryzen 5000 series hits the market. At that point clocks, IPC, and core count should be high enough that the 5820K will actually be obsolete; at the moment I don't exactly feel that way. Bummer. Well, at least I got myself a laptop with a Ryzen 5 2500U, which is very nice...
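As a very rough sanity check of that 5820K vs 3600X comparison: per-core throughput scales roughly with clock x IPC. The ~15% IPC figure below is an assumption on my part for illustration, not a measured number.

# Crude per-core estimate: relative performance ~ clock * IPC.
clock_5820k = 4.5       # GHz, the all-core OC mentioned above
clock_ryzen = 4.3       # GHz, roughly the all-core clocks reviews are showing
ipc_uplift  = 1.15      # assumed Zen 2 vs Haswell-E IPC ratio (illustration only)

ratio = (clock_ryzen * ipc_uplift) / clock_5820k
print(f"3600X vs 5820K @ 4.5 GHz, per core: ~{ratio:.2f}x")
# ~1.10x with these assumptions; with the same core count that's a modest gain,
# which lines up with the doubt about a ~1000 euro platform swap being worth it.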


1 hour ago, leadeater said:

I know they are the same, but that doesn't mean most are low quality, and there is a long-persisting misconception that EPYC requires highly binned dies when that is not the case. Most EPYC SKUs run at much lower clocks, and at those clocks you do not need a highly binned die; power efficiency isn't a problem at low clocks. A high-bin die at 2.8 GHz can be at the exact same vcore and power draw as a low-bin die; binning is much more important at the leading performance end, where you are further up the efficiency curve.

 

The products that actually require highly binned dies are the high-clock ones, and those are the 3900X, 3950X, and any TR parts to come. There will be some EPYC SKUs that need better-binned dies than the rest of the product stack, but other than that a functional die would fit almost any SKU across Ryzen, TR, and EPYC, bar those highly clocked ones.

 

We'll see how it plays out soon, when Silicon Lottery is done with their testing.

 

12:59

 

 

 

Erm, coming at this a bit in reverse order.

 

1. And the bit you quoted from 12:59 is relevant to the Reddit article how? The Reddit article is explicitly about how it boosts to 4.6 GHz in workloads where it can do that, i.e. lightly threaded ones. So how and why it boosts under a heavy all-core load isn't especially relevant to that discussion.

 

2. What you're saying about binning contradicts literally everything I've seen everyone else ever say on the subject...

 

3. This isn't just about how they OC but about the stock power draw. If you go back to the chart from GN's 3600 video, we're seeing stock draws of:

 

3600: 79.2 W

3700: 87 W

3900: 147.6 W

 

From the 3600 to the 3700 the jump is waaaay lower than it should be (same base clock, 33% more cores/threads, and a 4.7% higher boost clock, but only a 9.8% higher power draw).

 

Whilst the jump from the 3700 to the 3900 displays weird behaviour too (69% more power for 50% more cores/threads, a 5.5% higher base clock, and a 4.5% higher boost clock).

 

The 3600 may be the worst of the three, but the 3900 doesn't line up with it or the 3700 either. The situation doesn't change much if you compare OC to OC values: at that point they're all running at the same clocks, but the 3600 way underperforms the other two, whilst the 1.35 V 3700 is noticeably beating the 1.34 V 3900 (66% more power draw for 50% more cores at the same frequency).
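For reference, here is the same arithmetic spelled out from the stock figures quoted above (a quick sketch; the core counts follow from the 33%/50% jumps described in the post):

# Re-deriving the percentage jumps from the GN stock power figures quoted above.
power_w = {"3600": 79.2, "3700": 87.0, "3900": 147.6}   # watts, as listed above
cores   = {"3600": 6,    "3700": 8,    "3900": 12}

def pct_jump(a, b, table):
    return table[b] / table[a] - 1

for a, b in [("3600", "3700"), ("3700", "3900")]:
    print(f"{a} -> {b}: {pct_jump(a, b, cores):.0%} more cores, "
          f"{pct_jump(a, b, power_w):.1%} more power")
# -> 33% more cores for 9.8% more power, then 50% more cores for 69.7% more power,
#    which is the mismatch being pointed out above.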


7 hours ago, leadeater said:

The bandwidth through to memory is slightly lower for read and write, but neither is greatly different, so even with the larger FPU load/store and bit width it sounds like Intel may still be faster for you. Intel has 20% higher write bandwidth though; you can see that in the BW graphs at the 64 MB end, where Intel is just slightly higher.

In my use cases there are two requirements:

1, a lot of FP performance in cores, which should be good in Zen 2 with the upgraded FP units

2a, enough RAM bandwidth to feed the cores

2b, enough cache to not need RAM bandwidth

 

2a is the historic condition. 2b likely applies to Zen 2 with the enlarged L3 cache. I know users on another forum who do bigger work, and I fear neither case will apply to them given the design choices.
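A minimal sketch of that 2a/2b check, assuming placeholder workload numbers (working-set size, bytes moved per FLOP, and throughput target are all values you would substitute from your own use case; 32 MB is the L3 on a single Zen 2 chiplet):

# Decide which regime a workload is in:
#   2b: working set fits in L3 -> RAM bandwidth is mostly irrelevant
#   2a: it doesn't fit -> estimate the bandwidth the cores will demand
L3_BYTES = 32 * 2**20   # 32 MB L3 per Zen 2 chiplet

def ram_bandwidth_needed(working_set_bytes, bytes_per_flop, gflops_target):
    """Returns (fits_in_l3, required_GB_per_s); all inputs are workload-specific guesses."""
    fits = working_set_bytes <= L3_BYTES
    return fits, 0.0 if fits else gflops_target * bytes_per_flop

# Placeholder example: 200 MB working set, 0.5 bytes touched per FLOP, 300 GFLOP/s target
fits, bw = ram_bandwidth_needed(200 * 2**20, 0.5, 300)
print(fits, f"-> needs ~{bw:.0f} GB/s from RAM")   # False -> ~150 GB/s, so case 2a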

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Well, I am not upset; it seems the 3600 at A$315 equals the 9600 (stock) in gaming performance for $75 less, and beats it in workloads. Given I only game at 1080p, it looks like my next CPU will be the 3600.

 

It would have been nice to see Zen 2 throw Intel off the top of the charts more often, but they certainly have enough wins to be considered a crown product.

 

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


25 minutes ago, mr moose said:

It would have been nice to see Zen 2 throw Intel off the top of the charts more often, but they certainly have enough wins to be considered a crown product.

To be honest, even this showing is quite strong for me.

 

Did AMD kick the pants off Intel? Well, not entirely, but it made further gains in its strongest area and also narrowed the gap with the equivalent Intel competition.

 

All in all, I think a price drop from Intel might be on the cards, given that the only pedestals they can really stand on right now are clock speeds (plus overclocking) and raw gaming performance (and even that is becoming less significant).

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1

