I had given up on AMD… until today - Ryzen 9 3900X & Ryzen 7 3700X Review

AMD is launching ALL of their third-gen Ryzen CPUs today – We’ve got our hands on two of the best to see if they really can take the fight to Intel’s doorstep: Gaming.

 

 

Buy a Ryzen 9 3900X:
On Amazon: TBD
On Newegg: https://lmg.gg/8KV5T

 

Buy a Ryzen 7 3700X:
On Amazon: TBD
On Newegg: https://lmg.gg/8KV5j

 

Buy an ASUS Crosshair VIII Hero (Wi-Fi)
On Amazon: TBD
On Newegg: https://lmg.gg/8KV5N

Emily @ LINUS MEDIA GROUP                                  

congratulations on breaking absolutely zero stereotypes - @cs_deathmatch

Holy shit. This is downright amazing. Can't wait for WAN show!

I once gave Luke and Linus pizza.

Proud member of the ITX club.

**SCRAPYARD WARS!!!!**

#BringBackLuke

I can't wait for R5 3600 reviews.

CPU: Ryzen 9 5900 Cooler: EVGA CLC280 Motherboard: Gigabyte B550i Pro AX RAM: Kingston HyperX 32GB 3200MHz

Storage: WD 750 SE 500GB, WD 730 SE 1TB GPU: EVGA RTX 3070 Ti PSU: Corsair SF750 Case: Streacom DA2

Monitor: LG 27GL83B Mouse: Razer Basilisk V2 Keyboard: G.Skill KM780 Cherry MX Red Speakers: Mackie CR5BT

 

MiniPC - Sold for $100 Profit

Spoiler

CPU: Intel i3 4160 Cooler: Integrated Motherboard: Integrated

RAM: G.Skill RipJaws 16GB DDR3 Storage: Transcend MSA370 128GB GPU: Intel 4400 Graphics

PSU: Integrated Case: Shuttle XPC Slim

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

Budget Rig 1 - Sold For $750 Profit

Spoiler

CPU: Intel i5 7600k Cooler: CryOrig H7 Motherboard: MSI Z270 M5

RAM: Crucial LPX 16GB DDR4 Storage: Intel S3510 800GB GPU: Nvidia GTX 980

PSU: Corsair CX650M Case: EVGA DG73

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

OG Gaming Rig - Gone

Spoiler

 

CPU: Intel i5 4690k Cooler: Corsair H100i V2 Motherboard: MSI Z97i AC ITX

RAM: Crucial Ballistix 16GB DDR3 Storage: Kingston Fury 240GB GPU: Asus Strix GTX 970

PSU: Thermaltake TR2 Case: Phanteks Enthoo Evolv ITX

Monitor: Dell P2214H x2 Mouse: Logitech MX Master Keyboard: G.Skill KM780 Cherry MX Red

 

 

It's been a while since I heard someone use "closelier" lol

mY sYsTeM iS Not pErfoRmInG aS gOOd As I sAW oN yOuTuBe. WhA t IS a GoOd FaN CuRVe??!!? wHat aRe tEh GoOd OvERclok SeTTinGS FoR My CaRd??  HoW CaN I foRcE my GpU to uSe 1o0%? BuT WiLL i HaVE Bo0tllEnEcKs? RyZEN dOeS NoT peRfORm BetTer wItH HiGhER sPEED RaM!!dId i WiN teH SiLiCON LotTerrYyOu ShoUlD dEsHrOuD uR GPUmy SYstEm iS UNDerPerforMiNg iN WarzONEcan mY Pc Run WiNdOwS 11 ?woUld BaKInG MY GRaPHics card fIX it? MultimETeR TeSTiNG!! aMd'S GpU DrIvErS aRe as goOD aS NviDia's YOU SHoUlD oVERCloCk yOUR ramS To 5000C18

 

Am I just being blind, or is there no core OC test?

CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: HP EX920 1TB PCIe x4 M.2 SSD + Crucial MX500 1TB 2.5" SATA SSD, 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally, 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172) left outside Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread: 168 Multi-thread: 833

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s

Is the new Ryzen Master utility available for Ryzen 2000? And if so, where can I find it?
Thanks for the help

9 minutes ago, Jurrunio said:

Am I just being blind, or is there no core OC test?

There isn't one - yet. As Linus mentioned in the vid, we're planning a full OC video as a follow-up.

Emily @ LINUS MEDIA GROUP                                  

congratulations on breaking absolutely zero stereotypes - @cs_deathmatch

12 minutes ago, Jurrunio said:

Am I just being blind, or is there no core OC test?

Nah, I don't think they ever mentioned OC'ing Ryzen 3000 for testing, except Precision Boost.

Cool vid, liked the intro gag with Anthony...

Switching back and forth to Linus doing voiceover was annoying though, just because of the difference in the audio and how he sounded. I'm guessing the video was a bit rushed and there was a lot that needed to be added after scripting/filming?

CPU: Intel i7 6700k  | Motherboard: Gigabyte Z170x Gaming 5 | RAM: 2x16GB 3000MHz Corsair Vengeance LPX | GPU: Gigabyte Aorus GTX 1080ti | PSU: Corsair RM750x (2018) | Case: BeQuiet SilentBase 800 | Cooler: Arctic Freezer 34 eSports | SSD: Samsung 970 Evo 500GB + Samsung 840 500GB + Crucial MX500 2TB | Monitor: Acer Predator XB271HU + Samsung BX2450

1 minute ago, Spotty said:

Cool vid, liked the intro gag with Anthony...

Switching back and forth to Linus doing voiceover was annoying though, just because of the difference in the audio and how he sounded. I'm guessing the video was a bit rushed and there was a lot that needed to be added after scripting/filming?

Without getting into too much detail, there was a last-minute firmware update that changed our performance numbers... On Friday afternoon. Yeah. The team put in a lot of extra work to get this one done on time.

Emily @ LINUS MEDIA GROUP                                  

congratulations on breaking absolutely zero stereotypes - @cs_deathmatch

Does anybody know whether they used Windows 10 version 1903, which includes scheduler improvements for Zen? If they didn't, then the story is just getting better and better

Computer Science student proficient in OSX, Linux and Windows

(Dell Inspiron 7570): i5-8550u | GeForce 940MX | 8GB RAM | 128GB SSD | 15.6" 1920x1080 screen | Windows 10

4 minutes ago, DamnTarget said:

Does anybody know whether they used Windows 10 version 1903, which includes scheduler improvements for Zen? If they didn't, then the story is just getting better and better

We did use the latest updates as of July 1, and we used AMD's chipset drivers, so it should have the Zen scheduler update. But I don't think that takes the CCDs into account, which IMO is likely why gaming performance isn't what it should be right now. The Battlefield OBS test clued me into that.

Emily @ LINUS MEDIA GROUP                                  

congratulations on breaking absolutely zero stereotypes - @cs_deathmatch

I watched it to check whether I should buy 3rd-gen Ryzen or 2nd-gen Threadripper.

But the issue is that you benchmarked the new ones with R20 and the "old" ones with R15.

I found a site, "cpu monkey", where someone has already benchmarked the old ones with R20.

Your Result:

Ryzen 7 2700X - 235/4029

Ryzen 7 3700X - 502/4875

Ryzen 9 3900X - 516/7253

Their result:

Ryzen 5 3600 - 478/3509

Threadripper 1920X - 406/5326

Threadripper 1950X - 411/6731

Threadripper 2920X - 439/5843

Threadripper 2950X - 449/7003

Threadripper 2990WX - 398/11463

1 minute ago, zENjA said:

I watched it to check whether I should buy 3rd-gen Ryzen or 2nd-gen Threadripper.

But the issue is that you benchmarked the new ones with R20 and the "old" ones with R15.

I found a site, "cpu monkey", where someone has already benchmarked the old ones with R20.

Your Result:

Ryzen 7 2700X - 235/4029

Ryzen 7 3700X - 502/4875

Ryzen 9 3900X - 516/7253

Their result:

Ryzen 5 3600 - 478/3509

Threadripper 1920X - 406/5326

Threadripper 1950X - 411/6731

Threadripper 2920X - 439/5843

Threadripper 2950X - 449/7003

Threadripper 2990WX - 398/11463

I hadn't yet done the comparison to Threadripper but hoo boy 3900X >= 2950X is a thing to behold. TR still has all those PCI-E lanes though.
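To make the comparison easier to eyeball, here is a quick tabulation of the R20 numbers quoted in the posts above, relative to the 3900X (the scores are copied verbatim from this thread, not re-measured):

```python
# Cinebench R20 (single, multi) scores as quoted in the posts above
scores = {
    "Ryzen 9 3900X":       (516, 7253),
    "Ryzen 7 3700X":       (502, 4875),
    "Threadripper 2950X":  (449, 7003),
    "Threadripper 2990WX": (398, 11463),
}

ref_single, ref_multi = scores["Ryzen 9 3900X"]
for name, (single, multi) in scores.items():
    # Express each chip's score as a fraction of the 3900X's result
    print(f"{name:22s} single {single / ref_single:5.0%}  multi {multi / ref_multi:5.0%}")
```

The multi-thread column is where the 3900X >= 2950X result jumps out; only the 2990WX stays clearly ahead.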

Emily @ LINUS MEDIA GROUP                                  

congratulations on breaking absolutely zero stereotypes - @cs_deathmatch

Oh boy would I love to build a PC using Ryzen 3rd gen

@GabenJr Yes, that was my thought too, and yes, for a server/workstation system with lots of cards.

But some storage and networking tasks, like 200G/2x100G cards such as the ConnectX-5 (or newer), could take advantage of PCIe Gen 4 by going from x16 down to x8 or x4, leaving space for four GPUs in a compute node.

Ryzen 9 - 16x PCIe Gen 4

Threadripper 2990WX - 64x PCIe Gen 3

Epyc - 128x PCIe Gen 3

I had a Dell R7425 with one EPYC 7401, one GPU + Fibre Channel + 100G Ethernet + PCIe NVMe SSDs, on loan from them for Hannover Messe. It was awesome to see all the cards performing at full speed without lane sharing. OK, Cinebench R15 was crap on it, because I could only run Windows 7 in a VM due to platform support... 110/3343 (ESXi 6.0 failed to boost right)

When will there be reviews for the 3800X?

Specs: Motherboard: Asus X470-PLUS TUF gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200Mhz CL16-18-18-36 2x8GB

            CPU: Ryzen 9 5900X          Case: Antec P8     PSU: Corsair RM850x                        Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM

            Drives: Samsung 970 EVO plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 ti Black edition

17 minutes ago, williamcll said:

When will there be reviews for the 3800X?

As far as I know, AMD didn't send any of those out to reviewers, so they basically have to buy them themselves (which means they couldn't get them before launch) and then test them.

1 hour ago, williamcll said:

When will there be reviews for the 3800X?

When Wendell gets back from Microcenter with a 3800X.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.

This is great!

AMD destroyed Intel in basically everything but gaming.

Let's see how long that'll last; after all, it's only 6-7 months until Ice Lake is supposed to be released.

 

 

 

 

These numbers are hard to argue with.
AMD has truly made some radical improvements.


It feels like I've barely scratched the surface of the changes AMD has made, and the potential improvements among other things, but this post is already gigantic...
 

So AMD splitting the CPU into 3 chips is honestly not a dumb idea. It does increase tooling costs, but at the same time the IO chip can likely be used across a slew of different SKUs, so it will practically pay for itself through improved yield alone. (I.e., if a chip has an IO problem, one only throws out a broken IO chip, not a whole CPU.)
The same thing applies to splitting the cores onto two chips. (IIRC, Intel used to split their CPU into two symmetrical chips back in the day, for that very reason.) This would give AMD an advantage in terms of manufacturing cost, and thereby better performance for the price.


The next advantage of splitting the CPU into three chips is that the thermal density of the CPU gets lower. The three chips have spaces between them, spreading the heat over a larger area under the IHS, which should in turn make the CPU easier to cool. Though that is theoretically speaking; how much it actually matters in practice is a different question. The effect might be next to negligible, or it could be huge. (I haven't tested.)
 

Though, the added latency when communicating between the chips will have some downsides. The same thing applies to multi-socket systems, where the effect is typically even more noticeable, not only due to added latency but also due to lower bandwidth. (There, though, most OSes have a somewhat better understanding and don't just randomly toss threads onto cores. OSes could likewise implement support for core groups within a CPU, though then the CPU needs some way of telling the OS which core belongs to which group. Most CPUs can already give the OS information about which cores share L2, among other things, so the information is there; the OS just needs to use it.)
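On Linux that information really is already exposed. A minimal sketch (assuming a standard Linux sysfs layout; the function name and grouping approach are my own) that groups logical CPUs by which L3 they share, which is exactly the data a scheduler could use to keep threads inside one core group:

```python
import glob
import os

def cores_by_shared_cache(level=3, sysfs="/sys/devices/system/cpu"):
    """Group logical CPUs by which cache they share at the given level,
    using the topology the Linux kernel exposes under sysfs."""
    groups = set()
    for index_dir in glob.glob(os.path.join(sysfs, "cpu[0-9]*", "cache", "index[0-9]*")):
        with open(os.path.join(index_dir, "level")) as f:
            if int(f.read()) != level:
                continue  # only look at the requested cache level
        with open(os.path.join(index_dir, "shared_cpu_list")) as f:
            groups.add(f.read().strip())  # one entry per distinct sharing set
    return sorted(groups)

if __name__ == "__main__":
    # On a Ryzen 3000 part this should print one line per L3 slice (CCX)
    for cpu_list in cores_by_shared_cache(level=3):
        print("L3 shared by CPUs:", cpu_list)
```

The same `shared_cpu_list` files exist for L1 and L2 as well, so the grouping granularity is a one-argument change.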

Then I do have to wonder: did AMD implement their up to 72 MB of L3 ("game cache") as a way of circumventing memory bandwidth problems? (The AM4 socket only has 2 DDR4 memory channels, after all. With enough cores and high enough clock speeds, this will become a bottleneck.)
Okay, adding more cache does work to a degree, but the returns diminish rapidly, and applications dealing with larger datasets likely won't benefit from that cache, since they only access each piece of data for a very short time every few billion or more cycles. (Not a typical workload for the everyday user, though. So as long as one doesn't work with huge datasets, AMD has a good offer. And if one does work with huge datasets, one goes to AMD and buys an EPYC with 8 memory channels instead... Intel's LGA 3647 platform only has 6 memory channels, unless one needs an in-hardware AES encryption accelerator.)

 

Then we should also take the cost of SRAM into consideration. (72 MB of single-access SRAM needs roughly 2.4 billion transistors, not including any needed glue logic; dual-access SRAM needs roughly 3.6 billion. Then there is the actual caching system, address translation tables, etc. It is a lot of transistors, though modern CPUs are typically 60+% cache.) The up-to-silly amount of L3 ("game cache") is likely going to be advantageous in some applications, though it likely isn't cheap to manufacture. But if AMD uses it to space out the cores on the chip, they could in theory lower the power density at the chip level as well, making cooling easier, again on paper. (But this is likely not going to be all that noticeable...) Though, I am going to stop talking about cache, because I haven't even scratched the surface yet... (I could go into, among other things, speculative prefetching due to branches in execution, and how its efficiency affects the effective memory bandwidth of a system under a given workload. This "efficiency" is, on paper, not too hard to measure.)
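Those transistor figures are easy to sanity-check. A back-of-envelope sketch, assuming 4 transistors per bit for the single-access figure and 6 per bit for the dual-access one (the cell sizes the numbers above imply), and deliberately ignoring tags and glue logic:

```python
def sram_transistors(size_mb, transistors_per_bit):
    """Rough transistor count for a bare SRAM array: bits times cell size.
    Tags, decoders, and glue logic are deliberately ignored."""
    bits = size_mb * 1024 * 1024 * 8
    return bits * transistors_per_bit

GAME_CACHE_MB = 72  # total L3 "game cache" across a 3900X, per the post

print(f"single-access (4T/bit): {sram_transistors(GAME_CACHE_MB, 4) / 1e9:.1f} billion")
print(f"dual-access   (6T/bit): {sram_transistors(GAME_CACHE_MB, 6) / 1e9:.1f} billion")
# prints 2.4 and 3.6 billion, matching the estimates above
```

So the estimates hold up to the stated assumptions; with the more common 6T single-port cell the array alone would already be ~3.6 billion transistors.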

In regards to the chipset fan, I would suspect that a decently large heatsink with sufficient fins would actually be able to keep it cool. The downside is that it would likely need a bit more machining work than a simple stick-on heatsink and a fan holder. After all, it is just 14 watts; that is a bit on the high side, but nothing that a cheap aluminium extrusion couldn't handle. (Not to mention that a large block of metal would likely have enough thermal mass to handle bursts of activity. The chip doesn't consume 14 watts constantly, only at peak, and even a small block of aluminium has a lot of thermal mass...) Maybe a project @AlexTheGreatish could dive into?

Hope this text wall is informative to those willing to read through it...

I'm very interested to know if AMD has figured out a way to contend with NUMA latency that has plagued them in previous generations.

28 minutes ago, B4DC45U4L said:

I'm very interested to know if AMD has figured out a way to contend with NUMA latency that has plagued them in previous generations.

To a large degree, the latency between NUMA nodes comes down to a few factors.

  • The bus clock speed, i.e. how many times a second one sends data. If one only does it at 0.5 GHz, then we can expect up to 2 ns right there.
  • How many clock cycles are needed to send the data. More = more latency, but a more parallel bus typically fixes this issue (or a stupendously high bit rate).
  • Transferring data between clock domains. As a very rough rule of thumb, this adds 1-2 clock cycles of latency at the slower clock. If the clock domains are phase-locked with an integer ratio, one can transfer without this latency, though with the downside that all the resources of interest are tied together in terms of clock speed...

But since AMD has doubled the bit rate of their Infinity Fabric, the latency should shrink, both because data packets can be sent twice as frequently and because the latency associated with buffering data over a clock domain boundary is reduced. (Whether this latency halved as well is a better question: if the Infinity Fabric clock is now faster than the core clock, then the core adds the limiting latency instead. Still, it is an improvement.)
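To put rough numbers on those factors, here is a small sketch; the cycle counts are illustrative assumptions of mine, not measured Infinity Fabric parameters:

```python
def crossing_latency_ns(bus_ghz, transfer_cycles, crossing_cycles=2):
    """Worst-case one-way latency over a link: wait up to one bus period
    for the next clock edge, spend transfer_cycles sending the data,
    then pay a clock-domain-crossing penalty at the (slower) bus clock."""
    period_ns = 1.0 / bus_ghz
    return (1 + transfer_cycles + crossing_cycles) * period_ns

# At 0.5 GHz, just waiting for the next edge can cost up to 2 ns,
# as noted above; doubling the effective clock halves every term.
print(f"0.5 GHz link: {crossing_latency_ns(0.5, transfer_cycles=4):.0f} ns")
print(f"1.0 GHz link: {crossing_latency_ns(1.0, transfer_cycles=4):.0f} ns")
```

Every term in the model scales with the bus period, which is why doubling the fabric's bit rate attacks all three factors at once.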

Please do a full suite of memory tests, 3200-4400 (XMP), and see the difference they make in gaming.

Would be interesting to see how 4400MHz at 1:2 IF performs against 3600 or 3733MHz at 1:1.
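For reference, the fabric clock each of those settings implies, assuming the usual DDR4 convention that the transfer rate is twice the memory clock, and that 1:2 means FCLK runs at half of MCLK:

```python
def fclk_mhz(ddr_rate, mclk_per_fclk):
    """Infinity Fabric clock for a given DDR4 transfer rate and MCLK:FCLK ratio."""
    mclk = ddr_rate / 2  # DDR transfers twice per memory clock
    return mclk / mclk_per_fclk

print(f"DDR4-3600 at 1:1 -> FCLK {fclk_mhz(3600, 1):.1f} MHz")
print(f"DDR4-3733 at 1:1 -> FCLK {fclk_mhz(3733, 1):.1f} MHz")
print(f"DDR4-4400 at 1:2 -> FCLK {fclk_mhz(4400, 2):.1f} MHz")
```

So DDR4-4400 at 1:2 drops the fabric to 1100 MHz, well under the 1800 MHz that DDR4-3600 at 1:1 gives, which is exactly why the comparison is interesting.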

 

Also @GabenJr, why are no reviewers commenting on the audio levels of the X570 chipset fan? Can you load up some PCIe SSDs and see how much it screams, please?
 

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 
