Navi/Ryzen 3000 launch Megathread

LukeSavenije
13 minutes ago, leadeater said:

Yeah, I get that, but unless you are right on that very low edge the efficiency comes up very quickly. You're right that 70A per phase is very high, and at 140W that means only ~10% utilization on a 14-phase design, which would easily be around that efficiency cliff. That itself could be a reason not to buy a massive-VRM board if only going with a lower SKU; I wonder how that affects reviews too (total system draw comparisons)

Spoiler

[Attached image: graph_TDA21470_IR35411.JPG]

This is for 5 phases, so it's about 20A per phase, or ~35% usage for a 70A stage. Though I do see the minimum efficiency for these stages is VERY high; it doesn't even touch 80%, so hmmm. Possibly related to the 1.82V, you would never run that IRL.
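The per-phase arithmetic in these posts can be sketched quickly. A minimal example, assuming ~1.3V core voltage and ideal current sharing across phases (both are simplifying assumptions, not datasheet figures):

```python
# Estimate per-phase current and utilization for a multi-phase VRM.
# Assumes ideal current sharing across phases and a fixed core voltage;
# both are simplifications for illustration only.

def phase_utilization(cpu_watts, vcore, phases, phase_rating_amps):
    total_amps = cpu_watts / vcore           # I = P / V
    per_phase = total_amps / phases          # ideal sharing across phases
    return per_phase, per_phase / phase_rating_amps

# 140 W spread over 14 phases of 70 A stages at ~1.3 V
amps, util = phase_utilization(140, 1.3, 14, 70)
print(f"{amps:.1f} A per phase, {util:.0%} of stage rating")
```

At roughly 8A per phase on a 70A stage you land near 11% utilization, right at the low-load edge of the efficiency curve being discussed.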

MOAR COARS: 5GHz "Confirmed" Black Edition™ The Build
AMD 5950X 4.7/4.6GHz All Core Dynamic OC + 1900MHz FCLK | 5GHz+ PBO | ASUS X570 Dark Hero | 32 GB 3800MHz 14-15-15-30-48-1T GDM 8GBx4 |  PowerColor AMD Radeon 6900 XT Liquid Devil @ 2700MHz Core + 2130MHz Mem | 2x 480mm Rad | 8x Blacknoise Noiseblocker NB-eLoop B12-PS Black Edition 120mm PWM | Thermaltake Core P5 TG Ti + Additional 3D Printed Rad Mount

 

Wanna know why blower coolers suck?

 

They "blow"...........I'm so terrible at this

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1

3 minutes ago, S w a t s o n said:

This is for 5 phases, so it's about 20A per phase, or ~35% usage for a 70A stage. Though I do see the minimum efficiency for these stages is VERY high; it doesn't even touch 70%, so hm. Possibly related to the 1.82V, you would never run that IRL.

I had a look at the IR3555 datasheet and it didn't have any graphs at all. This is a general trait of semiconductors: they go from non-conducting to conducting very quickly. How that relates to efficiency I'm not totally sure, but the lower range is expected to have those extremes because they are semiconductors.

 

Doesn't really matter; you have to be at the extreme low end of current draw to care, and the Gigabyte X570 Extreme, for example, is all smart power stages with current balancing and will turn off phases when not needed. Using all smart power stages seems like a good, if expensive, trade-off: you pack in a super high-end VRM design to keep people happy that it's good, while not kneecapping your efficiency by giving in to demands for a high-end VRM from those who may not realize that is typically not a good idea. It is something I am going to look at; I will probably only buy an all-smart-power-stage board, because I don't want a CPU that draws 35W at idle while still burning 70W-80W.
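The phase-shedding argument can be illustrated with a toy loss model: each active phase pays a roughly fixed switching loss, plus I²R conduction loss. The loss constants below are invented purely for illustration, not taken from any board or datasheet:

```python
# Toy VRM efficiency model: fixed switching loss per active phase
# plus I^2*R conduction loss per phase. The constants are made up to
# show the shape of the trade-off, not measured values.

def efficiency(p_out_w, vcore, active_phases,
               sw_loss_w=0.6, r_phase_ohm=0.0015):
    i_phase = (p_out_w / vcore) / active_phases        # current per phase
    p_cond = active_phases * i_phase**2 * r_phase_ohm  # conduction loss
    p_sw = active_phases * sw_loss_w                   # switching loss
    return p_out_w / (p_out_w + p_cond + p_sw)

# A ~10 W idle load: all 14 phases active vs shed down to 2
print(f"14 phases: {efficiency(10, 1.0, 14):.1%}")
print(f" 2 phases: {efficiency(10, 1.0, 2):.1%}")
```

At idle the fixed per-phase switching losses dominate, so shedding phases recovers most of the efficiency; at high load the conduction term takes over and more phases win.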

1 minute ago, leadeater said:

I had a look at the IR3555 datasheet and it didn't have any graphs at all. This is a general trait of semiconductors: they go from non-conducting to conducting very quickly. How that relates to efficiency I'm not totally sure, but the lower range is expected to have those extremes because they are semiconductors.

 

Doesn't really matter; you have to be at the extreme low end of current draw to care, and the Gigabyte X570 Extreme, for example, is all smart power stages with current balancing and will turn off phases when not needed. Using all smart power stages seems like a good, if expensive, trade-off: you pack in a super high-end VRM design to keep people happy that it's good, while not kneecapping your efficiency by giving in to demands for a high-end VRM from those who may not realize that is typically not a good idea. It is something I am going to look at; I will probably only buy an all-smart-power-stage board, because I don't want a CPU that draws 35W at idle while still burning 70W-80W.

I mean, I just linked you a 70A power stage. The peak is further down, but the sweet spot may as well be the lower 60%, so in the end you are right that it's more efficient than I expected.

AMD Radeon VII recent buyers have to be pissed.   Wowww

AMD 1700X// Gigabyte Aorus Gaming 5 // Geil DDR4 3200 // Gigabyte Aorus 1080//  Samsung 128GB 840 Pro SSD(OS)// Seagate Barracuda 1TB(games&docs)//

 

Cool Master Stacker 935//  LG Ultrawide 34UM95//  XSPC 360 Liquid Cooling//

 

 

Corsair K70 Mechanical Keyboard//  Corsair M65RGB//  Xonar DX//  Audio Technica ATH A900X// Blue Snowball Mic(Tandy POP filter)  

 

Just now, S w a t s o n said:

I mean, I just linked you a 70A power stage. The peak is further down, but the sweet spot may as well be the lower 60%, so in the end you are right that it's more efficient than I expected.

I was still typing that and checking the Gigabyte video; not one to waste what I already typed. Plus that graph resolution is shit if you only care about that lower bit, and why would you normally?

1 hour ago, Drak3 said:

 

Love the fact that the guy moving goal posts liked this, thinking I was talking about Lead moving the post.

I just want to clarify I got the joke you were making, big oof
 

1 hour ago, S w a t s o n said:

>Goalposts moved themselves I guess

 

I'm blown away by just how much better AMD's lineup is compared to Intel's. VASTLY better in professional work, and it averages out to be around the same in gaming. There won't be a reason to buy any Intel chip for the next few months/years.

1 hour ago, Deedot78 said:

AMD Radeon VII recent buyers have to be pissed.   Wowww

Unlikely. The Radeon VII is mostly being bought for mining.

1 hour ago, Deedot78 said:

AMD Radeon VII recent buyers have to be pissed.   Wowww

Same applies to people who bought RTX 2070 recently...

But will it X370?

Resident Mozilla Shill.   Typed on my Ortholinear JJ40 custom keyboard
               __     I am the ASCIIDino.
              / _)
     _.----._/ /      If you can see me you 
    /         /       must put me in your 
 __/ (  | (  |        signature for 24 hours.
/__.-'|_|--|_|        
Haven't read every single post in the thread, but found two parts of interest in other reviews.

 

One is that the IF link, viewed from the core chiplet, only has half the write bandwidth compared to reads. This limits aggregate write bandwidth for single-chiplet models, and implicitly would also impact chiplet-to-chiplet transfers on 2-chiplet models. Probably not a problem for consumer workloads, but it will impact some compute cases. AMD specifically said this was an optimisation for "client" workloads, so it does raise the question of whether the server models have the full bandwidth in both directions as their charts showed.

 

The other part is clarification of the IF decoupling: since they only ever talked about it as a ratio, it made it sound like you lost half your IF bandwidth when it kicked in. Fortunately this is NOT the case, as the IF is fixed at 1800 MHz (unless you adjust it manually). Instead it acts as a soft ceiling on bandwidth, with the IF saturating only if RAM goes fast enough. That wouldn't happen when run synchronously, since RAM doesn't sustain its peak performance and averages somewhat lower.
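The bandwidth numbers behind both points can be back-of-enveloped. Assuming the widely reported link widths of 32 B/cycle read and 16 B/cycle write per chiplet at fclk (treat those widths as reported figures, not verified here):

```python
# Back-of-envelope Infinity Fabric vs DRAM bandwidth for Zen 2.
# Link widths (32 B/cycle read, 16 B/cycle write per chiplet) are the
# widely reported figures; this is a sketch, not a spec.

FCLK_MHZ = 1800  # stock fclk, as noted above

def link_gbs(bytes_per_cycle, fclk_mhz=FCLK_MHZ):
    return bytes_per_cycle * fclk_mhz * 1e6 / 1e9

read_bw = link_gbs(32)    # ~57.6 GB/s per chiplet link
write_bw = link_gbs(16)   # ~28.8 GB/s, half of read

# Dual-channel DDR4-3600 theoretical peak: 2 channels x 8 B x 3600 MT/s
dram_bw = 2 * 8 * 3600e6 / 1e9  # ~57.6 GB/s

print(read_bw, write_bw, dram_bw)
```

Read bandwidth per link roughly matches dual-channel DDR4-3600's theoretical peak, which is why the soft ceiling only bites once RAM runs well past synchronous speeds; the halved write path is the single-chiplet write limit described above.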

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible

15 minutes ago, Bruman said:

Regarding AMD lowering prices in response to NVIDIA SUPER cards.

"AMDs playing 4D autochess" - TechLinked

"This wasn't some 5D chess mental gymnastics, this was a requirement." - GamersNexus

 

Oooo shots fired

 

To be honest, it looked fairly obvious from the outset. It'sjustbusiness needs to be a hashtag.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

Why aren't there 3800X reviews available anywhere? Since the CPUs AMD launched seem to run at the best possible clocks out of the box, I would like to know if there is anything to suggest better binning on the 3800X compared to the 3700X.

Also, do you think it's worth going for the X570 chipset over B450? I already ordered two Micron E-die sticks (2x16GB Ballistix 3200 CL16) and a big tower cooler, so I just need to decide what to go for. Do you guys think X570 is more "future proof"?

1 minute ago, Billy_Wellington said:

Why aren't there 3800X reviews available anywhere? Since the CPUs AMD launched seem to run at the best possible clocks out of the box, I would like to know if there is anything to suggest better binning on the 3800X compared to the 3700X.

Also, do you think it's worth going for the X570 chipset over B450? I already ordered two Micron E-die sticks (2x16GB Ballistix 3200 CL16) and a big tower cooler, so I just need to decide what to go for. Do you guys think X570 is more "future proof"?

Seems like AMD sent reviewers the 3700X and 3900X as representatives of two big price points for the lineup. I'm guessing we'll see more of the line in the coming weeks, as we usually do.

1 minute ago, mr moose said:

 

To be honest, it looked fairly obvious from the outset. It'sjustbusiness needs to be a hashtag.

Yeah, it wasn't any chess match or AMD trying to trick Nvidia; it's just AMD realizing, after seeing the Super cards, that they needed to lower prices to remain competitive. Even with the price reduction I'd guess AMD is still making a decent margin, as the 7nm Navi chip is much cheaper to make than Nvidia's Turing.

4 hours ago, Deedot78 said:

AMD Radeon VII recent buyers have to be pissed.   Wowww

Radeon VII owners don't need to upgrade, and neither do those with a 1080 Ti or RTX 2070. IMO, I'd rather have the Radeon VII with the extra VRAM and the better cooler; the 5700 XT is hot and loud with its awful blower cooler.

I'll tell you what, Navi is starting to impress and even though the value still isn't great... both these reviews have really made me feel a lot better about Navi. Super is irrelevant now.

 

What it certainly doesn't make me feel good about though is PAYING £650 FOR A RADEON VII IN MARCH. How foolish of me....

 

Ryzen 3000 might have got the job done and more, but I'm starting to warm more and more to Navi too. I'm definitely more impressed than I thought I would be.

So the 3700X matches a 9900K at half the power consumption (115W @ 4.3GHz) for $330 (!!). RIP Intel confirmed. It's possible to downclock to 4.2GHz (95W) and just use the stock cooler.

 

All of the concern I voiced before launch came true though: the X570 chipset fan will be the first component to die (though enthusiasts should have no problem putting another fan on top of it).

 

Max OC on air is ~4.3GHz? That's lower than the 4.6GHz expected, but the IPC is also better than expected.

 

The 12-core has scheduling issues in games as expected; waiting on the 16-core to build a workstation. The reviews have me hyped for the efficiency/power draw of the 16-core part.

 

And ya, Navi sucks.

5950x 1.33v 5.05 4.5 88C 195w ll R20 12k ll drp4 ll x570 dark hero ll gskill 4x8gb 3666 14-14-14-32-320-24-2T (zen trfc)  1.45v 45C 1.15v soc ll 6950xt gaming x trio 325w 60C ll samsung 970 500gb nvme os ll sandisk 4tb ssd ll 6x nf12/14 ippc fans ll tt gt10 case ll evga g2 1300w ll w10 pro ll 34GN850B ll AW3423DW

 

9900k 1.36v 5.1avx 4.9ring 85C 195w (daily) 1.02v 4.3ghz 80w 50C R20 temps score=5500 ll D15 ll Z390 taichi ult 1.60 bios ll gskill 4x8gb 14-14-14-30-280-20 ddr3666bdie 1.45v 45C 1.22sa/1.18 io  ll EVGA 30 non90 tie ftw3 1920//10000 0.85v 300w 71C ll  6x nf14 ippc 2000rpm ll 500gb nvme 970 evo ll l sandisk 4tb sata ssd +4tb exssd backup ll 2x 500gb samsung 970 evo raid 0 llCorsair graphite 780T ll EVGA P2 1200w ll w10p ll NEC PA241w ll pa32ucg-k

 

prebuilt 5800 stock ll 2x8gb ddr4 cl17 3466 ll oem 3080 0.85v 1890//10000 290w 74C ll 27gl850b ll pa272w ll w11

 

Just now, xg32 said:

also better ipc than expected.

Not really. The math, using the numbers AMD gave us when they announced Zen 2 and the Ryzen 3000 line, lines up perfectly with what we actually got.
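That sanity check is simple scaling: performance ≈ IPC × clock. A sketch using AMD's quoted ~15% IPC uplift for Zen 2 (the clocks and baseline score below are placeholders for illustration, not benchmark results):

```python
# Project a Zen+ benchmark score onto Zen 2 using AMD's quoted
# ~15% IPC uplift. Clocks and baseline score are illustrative only.

IPC_UPLIFT = 1.15  # AMD's announced Zen+ -> Zen 2 IPC gain

def expected_score(zen_plus_score, zen_plus_ghz, zen2_ghz):
    # perf scales with both IPC and clock
    return zen_plus_score * IPC_UPLIFT * (zen2_ghz / zen_plus_ghz)

# e.g. a normalized 2700X result at 4.1 GHz projected to 4.3 GHz Zen 2
print(round(expected_score(100, 4.1, 4.3), 1))
```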

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.

 
 
 
10 minutes ago, MeatFeastMan said:

Super is irrelevant now.

The 5700 XT is tit for tat with the 2070 and 2060 SUPER, but comes nowhere close to the 2070 SUPER and above. I wouldn't call them irrelevant.

@porina These may be of interest to you.

 

Quote

Whilst both AMD and Intel’s MLP ability in the L2 are somewhat the same and reach 12 – this is because we’re saturating the bandwidth of the cache in this region and we just can’t go any faster via more accesses. In the L3 region however we see big differences between the two: While Intel starts off with around 20 accesses at the L3 with a 14-15x speedup, the TLBs and supporting core structures aren’t able to sustain this properly over the whole L3 as it’s having to access other L3 slices on the chip.

 

AMD’s implementation however seems to be able to handle over 32 accesses with an extremely robust 23x speedup. This advantage actually continues on to the DRAM region where we still see speed-ups up to 32 accesses, while Intel peaks at 16.

Spoiler

mlp3900.png

 

mlp9900.png

 

Quote

Deeper into the DRAM regions, however we see that AMD is still lagging behind Intel when it comes to memory controller efficiency, so while the 3900X improves copy bandwidth from 19.2GB/s to 21GB/s, it still remains behind the 9900K’s 22.9GB/s. The store bandwidth (write bandwidth) to memory is also a tad lower on the AMD parts as the 3900X reaches 14.5GB/s versus Intel’s 18GB/s

Spoiler

bw3900.png

 

bw9900.png

 

 

 

The SPECfp figures seem to mirror the above on actual performance. It's a bit disappointing they didn't do more of their FP/AVX tests.

I like this fact shown by TPU

 

https://www.techpowerup.com/review/amd-ryzen-3900x-3700x-tested-on-x470/6.html

Quote

With this data, and the data from our PCIe gen 4.0 scaling article, we are happy to report that you can save yourself anywhere between $70 to $150 by choosing an X470 motherboard over an X570 variant. There are no tangible performance gains to be had as there is no apparent overclocking headroom increase with our review cooling solution and memory kit (which uses Samsung B-die), and certainly nothing is to be gained from PCIe gen 4.0 for now. You even get the added benefit of a motherboard chipset that truly runs Cool & Quiet.

 

SILVER GLINT

CPU: AMD Ryzen 7 3700X || Motherboard: Gigabyte X570 I Aorus Pro WiFi || Memory: G.Skill Trident Z Neo 3600 MHz || GPU: Sapphire Radeon RX 5700 XT || Storage: Intel 660P Series || PSU: Corsair SF600 Platinum || Case: Phanteks Evolv Shift TG Modded || Cooling: EKWB ZMT Tubing, Velocity Strike RGB, Vector RX 5700 +XT Special Edition, EK-Quantum Kinetic FLT 120 DDC, and EK Fittings || Fans: Noctua NF-F12 (2x), NF-A14, NF-A12x15

1 hour ago, Bruman said:

Regarding AMD lowering prices in response to NVIDIA SUPER cards.

"AMDs playing 4D autochess" - TechLinked

"This wasn't some 5D chess mental gymnastics, this was a requirement." - GamersNexus

 

Oooo shots fired

The TechLinked video title has been changed now. I thought it was humorous, as it played off a post in an LTT discussion.

 

There isn't a dichotomy between having to lower prices due to market conditions and having predicted the situation and so priced higher to avoid being forced to lower prices even further. There is a calculation involved - a very simple, business-minded one.

 

That AMD was undercut by Nvidia's refreshes and so had to lower their prices isn't an argument that AMD didn't price higher to allow for a price-reduction later. That they would be required to lower prices to stay competitive after Nvidia reveal their refreshes at a price-point meant to undercut AMD is the reason why AMD would have priced higher initially. The reason why something happened isn't an argument against it having happened, and framing it as such is like arguing a thing's truth means its invalidation.

 

Steve's a bright, technically-minded person, and so framing what is about the most rudimentary of calculations as mental gymnastics and splitting it into a false dichotomy is a bit weird for him.

You own the software that you purchase - Understanding software licenses and EULAs

 

"We’ll know our disinformation program is complete when everything the american public believes is false" - William Casey, CIA Director 1981-1987
