5800X3D Can be overclocked (BCLK)

TrigrH

Summary

The soon-to-release AMD Ryzen™ 7 5800X3D processor was meant to be locked from overclocking. This lock has been bypassed before the CPU has even officially launched.

 

Quotes


My thoughts

If AMD wanted to set a max voltage, they should have simply done that. There's no reason to lock the entire CPU down.

 

Sources


Cool but useless.

 

At least Intel has the ability to BCLK OC only the CPU and RAM, while on Ryzen you affect everything when clocking this way. Not worth the hassle for daily use if you value your storage.


22 minutes ago, TrigrH said:

If AMD wanted to set a max voltage, they should have simply done that

You know that can be bypassed by just voltmodding the CPU vcore.


People actually overclock Ryzen CPUs? LOL? Unless you're exclusively doing multithreaded workloads, overclocking is entirely pointless because of the idiotically aggressive temperature curve and even more idiotic clock stretching. I can get my 5800X to run at 5 GHz. Except not really: it'll show 5 GHz in all monitoring tools, but benchmark results will ALWAYS be the same at best, and most often even worse than at stock.

 

The only way to give it any kind of meaningful boost is to adjust the voltage curve, which generally makes it run cooler and, as a result, actually boost higher without stretching the clocks. And that's it.
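
You can check for this yourself. Here's a minimal sketch of the idea in Python: compare the clock your monitoring tools report against throughput on a fixed CPU-bound workload. It assumes psutil is installed; the iteration count is arbitrary and just needs to be identical between runs.

# Minimal sketch for spotting clock stretching: compare the frequency the
# OS reports against elapsed time on a fixed CPU-bound workload. If the
# reported clock goes up after an "overclock" but the elapsed time doesn't
# go down, the chip is stretching clocks rather than actually running faster.
import time
import psutil

def fixed_workload(iterations: int = 20_000_000) -> float:
    """Burn a fixed amount of integer work; return elapsed seconds."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc ^= i * 3
    return time.perf_counter() - start

reported_mhz = psutil.cpu_freq().current   # what monitoring tools show
elapsed = fixed_workload()
print(f"Reported clock: {reported_mhz:.0f} MHz")
print(f"Workload time:  {elapsed:.2f} s")
# Run once at stock, once "overclocked": higher MHz with unchanged
# workload time is exactly the clock-stretching behaviour described above.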


Hm, interesting, and neat if, say, it could match the clocks of the vanilla SKU. Then again, it may not show much gain.


Isn't BCLK overclocking super unstable? IIRC from "the old days" this affects PCIe, SATA, and most other buses' stability a lot.


2 hours ago, TrigrH said:

The issue is this new chip is locked to 4.55 GHz.

The thing is, it just doesn't matter. The 5800X isn't locked to anything, but realistically it can only operate at a max of 4.85 GHz and not a MHz higher, despite having the multiplier unlocked up to 100x. Being unlocked doesn't matter if it just doesn't work at ANY higher multiplier anyway due to dumb clock stretching.


Considering that the new cache apparently can't tolerate the same voltages as the standard cache on the 5800X... what's the point? It's still pretty damn close, and certainly a better buy than the i7-5775C was (though the meme L4 cache did have it coming close to or matching a stock 4790K at far lower clocks).

2 hours ago, Mojo-Jojo said:

Isn't BCLK overclocking super unstable? IIRC from "the old days" this affects PCIe, SATA, and most other buses' stability a lot.

Some NVMe controllers can't handle a BCLK above 100 MHz. PCIe and SATA aren't the weakest link when doing a BCLK overclock anymore.
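
To make the "affects everything" point concrete, here's a rough sketch of how the clock domains derive from the reference clock. The multipliers are illustrative assumptions (45.5x matches the 4.55 GHz lock mentioned above), not a tuning guide.

# Rough sketch of why BCLK overclocking touches everything on Ryzen: most
# clock domains derive from the 100 MHz reference clock. Multipliers below
# are illustrative assumptions, not vendor specifications.
bclk = 104.0                      # MHz, example raised reference clock

core_mhz = bclk * 45.5            # CPU cores (45.5x ~= the 4.55 GHz lock)
fclk_mhz = bclk * 18.0            # Infinity Fabric (1800 MHz at stock BCLK)
pcie_gts = bclk / 100.0 * 16.0    # PCIe 4.0 link rate, spec'd at 16.0 GT/s

print(f"Core:  {core_mhz:.0f} MHz")
print(f"FCLK:  {fclk_mhz:.0f} MHz")
print(f"PCIe4: {pcie_gts:.2f} GT/s (above 16.0 is out of spec -> NVMe risk)")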

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

1 minute ago, Dabombinable said:

Considering that the new cache apparently can't tolerate the same voltages as the standard cache on the 5800X... what's the point? It's still pretty damn close, and certainly a better buy than the i7-5775C was (though the meme L4 cache did have it coming close to or matching a stock 4790K at far lower clocks).

The L4 on that thing was basically only used for the iGPU. Skylake was originally hyped as this awesome new magical CPU with integrated eDRAM, but it ended up being a regular CPU; they only did this one weird model with it, and it only used it for the GPU. It's why I ended up buying the older Haswell-E instead of Skylake. It was just a better buy.


1 minute ago, RejZoR said:

The L4 on that thing was basically only used for the iGPU. Skylake was originally hyped as this awesome new magical CPU with integrated eDRAM, but it ended up being a regular CPU; they only did this one weird model with it, and it only used it for the GPU. It's why I ended up buying the older Haswell-E instead of Skylake. It was just a better buy.

Intel did do an i5 version for desktops as well (though it'd struggle badly, going by how modern 4T CPUs fare). Broadwell might as well have never been released, given how close to Skylake it was.

I might still end up with one of the i7s as a curiosity. Realistically they're a very unique CPU (FIVR and eDRAM/L4 cache).

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

56 minutes ago, RejZoR said:

The L4 on that thing was basically only used for the iGPU. Skylake was originally hyped as this awesome new magical CPU with integrated eDRAM, but it ended up being a regular CPU; they only did this one weird model with it, and it only used it for the GPU. It's why I ended up buying the older Haswell-E instead of Skylake. It was just a better buy.

If you disabled the iGPU it used it as L4 cache. That being said, it wasn't good.


1 hour ago, RejZoR said:

The L4 on that thing was basically only used for the iGPU. Skylake was originally hyped as this awesome new magical CPU with integrated eDRAM, but it ended up being a regular CPU; they only did this one weird model with it, and it only used it for the GPU. It's why I ended up buying the older Haswell-E instead of Skylake. It was just a better buy.

L4 works on ANY data moving from memory; it's not limited to the iGPU at all.

 

As for the rest of the history: Intel's 14nm had a troubled start with Broadwell. Desktop CPUs were late, and it was essentially a launch just so they could say they launched it. Instead of the normal desktop versions, they basically did something closer to mobile, where eDRAM was more common. Skylake's general IPC and max clocks are unquestionably better than Broadwell's. Broadwell-C's USP was the L4 cache, and for the right application it was worth more than Skylake's superior clock and IPC.

 

1 hour ago, Dabombinable said:

Broadwell might as well have never been released, given how close to Skylake it was.

On desktop it was a launch just to say they launched it. Mobile, I think, had been around for a while longer.

 

8 minutes ago, Ydfhlx said:

If you disabled the iGPU it used it as L4 cache. That being said, it wasn't good.

It worked as L4 even with the iGPU in use. Basically, for the right workload it was a beast, and I'd expect the 5800X3D to fill a similar niche. That's the problem: most consumer uses don't benefit significantly from it. AMD identified gaming as the area to market it in.

 

I bought the i5-5675C at the time. It was about on a par with the i7-6700K with then-fast 3200 ram in Prime95-like compute: the 5675C with single channel DDR3 could match the 6700K with dual channel 3200 dual rank ram, thanks to that L4 cache. I also had a 6600K with dual channel single rank ram, and it was far behind. As a compute node, a dGPU wasn't needed or used.

 

The L4 cache was rated at 50 GB/s, which would be comparable to dual channel 3200 ram in peak rates, not considering duplex or latency. I didn't see any benefit from overclocking the L4.
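
For reference, the arithmetic behind that comparison (peak rates only, ignoring duplex and latency as noted):

# Peak bandwidth = transfers/s x 8-byte channel width x channel count.
mt_per_s  = 3200e6                 # DDR4-3200: 3200 million transfers/s
channels  = 2
peak_gbps = mt_per_s * 8 * channels / 1e9
print(f"Dual channel DDR4-3200 peak: {peak_gbps:.1f} GB/s")
# -> 51.2 GB/s, right next to the ~50 GB/s rating of Broadwell's eDRAM.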

 

I'd love for Intel to do a modern consumer version of L4, but with a speed relevant to today's platforms. For the exact reason AMD gave, I wouldn't expect to see it: most things just wouldn't benefit from it, and it'd just add cost.


I know L4 wasn't exclusive to the iGPU, but it literally did NOTHING for anything else. The only place where you could even see any improvement was when using the iGPU for graphics. I don't remember if it did anything for a discrete GPU...


46 minutes ago, RejZoR said:

I know L4 wasn't exclusive to the iGPU, but it literally did NOTHING for anything else.

It particularly helped things that were memory bound, preferably with a working data subset fitting within the cache size. As I said, that helped a LOT in certain Prime95-like workloads, where the cache pretty much negated the need for ram performance. I'll grant that isn't an average consumer use case, but that's the same problem AMD has here, with gaming as the only one they're pushing. If you run a workload that's pretty much unaffected by memory configuration, like Cinebench, the cache does practically nothing. If you look on hwbot, there are many competitions where CPUs with eDRAM were banned because of the massive uplift they gave to some compute uses. None of this is new information.
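
The working-set effect is easy to reproduce yourself. A minimal sketch with NumPy (sizes are illustrative; tune them to your own cache sizes):

# Same streaming reduction, two working-set sizes: one that fits in cache,
# one that spills to ram. The throughput gap is the effect described above.
import time
import numpy as np

def throughput_gbps(n_bytes: int, repeats: int = 20) -> float:
    a = np.ones(n_bytes // 8)     # float64 elements
    a.sum()                       # warm-up pass, fills cache if it fits
    start = time.perf_counter()
    for _ in range(repeats):
        a.sum()
    return n_bytes * repeats / (time.perf_counter() - start) / 1e9

print(f"4 MB set:   {throughput_gbps(4 * 2**20):6.1f} GB/s (cache resident)")
print(f"512 MB set: {throughput_gbps(2**29):6.1f} GB/s (ram bound)")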


1 hour ago, porina said:

It particularly helped things that were memory bound, preferably with a working data subset fitting within the cache size. As I said, that helped a LOT in certain Prime95-like workloads, where the cache pretty much negated the need for ram performance. I'll grant that isn't an average consumer use case, but that's the same problem AMD has here, with gaming as the only one they're pushing. If you run a workload that's pretty much unaffected by memory configuration, like Cinebench, the cache does practically nothing. If you look on hwbot, there are many competitions where CPUs with eDRAM were banned because of the massive uplift they gave to some compute uses. None of this is new information.

None of which matters for a shitty Core i5... No one is going to run heavy workloads on some weak Core i5 just because it has L4.


1 minute ago, RejZoR said:

None of which matters for a shitty Core i5... No one is going to run heavy workloads on some weak Core i5 just because it has L4.

Keep in mind the era. An i5 was only slightly behind an i7, which for this use case would cost a good chunk more, and the extra threads were of little value. Both of these were long before Zen 2, so AMD had nothing remotely close. Note early Ryzen gens were only around Sandy Bridge era performance in this area; that i5 would beat any AM4 CPU released before Zen 2. Intel's HEDT platform of the time would be questionably better, but would certainly cost more. Likewise, even server platforms of the time suffered similar limits.

 

For the same sort of use case, the people who bought Broadwell-C then are looking with interest at the X3D. The cache is worth far more than doubling the core count, since those cores can't otherwise be adequately fed due to the connectivity inside AMD's CPUs. It will probably be the fastest consumer tier CPU by far (Intel included) in that area. You need to know the task.

 

Again, it is fair to say this is irrelevant to most people. But for those who are affected, it is a major difference. Understand your workload requirements for what you buy.


On 4/13/2022 at 4:17 AM, porina said:

I'd love for Intel to do a modern consumer version of L4, but with a speed relevant to today's platforms. For the exact reason AMD gave, I wouldn't expect to see it: most things just wouldn't benefit from it, and it'd just add cost.

Some context: really fast dual channel DDR4 kits can hit lower latency than the eDRAM in Broadwell.

The main reason people screamed "eDRAM!" was that AnandTech reviews stick to JEDEC memory speeds (which are slow).


(AIDA64 cache & memory benchmark screenshot for Broadwell; source: https://www.aida64.com/news/aida64-support-skylake-broadwell-windows10-tizen)

(AIDA64 latency screenshot of a tuned DDR4-4400 kit at ~37 ns; source: thrashzone on the TPU forum)

Quote

Note early Ryzen gens were only around Sandy Bridge era performance in this area.

 

If your use case was low resolution gaming with a relatively high end video card, that is true if you were comparing an R7 vs an i7 (4C/8T). If you had a slower card or jacked up the resolution, it didn't matter; likewise, if you multitasked a bit, it didn't matter and the R7 came out ahead (my stutters in SC2 went away going from an OCed Ivy Bridge i5 to a Zen 1 R7 - likely because my old CPU got pegged by Chrome and a YouTube video in the background). Also, the Skylake i5 (4C/4T) was generally slower than the R5 (6C/12T) in titles at launch and got worse relatively quickly (the benchmarks are sterile, so if you don't close everything... edge to MOAR COARS). Obviously one generation later from Intel (6 months) fixed this, when they also saw the value in MOAR COARS.

 

If your use case was "workstation stuff", the $300ish R7 was about on par with the $1000 Broadwell-E parts.


27 minutes ago, cmndr said:

Some context: really fast dual channel DDR4 kits can hit lower latency than the eDRAM in Broadwell.

Broadwell-C was a 2015 DDR3 product, and it takes extreme DDR4 ram to beat it in latency? That's why I said I'd like to see a current day L4 implementation, not the one from 2015. Ignoring the time gap, there's a significant cost gap too from buying such fast ram. Broadwell did it out of the box.

 

27 minutes ago, cmndr said:

If your use case was

I did write earlier that my use case was Prime95-like compute. I grant it's a niche use case, but it is a real one. The characteristics can be boiled down to two considerations: FP64 performance, and feeding it data fast enough. The FPU in Ryzen before Zen 2 operated at around half rate compared to Intel of the time; in other words, you needed two AMD cores to compete with one Intel core. Only with Zen 2 did they beat Skylake, by about 4% in peak IPC, with cores roughly comparable between them. Before Ryzen, AMD's FP64 IPC was closer to 1/4 or less compared to Intel.
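
The half-rate claim falls straight out of the execution units. A quick tally using the commonly cited per-core figures (treat these as assumptions for the ratio argument, not vendor specs):

# FP64 FLOPs per cycle = FMA pipes x doubles per vector x 2 ops per FMA.
cores = {
    "Skylake (2x 256-bit FMA)": 2 * 4 * 2,   # 16 FLOPs/cycle
    "Zen 1   (2x 128-bit FMA)": 2 * 2 * 2,   # 8 FLOPs/cycle -> half rate
    "Zen 2   (2x 256-bit FMA)": 2 * 4 * 2,   # 16 FLOPs/cycle -> parity
}
for name, flops in cores.items():
    print(f"{name}: {flops:2d} FP64 FLOPs/cycle")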

 

Then we get to the feeding-it-data-fast-enough part. A quad core Skylake was generally choked by dual channel ram. Dual channel dual rank 3200, the fastest I had at the time, was close to not limiting core performance; had 3600 dual rank been available at the time, it probably would have done it. Note the difference between peak and effective ram bandwidth: dual rank or 2DPC made a huge difference, about 25% perf over single rank 1DPC.

 

This is where Broadwell's L4 really shone. It was big enough to cover the data set, and the effective bandwidth was sufficient to let the cores run practically unlimited. I never found a way to measure that effective bandwidth with other tools; it didn't correlate to anything in AIDA64. It may be something to do with the specific read/write pattern of the workload differing from whatever AIDA64 does.

 

 

Back to the topic: the 5800X3D should be the best consumer tier CPU for this type of work, for the reasons I gave above. Zen 3 is about +14% IPC over Skylake, and the large cache means the cores can work flat out. Rocket Lake has more potential with AVX-512, but it is hindered by feeding it data fast enough. I don't have any data on Alder Lake, but it would probably be limited by feeding too.


42 minutes ago, porina said:

Broadwell-C was a 2015 DDR3 product, and it takes extreme DDR4 ram to beat it in latency? That's why I said I'd like to see a current day L4 implementation, not the one from 2015. Ignoring the time gap, there's a significant cost gap too from buying such fast ram. Broadwell did it out of the box.

 

I did write earlier that my use case was Prime95-like compute. I grant it's a niche use case, but it is a real one. The characteristics can be boiled down to two considerations: FP64 performance, and feeding it data fast enough. The FPU in Ryzen before Zen 2 operated at around half rate compared to Intel of the time; in other words, you needed two AMD cores to compete with one Intel core. Only with Zen 2 did they beat Skylake, by about 4% in peak IPC, with cores roughly comparable between them. Before Ryzen, AMD's FP64 IPC was closer to 1/4 or less compared to Intel.

 

Then we get to the feeding-it-data-fast-enough part. A quad core Skylake was generally choked by dual channel ram. Dual channel dual rank 3200, the fastest I had at the time, was close to not limiting core performance; had 3600 dual rank been available at the time, it probably would have done it. Note the difference between peak and effective ram bandwidth: dual rank or 2DPC made a huge difference, about 25% perf over single rank 1DPC.

 

This is where Broadwell's L4 really shone. It was big enough to cover the data set, and the effective bandwidth was sufficient to let the cores run practically unlimited. I never found a way to measure that effective bandwidth with other tools; it didn't correlate to anything in AIDA64. It may be something to do with the specific read/write pattern of the workload differing from whatever AIDA64 does.

To be fair, the big selling point of eDRAM was bandwidth more so than latency. I just pointed out latency as it was the "harder" area to bridge - the worst case scenario, in a sense.
 

I want to emphasize that it was mainly AnandTech (using DDR4-2133 with CL15 timings) showing Broadwell (DDR3-1866 C9) doing as well as it did vs other parts.
https://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation

DDR4-3000 more or less matches the eDRAM in terms of bandwidth, and could be tuned to near-eDRAM levels of latency. This was doable in 2016 or 2017. My budget 32GB kit from early 2017 could hit the bandwidth target (though not with AMAZING timings) and got "close enough" on latency, since it was running 30-50% faster at similar timings to what AnandTech had.


With that said, I don't doubt that some use cases DEFINITELY favor one architecture over another.

 

42 minutes ago, porina said:

Back to the topic: the 5800X3D should be the best consumer tier CPU for this type of work, for the reasons I gave above. Zen 3 is about +14% IPC over Skylake, and the large cache means the cores can work flat out. Rocket Lake has more potential with AVX-512, but it is hindered by feeding it data fast enough. I don't have any data on Alder Lake, but it would probably be limited by feeding too.

Zen 3 was something like 25% faster overall on IPC over SKL.

SKL had around 7% over Zen 1, Zen 2 was about 7% ahead, and Zen 3 was about 25-30% ahead of SKL/KBL/CML - though CML clocks a bit higher. Are you thinking Zen 3 is about 14% faster than RKL?

 


8 hours ago, cmndr said:

Zen 3 was something like 25% faster overall on IPC over SKL.

SKL had around 7% over Zen 1, Zen 2 was about 7% ahead, and Zen 3 was about 25-30% ahead of SKL/KBL/CML - though CML clocks a bit higher. Are you thinking Zen 3 is about 14% faster than RKL?

My numbers were still for the Prime95-like uses when not constrained by ram, so FP64 heavy. The math in Prime95 uses FP64 instructions to do huge FFTs. As the math library can be used by other software, programs that do will behave similarly. The nearest well-known unrelated use case behaving similarly is Linpack, although caution is needed on test configuration, and I've not personally tested that in detail.

 

If Skylake is the IPC reference:

 

Sandy Bridge -42%

Haswell -12%

Broadwell -17% (yes, lower than Haswell in this use case)

Skylake reference (implicitly also Kaby Lake, Coffee Lake, Comet Lake)

Zen 2 +4%

Zen 3 +14%

Rocket Lake +40% (one unit AVX-512)

Skylake-X >+80% (two unit AVX-512)

 

I don't recall the exact value for Zen and Zen+, but take it as roughly half that of Skylake. Pre-Ryzen AMD was roughly half that again. If rumours hold that Zen 4 will contain their implementation of AVX-512, they may take the lead again in consumer space.

 

Note: Prime95, like much other software, isn't static and may get further optimisations over time. The versions used for the numbers above supported Ryzen officially; versions before that didn't recognise Ryzen and treated it as older AMD CPUs, not that it made much difference. So if this test were repeated today, there might be small variations from the above, and also from OS security mitigations.
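
For anyone wondering why a "CPU stress test" cares about FP64 at all: the core trick is doing huge-number squarings as floating-point FFT convolutions. A toy sketch of the idea (nothing like Prime95's actual optimised code):

# Toy version of the FFT multiplication trick: square a little-endian digit
# array via an FP64 FFT convolution, then propagate carries. Prime95 does
# the same thing at enormous scale, which is why FP64 rate and memory
# bandwidth dominate its performance.
import numpy as np

def fft_square(digits, base=10):
    n = 2 * len(digits)
    f = np.fft.rfft(digits, n)            # forward FFT (FP64 throughout)
    conv = np.fft.irfft(f * f, n)         # pointwise square + inverse FFT
    out = np.rint(conv).astype(np.int64)  # round away floating-point error
    for i in range(len(out) - 1):         # carry propagation
        out[i + 1] += out[i] // base
        out[i] %= base
    return out

print(fft_square(np.array([3, 2, 1])))    # 123^2 = 15129 -> [9 2 1 5 1 0]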


On 4/13/2022 at 12:48 AM, TrigrH said:

Summary

The soon-to-release AMD Ryzen™ 7 5800X3D processor was meant to be locked from overclocking. This lock has been bypassed before the CPU has even officially launched.

 

Quotes

 

My thoughts

If AMD wanted to set a max voltage, they should have simply done that. There's no reason to lock the entire CPU down.

 

Sources

Idk what it is about this CPU, but I am not the least bit interested in it. An i7-12700F for $314, a 5900X for $369, or a 5800X3D for $449... Hmm... Maybe that's why.


  • 3 months later...

I am trying to OC my 5800X3D with no success. I am using an ASUS TUF Gaming X570-Plus with G.Skill 3200 at 16-18-18-38. Can you point me to a website with a solution, or give me some advice?

