G.Skill 16 GB 3200MHz, CL14 or CL16?


Posted · Original Poster (OP)

Will there be any noticeable difference in timing sensitivity for competitive CS:GO play between CL14 and CL16 latency timings? Paired with an i9-9900KS and an RTX 2070 Super, for pure gaming and nothing else.


Of course CL14 is by far better than CL16.


CPU:i7 9700k 5047.5Mhz All Cores Mobo: MSI MPG Z390 Gaming Edge AC, RAM:Corsair Vengeance LPX 16GB 3200MHz DDR4 OC 3467Mhz GPU:MSI RTX 2070 ARMOR 8GB OC Storage:Samsung SSD 970 EVO NVMe M.2 250GB, 2x SSD ADATA PRO SP900 256GB, HDD WD CB 2TB, HDD GREEN 2TB PSU: Seasonic focus plus 750w Gold Display(s): 1st: LG 27UK650-W, 4K, IPS, HDR10, 10bit(8bit + A-FRC). 2nd: Samsung 24" LED Monitor (SE390), Cooling:Fazn CPU Cooler Aero 120T Push/pull Corsair ML PRO Fans Keyboard: Corsair K95 Platinum RGB mx Rapidfire Mouse:Razer Naga Chroma  Headset: Razer Kraken 7.1 Chroma Sound: Logitech X-540 5.1 Surround Sound Speaker Case: Modded Case Inverted, 5 intake 120mm, one exhaust 120mm.


Hi, I'm also into competitive CS:GO, and with my Ryzen 3700X I went with CL16 because it's cheaper and I still get above 300 FPS average, with drops to around 240-250 FPS in smoke executes with a lot of bullets on maps like Inferno. So I'd say for Intel (which isn't as memory-speed hungry as Ryzen) you should be fine with CL16. Now, if the price is about the same, then go with CL14.


Main rig: CPU: Ryzen 7 3700X cooled by a Noctua NH-U14S; Mem: 16 GB(2x8) G.Skill TridentZ White 3200 MHz; GPU: EVGA RTX 2070 XC; MOBO: MSI B450 Tomahawk; Storage: XPG Spectrix S40G M.2 512GB SSD; Kingston A400 480GB SSD; 1TB Western Digital Blue HDD; PSU: Corsair CX750M Semi-modular (80+ bronze); CASE: Thermaltake Commander C36.

 

Secondary rig: CPU: Intel Core i7 4790 @3.60GHz(Turbo @4.00GHz) cooled by Corsair H60; Mem: G.Skill RipJawsX DDR3 16GB(4x4) OC@2133MHz (11-11-11-30 (1.6V)); GPU: None(for now); MOBO: Gigabyte Z87-HD3; Storage:HyperX 120GB; PSU: Thermaltake SmarT series 750w (80+ bronze); CASE: Thermaltake Chaser A41.


CL16 will be fine. Intel CPUs don't benefit as much from fast memory as Ryzen does, so anything above 3000-3200MHz will only bring a marginal performance improvement in some tasks, and most likely an unnoticeable one.


Desktop: Intel Core i9-9900K (w/ TG Hydronaut) | be quiet! Dark Rock Pro 4 | ASUS Strix Z390-F | Corsair Vengeance LPX 32GB 3000MHz CL15 | Intel 660p 1TB | Samsung 860 EVO 500GB | WD Green 2TB | EVGA GeForce RTX 2070 SUPER XC Ultra (w/ TG Hydronaut) | Corsair RM650x | Fractal Design Define R6 Blackout USB-C

Displays: BenQ BL2420PT & Alienware AW2521HF

Peripherals: Steelseries Rival 600 & Logitech MX Master 3 | Ducky Shine 7 Gunmetal (Cherry MX Brown) | Sennheiser Game One

Laptop:  Apple MacBook Pro 13" 2018 - i5-8259U | 8GB RAM | 512GB SSD

14 minutes ago, darKz_here said:

Will there be any noticeable difference in timing sensitivity for competitive CS:GO play between CL14 and CL16 latency timings? Paired with an i9-9900KS and an RTX 2070 Super, for pure gaming and nothing else.

The price difference right now is about 30 bucks, and the best-case difference is ~3% in FPS in certain situations, but since you're going with a 9900KS, I'd say CL14.
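To put that ~3% best case in perspective, here's a rough sketch of what it would mean in frame time; the 300 FPS baseline is just an illustrative assumption for CS:GO-class frame rates, not a measured figure:

```python
# Rough frame-time saving from a ~3% FPS uplift at an assumed high baseline FPS.
baseline_fps = 300            # assumed CS:GO-class frame rate (illustrative)
uplift = 1.03                 # ~3% best-case gain claimed for CL14 over CL16
saved_ms = 1000 / baseline_fps - 1000 / (baseline_fps * uplift)
print(f"{saved_ms:.3f} ms per frame")  # prints "0.097 ms per frame"
```

In other words, the best case works out to roughly a tenth of a millisecond per frame.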


9900k 1.39v 5.2 83C 185w 1.24v 4.9 60C 135w 1.05v 4.5 90w 50C (doing some testing for hot days) all-2avx cinebench/blender temps. avx voltages in prime. ll D15 ll Z390 taichi ult 1.60 bios fixed LLC voltage gaps ll gskill 2x8gb cl16 ddr4000 bdie 1.42v ll EVGA 2080 ti XC (duo fan skinny) 1995//7600 power limited 79C max, stock voltage (really bad ocer) ll 2x samsung 860 evo 500gb raid 0 ll 500gb nvme 970 evo ll Corsair graphite 780T ll EVGA G2 1300w ll Windows 10 Pro ll NEC PA272w (movie, work mon) 2k60 14bit lut ll Predator X27 4k144 hdr (rgb98)

2 minutes ago, Mateyyy said:

CL16 will be fine. Intel CPUs don't benefit as much from fast memory as Ryzen does, so anything above 3000-3200MHz will only bring a marginal performance improvement in some tasks, and most likely an unnoticeable one.

The CPU does not. 

The games often do. A lot of games get measurable performance increases from faster RAM or tighter timings, depending on the title. JayzTwoCents even did some videos on this recently:
 

 

 


X58-X79-X99-X299 lads: Intel HEDT Xeon/i7 Megathread - Murica (But International) Parrot Gang

 

Big Rig (Archived) - (Current Main) - i7 6950X @ 4.2/3.5 1.24v/1.2v core/uncore - 80C max under P95 load - Custom Loop (CPU only): 2x 360GTS with EK-ZMT/Stubbies and EK D5 pump/res combo - EVGA X99 Classified - 48GB RAM [4x8GB HyperX Predator DDR4 + 4x4GB EVGA SSC DDR4] @3200Mhz CL16-18-18-36 CR2 - Nvidia FE 2060 Super - 1TB 970 Evo + 250GB 960 Evo - Corsair RM1000i - Phanteks Enthoo Evolv ATX TG - 6x iPPC NF-F12 2000 
 

X79 Rig (Done) - (Alt Rig 1) - i7 4930K @ 4.5GHz - EVGA CLC 280 w/NF-P14s fans - EVGA X79 Dark - 16GB (4x4GB) Corsair Vengeance DDR3 @ 1600Mhz CL9 XMP - EVGA XC Ultra 1660 Ti - MX500 1TB + 2x Seagate Barracuda Compute 2TB - EVGA 1000W G3 w/CableMod PRO Carbon cables - Phanteks P400 (White) - NF-P12

 

X58 Rig (Mostly Done) - (Alt Rig 2) - Xeon X5675 @ 4.4/3.7 core/uncore- NH-D15S - EVGA X58 Classified SLI 4-Way - 24GB (3x8GB) HyperX Savage Red DDR3 @ 1750Mhz CL9-10-10-27 - 2x EVGA Classified 780s - 120GB HyperX SSD - ASUS or Liteon DVD-RW drive, forget which - EVGA 1600W T2 - Corsair 750D - 5x iPPC NF-A14 3000 PWM

 

2019 13" rMBP (i5/8GB/256GB) {work} - 2012 13" MBP (i5/16GB/525GB) {mine} - iPhone 11 Pro Max + Apple Watch S3 42mm - iPod Classic 6G 80GB running Rockbox + iPod Classic 5.5G Enhanced 30GB also on Rockbox - iPhone X - iPhone 4S on iOS 6.1.3

 

whip and nae nae


If you're buying 3200MHz CL14 just to run it as is, you're doing it wrong. We buy that kit for its Samsung B-die, which has really good overclockability. If you just run XMP settings, 3200 CL16 is enough. It's not like Intel systems rely on memory that much either, and you can run CS:GO on something much worse.


CPU: i7-2600K 4751MHz 1.44V (software) --> 1.47V at the back of the socket Motherboard: Asrock Z77 Extreme4 (BCLK: 103.3MHz) CPU Cooler: Noctua NH-D15 RAM: Adata XPG 2x8GB DDR3 (XMP: 2133MHz 10-11-11-30 CR2, custom: 2203MHz 10-11-10-26 CR1 tRFC:230 tREFI:14000) GPU: Asus GTX 1070 Dual (Super Jetstream vbios, +70(2025-2088MHz)/+400(8.8Gbps)) SSD: Samsung 840 Pro 256GB (main boot drive), Transcend SSD370 128GB PSU: Seasonic X-660 80+ Gold Case: Antec P110 Silent, 5 intakes 1 exhaust Monitor: AOC G2460PF 1080p 144Hz (150Hz max w/ DP, 121Hz max w/ HDMI) TN panel Keyboard: Logitech G610 Orion (Cherry MX Blue) with SteelSeries Apex M260 keycaps Mouse: BenQ Zowie FK1

 

Model: HP Omen 17 17-an110ca CPU: i7-8750H (0.125V core & cache, 50mV SA undervolt) GPU: GTX 1060 6GB Mobile (+80/+450, 1650MHz~1750MHz 0.78V~0.85V) RAM: 8+8GB DDR4-2400 18-17-17-39 2T Storage: 1TB HP EX920 PCIe x4 M.2 SSD + 1TB Seagate 7200RPM 2.5" HDD (ST1000LM049-2GH172), 128GB Toshiba PCIe x2 M.2 SSD (KBG30ZMV128G) gone cooking externally Monitor: 1080p 126Hz IPS G-sync

 

Desktop benching:

Cinebench R15 Single thread:168 Multi-thread: 833 

SuperPi (v1.5 from Techpowerup, PI value output) 16K: 0.100s 1M: 8.255s 32M: 7m 45.93s

Posted · Original Poster (OP)
4 hours ago, Constantin said:

Of course CL14 is by far better than CL16.

 

By "by far better", do you mean it's way faster for a pro gamer playing high-level competitive CS:GO? Or simply way faster at processing the inputs/outputs within the system, without adding any latency, compared to the CL16 kit?
 

4 hours ago, Mateus Montemor said:

Hi, I'm also into competitive CS:GO, and with my Ryzen 3700X I went with CL16 because it's cheaper and I still get above 300 FPS average, with drops to around 240-250 FPS in smoke executes with a lot of bullets on maps like Inferno. So I'd say for Intel (which isn't as memory-speed hungry as Ryzen) you should be fine with CL16. Now, if the price is about the same, then go with CL14.

I don't know if I was clear enough, and I'm not sure what rank you have in CS:GO or how you play the game, but I get your point. The reason I'm asking about these differences is that I've been heavily hardware-bottlenecked my whole life when it comes to competitive gaming, and I still am to this day; sometimes it even leads me to think I'm not good enough because of the bottleneck I was experiencing. The reason I'm buying the i9-9900KS is the speed and faster input/output processing it provides, since it's the fastest processor for competitive gaming. To me latency matters far more than the frames themselves; I know higher frame rates mean lower latency, but there are plenty of people with 300+ FPS who still experience huge input lag. That's why I'm asking whether there's a difference between CL14 and CL16 RAM, since I don't want a single part of the PC bottlenecking the rest of the system.

4 hours ago, Mateyyy said:

CL16 will be fine. Intel CPUs don't benefit as much from fast memory as Ryzen does, so anything above 3000-3200MHz will only bring a marginal performance improvement in some tasks, and most likely an unnoticeable one.

I get your point as well; the FPS doesn't really bother me. The point is the input and output latency, as well as precise, stable frame delivery. Simply put, I don't want a single interruption, if that makes more sense.

 

4 hours ago, xg32 said:

The price difference right now is about 30 bucks, and the best-case difference is ~3% in FPS in certain situations, but since you're going with a 9900KS, I'd say CL14.

Yeah, the whole reason for buying that CPU is to remove the bottleneck on my skills and abilities, and to play the game at the highest competitive level. As I understand it, faster RAM and lower latency should definitely improve the speed of keyboard/mouse input/output and the game itself, since there will be no interruptions, if I'm not wrong.
 

4 hours ago, Zando Bob said:

The CPU does not. 

The games often do. A lot of games get measurable performance increases from faster RAM or tighter timings, depending on the title. JayzTwoCents even did some videos on this recently:
 

 

 

As far as I know, the CPU should benefit even more from lower RAM latency when processing tasks, since it doesn't have to wait for the RAM? And even if the difference between CL14 and CL16 is only a few nanoseconds, the CL14 kit will still perform better? The point is to get the most precise frame stability along with the lowest possible input/output latency.

4 hours ago, Jurrunio said:

If you're buying 3200MHz CL14 just to run it as is, you're doing it wrong. We buy that kit for its Samsung B-die, which has really good overclockability. If you just run XMP settings, 3200 CL16 is enough. It's not like Intel systems rely on memory that much either, and you can run CS:GO on something much worse.


By "as is" you mean running it at 3200MHz CL14? I'm actually not a total beginner when it comes to PCs; I'm studying computer science and have been tinkering with them since I was a kid. The point of buying this high-end hardware is to remove that "massive bottleneck" on my abilities, not just to run it however it happens to run. Almost every single day I get killed so easily by those pay-to-win "pros" who brutally abuse their high-end PCs' advantages to outperform me without any effort. And trust me, it hurts badly when you know you could always do much better.

1 minute ago, darKz_here said:

As far as I know, the CPU should benefit even more from lower RAM latency when processing tasks, since it doesn't have to wait for the RAM? And even if the difference between CL14 and CL16 is only a few nanoseconds, the CL14 kit will still perform better? The point is to get the most precise frame stability along with the lowest possible input/output latency.

Yes, it's just less noticeable in CPU-only benchmarks on Intel. AMD CPU cores talk to each other over Infinity Fabric, whose speed is closely tied to RAM speed and timings, so AMD gets a much more noticeable boost in CPU benches from faster RAM. Intel is on a different architecture (ring bus, and mesh on X299 chips). It's not that the gains aren't there at all, they just aren't as noticeable in CPU-only benches. In overall system performance, faster RAM has never been a negative AFAIK? It's pretty much always better for basically anything. On something like DDR3 this becomes very noticeable: with my old X58 stuff, even the Windows 10 Start menu is noticeably snappier once I've tweaked the RAM to run faster with the tightest timings I can get (about 2100MHz CL10-11-11-31, up from stock 1600MHz CL9-9-9-whatever it auto-decides on).



4 minutes ago, darKz_here said:

By "as is" you mean running it at 3200MHz CL14?

Yes

 

Even if you actively overclock it, there are cheaper kits of Samsung B-die at higher frequencies, simply because most people only know 3200MHz CL14 as B-die; for example, 3600MHz 16-16-16-36 and 4000MHz 19-19-19-39.



Posted · Original Poster (OP)
31 minutes ago, Zando Bob said:

Yes, it's just less noticeable in CPU-only benchmarks on Intel. AMD CPU cores talk to each other over Infinity Fabric, whose speed is closely tied to RAM speed and timings, so AMD gets a much more noticeable boost in CPU benches from faster RAM. Intel is on a different architecture (ring bus, and mesh on X299 chips). It's not that the gains aren't there at all, they just aren't as noticeable in CPU-only benches. In overall system performance, faster RAM has never been a negative AFAIK? It's pretty much always better for basically anything. On something like DDR3 this becomes very noticeable: with my old X58 stuff, even the Windows 10 Start menu is noticeably snappier once I've tweaked the RAM to run faster with the tightest timings I can get (about 2100MHz CL10-11-11-31, up from stock 1600MHz CL9-9-9-whatever it auto-decides on).

But the CL14 kit should still improve the timings in competitive gaming, in real-time performance? Like being snappier than the CL16 one?

Just now, darKz_here said:

But the CL14 kit should still improve the timings in competitive gaming, in real-time performance? Like being snappier than the CL16 one?

Should be. It won't be a massive difference, it's more of a "why not?" thing. You're already going for a 9900KS, no reason to not get a damn solid kit of RAM to go along with it. 



Posted · Original Poster (OP)
2 hours ago, Jurrunio said:

Yes

 

Even if you actively overclock it, there are cheaper kits of Samsung B-die at higher frequencies, simply because most people only know 3200MHz CL14 as B-die; for example, 3600MHz 16-16-16-36 and 4000MHz 19-19-19-39.

1 hour ago, Zando Bob said:

Should be. It won't be a massive difference, it's more of a "why not?" thing. You're already going for a 9900KS, no reason to not get a damn solid kit of RAM to go along with it. 

 

To be honest, I can't really say anything about it before trying them myself. But I'm truly convinced it will make a huge difference for me, even those tiny nanoseconds, when I'm focused, since I guess I'm way too sensitive when it comes to reaction time and feeling it; everywhere I've played I could always notice the delays, no matter how tiny they were. I saw someone say that lower CLs can have an impact "in ultra-fast tasks. High FPS Gaming, most of the Adobe Stack and a few other places," and I guess for a professional CS:GO player it should already be noticeable. This might sound pathetic, like I'm buying these things to get a competitive edge in FPS games, but trust me, it's not that. I've never made excuses about losing a game when someone played better than me in certain moments and won fairly; the thing is, you just can't accept that they're brutally abusing their high-end PCs' timings while being fully toxic towards you, as if they could always do better no matter how badly they play. You're always being outperformed due to the lack of visual information you get; just thinking about it logically is enough to understand it. Anyway, I still want a high-end PC that can do much more than just play those games, since I'm tired of low-end systems.

Posted · Best Answer
1 minute ago, darKz_here said:

To be honest, I can't really say anything about it before trying them myself. But I'm truly convinced it will make a huge difference for me, even those tiny nanoseconds, when I'm focused, since I guess I'm way too sensitive when it comes to reaction time and feeling it; everywhere I've played I could always notice the delays, no matter how tiny they were. I saw someone say that lower CLs can have an impact "in ultra-fast tasks. High FPS Gaming, most of the Adobe Stack and a few other places," and I guess for a professional CS:GO player it should already be noticeable. This might sound pathetic, like I'm buying these things to get a competitive edge in FPS games, but trust me, it's not that. I've never made excuses about losing a game when someone played better than me in certain moments and won fairly; the thing is, you just can't accept that they're brutally abusing their high-end PCs' timings while being fully toxic towards you, as if they could always do better no matter how badly they play. You're always being outperformed due to the lack of visual information you get; just thinking about it logically is enough to understand it. Anyway, I still want a high-end PC that can do much more than just play those games, since I'm tired of low-end systems.

Not at all dude. If you can perceive the latency differences (I can't, but a lot of very competitive gamers can), then it doesn't fucking matter what someone else thinks. A lot of people say 240Hz is similarly useless, but a properly done 240Hz monitor can be useful, again for people who can actually notice the difference (LTT did a vid a bit ago with some pro CS:GO guys and they did indeed benefit from that difference). If your rig isn't up to snuff for your needs, that's why you upgrade. Tis how the whole PCMR works lol, at the core it's about loving PC hardware and games, and upgrading that shit if it isn't performing the way you like, or even because it's fun (I haven't needed an upgrade since a couple years ago, but I still do purely because I like the hardware itself). 

And yeah, if you've hit the reaction/skill cap and feel your hardware is limiting you, that is 110% a valid reason to worry about small latency differences like this. 



6 minutes ago, darKz_here said:

I saw someone say that lower CLs can have an impact "in ultra-fast tasks…"

CAS latency only makes sense in relation to frequency, so what actually matters is the ratio of the two, i.e. the absolute latency in time.
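That ratio works out to the kit's first-word latency in nanoseconds: CAS cycles divided by the memory clock (which is half the DDR transfer rate). A minimal sketch of the arithmetic, using kits mentioned in this thread as examples:

```python
# Absolute CAS latency in nanoseconds: CL cycles * cycle time.
# DDR transfer rate (MT/s) is double the memory clock, so clock = rate / 2.
def cas_latency_ns(transfer_rate_mts: float, cl: int) -> float:
    memory_clock_mhz = transfer_rate_mts / 2   # e.g. 3200 MT/s -> 1600 MHz
    cycle_time_ns = 1000 / memory_clock_mhz    # nanoseconds per clock cycle
    return cl * cycle_time_ns

kits = [(3200, 14), (3200, 16), (3600, 16), (4000, 19)]
for rate, cl in kits:
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
# DDR4-3200 CL14: 8.75 ns
# DDR4-3200 CL16: 10.00 ns
# DDR4-3600 CL16: 8.89 ns
# DDR4-4000 CL19: 9.50 ns
```

So 3200 CL14 beats 3200 CL16 by 1.25 ns, and the higher-frequency B-die kits land within a fraction of a nanosecond of it.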



Posted · Original Poster (OP)
1 hour ago, Zando Bob said:

Not at all dude. If you can perceive the latency differences (I can't, but a lot of very competitive gamers can), then it doesn't fucking matter what someone else thinks. A lot of people say 240Hz is similarly useless, but a properly done 240Hz monitor can be useful, again for people who can actually notice the difference (LTT did a vid a bit ago with some pro CS:GO guys and they did indeed benefit from that difference). If your rig isn't up to snuff for your needs, that's why you upgrade. Tis how the whole PCMR works lol, at the core it's about loving PC hardware and games, and upgrading that shit if it isn't performing the way you like, or even because it's fun (I haven't needed an upgrade since a couple years ago, but I still do purely because I like the hardware itself). 

And yeah, if you've hit the reaction/skill cap and feel your hardware is limiting you, that is 110% a valid reason to worry about small latency differences like this. 

That's what I'm talking about. That Linus video about those differences is a great example for people who doubt 240Hz monitors and still think it's placebo or not worth it because they can't see the difference; even Shroud, a pro CS:GO player known for his really fast reaction time and aim, couldn't keep up with low-end hardware and low frame rates. I actually admire how you got my point and explained it word for word, even at an age where reaction time starts to decline. I hope I'll get rid of these bottlenecks soon, because I'm tired of all this low-end stuff.

30 minutes ago, darKz_here said:

That's what I'm talking about. That Linus video about those differences is a great example for people who doubt 240Hz monitors and still think it's placebo or not worth it because they can't see the difference; even Shroud, a pro CS:GO player known for his really fast reaction time and aim, couldn't keep up with low-end hardware and low frame rates. I actually admire how you got my point and explained it word for word, even at an age where reaction time starts to decline. I hope I'll get rid of these bottlenecks soon, because I'm tired of all this low-end stuff.

CS and OW players have always been pretty vocal about how 240Hz just feels better, and after visibly seeing the difference between 100 and 144Hz, I'm sold. It's just quite amusing how "240Hz is overkill" is still the popular opinion.

 

If you do get the CL14 RAM, you can further optimize the timings for additional gains; there's a Gamers Nexus video on it. I'd personally try to figure out a way to get it to 3600, and/or use four sticks.

 

 



Posted · Original Poster (OP)
On 1/24/2020 at 9:49 PM, xg32 said:

CS and OW players have always been pretty vocal about how 240Hz just feels better, and after visibly seeing the difference between 100 and 144Hz, I'm sold. It's just quite amusing how "240Hz is overkill" is still the popular opinion.

 

If you do get the CL14 RAM, you can tune the timings further for additional gains — there's a Gamers Nexus video on it. I'd personally try to get it running at 3600, and/or use four sticks.

 

 

I never really played Overwatch, but I think it mostly comes down to experiencing it and actually feeling it. I totally understand those who can't see the difference between 144Hz and 240Hz — motivation, passion and the drive to get better play a huge role here. The numbers explain it too: going from 60Hz to 144Hz cuts the frame time by about 9.72ms (~10ms), which is a huge, clearly visible difference thanks to the greatly reduced motion blur, tearing and stutter (the simplest way to see it is the mouse cursor stepping across the desktop at 144Hz vs 240Hz). Going from 144Hz to 240Hz only saves about 2.78ms (~3ms), so to really appreciate such a tiny reduction you have to actually feel it and be in sync with the whole game — it isn't as visually obvious. I'd guess that when 1000Hz monitors arrive in the future, everyone will feel the difference from today's high-refresh monitors, because motion blur will be reduced to the point where it's barely visible or not even worth calling motion blur anymore. (https://www.blurbusters.com/blur-busters-law-amazing-journey-to-future-1000hz-displays-with-blurfree-sample-and-hold/)
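The frame-time deltas quoted above can be checked with a couple of divisions — a minimal sketch of the refresh-rate-to-milliseconds conversion:

```python
def frame_time_ms(hz):
    """Time per refresh in milliseconds at a given refresh rate."""
    return 1000 / hz

# 60 -> 144 Hz saves far more per frame than 144 -> 240 Hz
print(round(frame_time_ms(60) - frame_time_ms(144), 2))   # 9.72 ms
print(round(frame_time_ms(144) - frame_time_ms(240), 2))  # 2.78 ms
```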

As for tuning the timings on the CL14 RAM, that's actually another reason to consider the lower CAS latency kit.

8 hours ago, darKz_here said:

 

I don't know if I was clear enough, and I'm not sure what rank you have in CS:GO or how you play, but I get your point. The reason I'm asking about these differences is that I've been brutally hardware-bottlenecked in competitive gaming my whole life, and I still am — sometimes it even makes me think I'm not good enough, when really it's the bottleneck I'm experiencing. I'm buying the i9-9900KS for the speed and faster input/output processing it provides, since it's the fastest processor for competitive gaming right now. To me latency matters far more than the frames themselves; I know higher framerates mean lower latency, but plenty of people get 300+ fps and still experience serious input lag. That's why I'm asking whether there's a real difference between CL14 and CL16 RAM — I don't want a single part of the PC bottlenecking the rest of the system.

I get your point as well — the fps itself doesn't bother me. What matters is input and output latency, and precise, stable delivery of those frames. Simply put, I don't want a single interruption, if that makes more sense.

 

I'm into very competitive CS:GO, so I did a lot of research before buying my PC with exactly this focus. I'd suggest going CL16 and putting the $30 difference toward a 240Hz monitor with 1ms or lower response time. I'm currently running an AOC AGON AG251FZ2 and it's pretty good. Also, since the Source engine is old, any modern CPU will run the game essentially the same — I have a friend with a 3600X and one with a 9900K, and they get the same framerates/frametimes as me. So my final answer: get the CL16 (there's no perceptible input-lag difference between CL16 and CL14 RAM) and focus on the monitor — it's the most important thing for CS, alongside the CPU itself...


Main rig: CPU: Ryzen 7 3700X cooled by a Noctua NH-U14S; Mem: 16 GB(2x8) G.Skill TridentZ White 3200 MHz; GPU: EVGA RTX 2070 XC; MOBO: MSI B450 Tomahawk; Storage: XPG Spectrix S40G M.2 512GB SSD; Kingston A400 480GB SSD; 1TB Western Digital Blue HDD; PSU: Corsair CX750M Semi-modular (80+ bronze); CASE: Thermaltake Commander C36.

 

Secondary rig: CPU: Intel Core i7 4790 @3.60GHz(Turbo @4.00GHz) cooled by Corsair H60; Mem: G.Skill RipJawsX DDR3 16GB(4x4) OC@2133MHz (11-11-11-30 (1.6V)); GPU: None(for now); MOBO: Gigabyte Z87-HD3; Storage:HyperX 120GB; PSU: Thermaltake SmarT series 750w (80+ bronze); CASE: Thermaltake Chaser A41.

Posted · Original PosterOP
On 1/25/2020 at 2:57 AM, Mateus Montemor said:

Im into very competitive CSGO, so i did a lot of research before buying my PC focused into this, so i would suggest to go CL16 and use the 30$ difference to focus on a 240HZ  with 1ms or less response time monitor, im currently running AOC AGON AG251FZ2 and its pretty good. Also, as Source engine is old, nowadays CPU will run the game exactly the same, I have a friend with a 3600x, one with a 9900k, and they get the same Framerates/Frametimes as me. So, as a final awnser, i would say get the CL16 (No input Lag diference from CL16 to CL14 RAM) and focus on the monitor, its the most important thing for CS, alongside the CPU itself...

I already have the best monitor possible for CS:GO, the BenQ Zowie XL2546 (running at 144Hz though, which at least balanced the timings somewhat). I've been playing competitively on an old "gaming" server I bought a year ago — an Intel Xeon X3430, a Quadro 2000 and 8GB of 1333MHz RAM — with an old HP office keyboard and a good-looking Chinese "RGB" 150g brick of a mouse, barely even moving it, squeezing every resolution, setting and tweak possible to get a decent framerate out of a pixelated image. I still managed to reach Gold Nova Master, almost Master Guardian, all solo-queue. But as I said, I can't really judge until I try these parts myself. So far I've tried three newer-gen builds in internet cafes, and the difference coming from low-end hardware was quite amazing: an i5-8500 with a GTX 1060, an i7-7700K with a GTX 1070, and an i7-8700 with a GTX 1060. Every one was great — the i5-8500 build was obviously slower than the i7-7700K and i7-8700, and the i7-8700 felt a bit faster than the i7-7700K — but overall I still wasn't satisfied; the performance wasn't quite balanced. That might be because the specifications were mixed across different brands, speeds and capacities in each build.


As for considering the i9-9900KS over the AMD CPUs: it's about the significantly lower latency the i9-9900K provides, which matters far more to me than cores, price or anything else. Quality over quantity is what I'm after. That's why, in my eyes, Intel will always be ahead of AMD for pure fast gaming — no matter what architecture AMD creates, they'll always be behind in fast competitive, or even casual, gaming. AMD will always remain a try-hard to me. ;)

And if anyone wants to know what being brutally bottlenecked below your actual skill and abilities means, just watch the "lowlights" from this CS:GO matchmaking demo of mine — pay close attention to the shot timings and my ability to react, both from my perspective and when watching theirs through mine. If you know anything about timing in fast-paced FPS games, you'll see exactly what I've been talking about. And no, this wasn't sweaty, psycho try-hard gameplay — it's the least of my effort, and any real pro will recognize that from my playstyle. Also consider everything I had to face before even meeting opponents with huge "pay-to-win" advantages from the highest-end PCs: a low-end CPU, GPU and RAM; constant stutters at those strained framerates; a stretched, pixelated 1024x768 resolution where you can barely see anything (which also adds a good amount of delay); the latency of my whole system; and a cheap 150g Bloody4 brick of a mouse with a mediocre sensor throwing off my aim. Against all of that, their high-end PCs and gaming gear mean they barely have to put in any effort — my only real competition is my own lag, nothing else. All of this should explain my theory of why latency and stable framerates matter far more than the pile of raw frames AMD can provide, and why so much of CS:GO matchmaking is effectively pay-to-win: I'd say 90% of players at those ranks got there on their PC's merits rather than their skill.
And it's no wonder that when real pros play against those Global Elite players, the Global Elites look worse than even Silvers — that's the red line where their PCs can't carry them anymore, because they're up against real skill and reaction times rather than their pay-to-win advantage over others. ;)

steam://rungame/730/76561202255233023/+csgo_download_match%20CSGO-oFMi3-wG2at-UJPth-zNhTw-FWzfF


Disclaimer: the following reply contains a dangerously high level of ignorance, with no professional expertise in the specific topic, even if it sounds believable. Reader discretion is strongly advised. Read at your own risk.

6 hours ago, darKz_here said:

I already have the best monitor possible for CS:GO, the BenQ Zowie XL2546 (running at 144Hz though, which at least balanced the timings somewhat). I've been playing competitively on an old "gaming" server I bought a year ago — an Intel Xeon X3430, a Quadro 2000 and 8GB of 1333MHz RAM — with an old HP office keyboard and a good-looking Chinese "RGB" 150g brick of a mouse, barely even moving it, squeezing every resolution, setting and tweak possible to get a decent framerate out of a pixelated image. I still managed to reach Gold Nova Master, almost Master Guardian, all solo-queue. But as I said, I can't really judge until I try these parts myself. So far I've tried three newer-gen builds in internet cafes, and the difference coming from low-end hardware was quite amazing: an i5-8500 with a GTX 1060, an i7-7700K with a GTX 1070, and an i7-8700 with a GTX 1060. Every one was great — the i5-8500 build was obviously slower than the i7-7700K and i7-8700, and the i7-8700 felt a bit faster than the i7-7700K — but overall I still wasn't satisfied; the performance wasn't quite balanced. That might be because the specifications were mixed across different brands, speeds and capacities in each build.


As for considering the i9-9900KS over the AMD CPUs: it's about the significantly lower latency the i9-9900K provides, which matters far more to me than cores, price or anything else. Quality over quantity is what I'm after. That's why, in my eyes, Intel will always be ahead of AMD for pure fast gaming — no matter what architecture AMD creates, they'll always be behind in fast competitive, or even casual, gaming. AMD will always remain a try-hard to me. ;)

And if anyone wants to know what being brutally bottlenecked below your actual skill and abilities means, just watch the "lowlights" from this CS:GO matchmaking demo of mine — pay close attention to the shot timings and my ability to react, both from my perspective and when watching theirs through mine. If you know anything about timing in fast-paced FPS games, you'll see exactly what I've been talking about. And no, this wasn't sweaty, psycho try-hard gameplay — it's the least of my effort, and any real pro will recognize that from my playstyle. Also consider everything I had to face before even meeting opponents with huge "pay-to-win" advantages from the highest-end PCs: a low-end CPU, GPU and RAM; constant stutters at those strained framerates; a stretched, pixelated 1024x768 resolution where you can barely see anything (which also adds a good amount of delay); the latency of my whole system; and a cheap 150g Bloody4 brick of a mouse with a mediocre sensor throwing off my aim. Against all of that, their high-end PCs and gaming gear mean they barely have to put in any effort — my only real competition is my own lag, nothing else. All of this should explain my theory of why latency and stable framerates matter far more than the pile of raw frames AMD can provide, and why so much of CS:GO matchmaking is effectively pay-to-win: I'd say 90% of players at those ranks got there on their PC's merits rather than their skill.
And it's no wonder that when real pros play against those Global Elite players, the Global Elites look worse than even Silvers — that's the red line where their PCs can't carry them anymore, because they're up against real skill and reaction times rather than their pay-to-win advantage over others. ;)

steam://rungame/730/76561202255233023/+csgo_download_match%20CSGO-oFMi3-wG2at-UJPth-zNhTw-FWzfFy

You're looking too deep into it — even pros don't do that. Just look at CS:GO pro settings: most still use 1080s/1080 Tis. As I said, the main difference is in the monitor and peripherals, which need to be ultra low latency / high refresh rate. That's why CS pros don't really dig that deep into PC hardware, but they do a lot of research on mice, keyboards and monitors...

Another example: ESL used i7-9700Ks for their championships last year.

 

Also, don't use AIDA or other hardware benchmarks as your main gaming reference — of course a memory-specific test will show differences between high-end systems, but don't expect an engine as old as Source to benefit from that.

 

Bottom line: anything with six cores and good IPC (Ryzen 5 3600 / i5-9600K and above) will run CS almost exactly the same, FPS- and stability-wise. The advantage of an i7 or i9 is running other things in the background, like Discord or the FACEIT/ESEA anti-cheats.

 

And just a heads up: CS:GO matchmaking servers are awful — 64 tick and all its shoddy variations will be way more noticeable than the CAS latency of your system RAM. If you're really into the competitive scene, I suggest you look into FACEIT/ESEA or whatever is strongest in your region; here, GamersClub is the best, for example...

 


Main rig: CPU: Ryzen 7 3700X cooled by a Noctua NH-U14S; Mem: 16 GB(2x8) G.Skill TridentZ White 3200 MHz; GPU: EVGA RTX 2070 XC; MOBO: MSI B450 Tomahawk; Storage: XPG Spectrix S40G M.2 512GB SSD; Kingston A400 480GB SSD; 1TB Western Digital Blue HDD; PSU: Corsair CX750M Semi-modular (80+ bronze); CASE: Thermaltake Commander C36.

 

Secondary rig: CPU: Intel Core i7 4790 @3.60GHz(Turbo @4.00GHz) cooled by Corsair H60; Mem: G.Skill RipJawsX DDR3 16GB(4x4) OC@2133MHz (11-11-11-30 (1.6V)); GPU: None(for now); MOBO: Gigabyte Z87-HD3; Storage:HyperX 120GB; PSU: Thermaltake SmarT series 750w (80+ bronze); CASE: Thermaltake Chaser A41.

On 1/24/2020 at 11:50 AM, Zando Bob said:

Yes, it's just less noticeable in CPU-only benchmarks on Intel. AMD CPU cores talk to each other over Infinity Fabric, which is closely tied to RAM speed and timings, so AMD gets a much more noticeable boost in CPU benches from faster RAM. Intel is on a different architecture (ring bus, and mesh on X299 chips) — not that the gains aren't there at all, they just aren't as noticeable in CPU-only benches. For overall system performance, faster RAM has never been a negative AFAIK — pretty much always better for basically anything. On something like DDR3 this becomes very noticeable: with my old X58 stuff, even the Windows 10 start menu is noticeably snappier once I've tweaked the RAM to run faster with the tightest timings I can get (about 2100MHz CL10-11-11-31, up from a stock of 1600MHz CL9-9-9-whatever it auto-decides on).

AMD CPU cores do not talk to each other using Infinity Fabric. What are you talking about?

 

Infinity Fabric is another hyped-up term for what was known as HyperTransport technology — with some changes and added features, AMD decided to fancy up the naming scheme. It interconnects the CPU to the north bridge, and the components within it direct communication between memory and CPU, which is why your RAM speed matches your NB speed in CPU-Z.

 

It's not necessary to decouple the IF until around 3800MHz, where most 3000-series Ryzen processors start losing stability.

It has been shown that keeping the IF linked to the memory clock yields the best results.
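A minimal sketch of that coupling, assuming the standard Ryzen 3000 behavior where the fabric clock (FCLK) runs 1:1 with the memory clock (MCLK) until roughly the DDR4-3600/3800 range:

```python
def coupled_fclk_mhz(ddr_transfer_rate):
    """Infinity Fabric clock when run 1:1 with the memory clock.

    DDR is double data rate, so the memory clock (MCLK) is half the
    transfer rate in MT/s; in coupled mode FCLK equals MCLK.
    """
    return ddr_transfer_rate / 2

print(coupled_fclk_mhz(3600))  # 1800.0 MHz -- the usual Ryzen 3000 sweet spot
print(coupled_fclk_mhz(3800))  # 1900.0 MHz -- around the practical FCLK ceiling
```

Beyond that ceiling the fabric typically drops to a 2:1 ratio, which is why going much past 3600-3800 MT/s often hurts latency more than it helps.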

 

You can't even begin to compare an Intel IMC to AMD's — two totally different creatures, so we shouldn't go there.


 

Lid-less PGA 2700x / ROG B450-I Gaming / Corsair 3000mhz SK Hynix / RTX 2060 / EVGA 750w

Lid-less 8700K / Maximus X Hero / G.Skill 4266mhz B-Die / RTX 2060 / Antec 1000w CP series

Ryzen Athlon 220ge with Vega Graphics / Asus Prime B450M-A / Corsair LED 3000mhz / 550w Antec Office PC.

DFI LanParty UT / Opteron 148 / DDR Corsair XMS Expert / X1800 XT 256MB Bios modded

Asus CrossHair Formula IV / Phenom II x4 B97 / Dominator GT 2000mhz. / EVGA GTX 770


* thread cleaned *

 

Please avoid e-dick measuring contests.


If you need help with your forum account, please use the Forum Support form !

 

VPN server guide

Guide to run any software as Admin

NiceHash Mining Guide

Ethereum Mining Guide


My Gaming Rig - Motherboard: MSI Z370-A PRO CPU: i7-8700 RAM: 32GB DDR4 2400(4x8GB) GPU: Gigabyte GTX 1060 3GB OS SSD: 240GB Avexir E100 Storage: 2x 1TB Seagate PSU: Seasonic G650 OS: Windows 10 Pro 64bits Monitor: Acer 21in G205H + Lenovo 21in

 

