
ValkyrieStar

Member
  • Posts

    204
  • Joined

  • Last visited

Awards

This user doesn't have any awards

4 Followers

About ValkyrieStar

  • Birthday Nov 30, 1995

Contact Methods

  • Discord
    ValkyrieStar#3839
  • Steam
    spartan_c001
  • Twitch.tv
    DuckTeamOfficial

Profile Information

  • Gender
    Male
  • Location
    United Kingdom

System

  • CPU
    AMD Ryzen 5 3600
  • Motherboard
    MSI MEG X570 Unify
  • RAM
    32GB DDR4-3200 C16
  • GPU
    RTX 2080 Ti
  • Case
    NZXT H440 Black & Red
  • Storage
    950 PRO 512 GB x3
  • PSU
    EVGA SuperNova P2 850W
  • Display(s)
    ASUS 4K 60Hz x2
  • Cooling
    Thermaltake Water 3.0 Ultimate 360mm
  • Keyboard
    Corsair K63 Red Backlit (CherryMX Red)
  • Mouse
    Corsair Scimitar Pro RGB
  • Sound
    Schiit Fulla 3
  • Operating System
    Windows 10 Pro

Recent Profile Visitors

2,183 profile views
  1. Hi,

     

    I tried it for myself but something went wrong.

    I get the message "file size exceeds the volume size".

    I've included the latest ROM for the Asus Sabertooth P67.

    Can you modify it for me??

     

    Kind regards,

    Piet

    SABERTOOTH-P67-ASUS-3602.ROM

  2. A 3900X is basically two 3600Xs on one package: 3+3 and 3+3. CPPC Preferred Cores works well for a single program which uses only a couple of threads; they get locked to the fastest cores on the fastest CCX, with no latency from them hopping around, and you'll get better performance from the higher boost clocks on those cores. The issue is that as soon as you get into things that use many more threads, or multitasking with more than one thing running at a time, they all get bunched together and choke up on that single CCX. So for my work, it's best to keep it off.

     You might see lower single-thread performance, but unless you're actually 100%ing a single thread, it's not going to matter; the only time you'll see that is in synthetic benchmarks. My CPU lost a whole 2 points in Cinebench single-thread by disabling it: a consistent 206 score is now 204. It'll make more of a difference on CPUs where some cores clock significantly higher than others (mine all hit 4.4 except for 2 which hit 4.375 - they will do 4.4 too, but only after coaxing them out lol), but even from watching it hop around, it never hops to either of those two slowest cores; Windows seems to know what it's doing for once!

     It didn't affect the multi-core score in Cinebench at all, still hits 1660-1665, but now I consistently see better utilisation across all cores when doing encodes using ffmpeg etc., and they're running faster too, surpassing speeds that the 9900KS managed whilst it was melting the VRM, and coming close to it when the VRM was under control. Now.. what do I have that I can sell to get that RAM bought before the offer ends... oh yes.. an i9-9900KS and a motherboard that isn't safe to put it in lmao
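On Linux you can reproduce the "lock a program to specific cores" idea by hand with the standard library; a minimal sketch (Linux-only, since `os.sched_setaffinity` isn't available on Windows or macOS, and `pin_to_cpus` is just an illustrative name):

```python
import os

def pin_to_cpus(cpus):
    """Restrict the current process to the given logical CPU indices -
    roughly the effect CPPC Preferred Cores nudges the scheduler towards."""
    os.sched_setaffinity(0, set(cpus))       # 0 = this process
    return sorted(os.sched_getaffinity(0))   # confirm what actually stuck

if __name__ == "__main__":
    print("allowed CPUs now:", pin_to_cpus([0]))
```

Pinning a light workload to the fastest cores mimics the upside described above; pinning a heavy one to a single CCX's cores reproduces the choking.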
  3. It's listed in the QVL for my motherboard for use with 4 modules too; the kit only comes with 2 modules, but 2 kits.. 4 modules.. spicy! For the IMC, 3600 with 4 single-rank modules should be easier to run than 3200 with 4 dual-rank modules. I read dual-rank modules are slightly faster than single-rank when comparing a single module, but I wonder if that changes when there's two modules per channel rather than one. It's also only gonna push the Infinity Fabric up to 1800, which most/all chips should be capable of. I don't wanna bother with the hassle of trying to get 1900 IF working with a 3800 kit, and I don't really wanna bother messing with timings, so I think this 3600 @ 15-15-15-35 kit should be a perfect match; it's already real tight and the IF should have no problem keeping up.

     Yep, I was sat in a Discord call on Destiny 2 with a few friends, screensharing the live event for our friend who doesn't have access. At first it was choking up cores 1-3, literally 100% usage across all of them, stuttering the game quite badly, while the other 3 cores pretty much just sat idle. Figured I'd just go brb 2 minutes to change a single BIOS setting; came back and the stuttering was gone both in Destiny 2 and on Discord, the whole PC was more responsive, all cores saw roughly equal load, and I was even watching a YouTube video on my second monitor whilst the event was taking its jolly time to play out, with no issue whatsoever. It feels like I'm on an Intel system if I'm honest. Even my Threadripper with 16 cores at 4GHz didn't cope this well.
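The fabric arithmetic being described works out like this (a trivial sketch; `fclk_for_ddr4` is just an illustrative name):

```python
def fclk_for_ddr4(mt_per_s, ratio=1.0):
    """DDR4's real memory clock is half its MT/s rating; with Zen 2's
    usual 1:1 coupling the Infinity Fabric (FCLK) runs at that same speed."""
    return (mt_per_s / 2) * ratio

# DDR4-3600 -> 1800 MHz MEMCLK, so 1:1 FCLK = 1800 MHz
# DDR4-3800 -> 1900 MHz FCLK, which not every chip can hold
```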
  4. I don't know, it probably helps when only doing 1 thing at a time, but as soon as you're doing more than 1 thing, those 2 or 3 fastest cores start capping out and stuff starts choking up and stuttering. Disabling CPPC stopped my Destiny 2 and CoD MW stuttering completely. With it enabled I'd also get stuttery YouTube while running encodes or simulations, but with CPPC disabled it's possible to watch YouTube without stutters and the whole system feels more responsive in general. Weird.

     I've been eyeing up some 3600 @ 15-15-15-35 G.Skill RAM; it's 50% more expensive than my current kit, but I guess it'd be a decent investment vs 3200 @ 16-18-18-36. DDR4 isn't going anywhere for a couple of years at least, and once I've got everything in this system I won't be upgrading for some time except for GPUs. Plus I've a couple of friends that could use my current RAM. Not sure though, opinions?
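Comparing those two kits comes down to first-word latency, which is the standard CAS-cycles-over-memory-clock approximation; a quick back-of-the-envelope sketch:

```python
def first_word_latency_ns(mt_per_s, cas_latency):
    """Approximate first-word latency in nanoseconds:
    CAS cycles divided by the memory clock (MT/s / 2)."""
    return 2000.0 * cas_latency / mt_per_s

# 3600 CL15 ~ 8.33 ns vs 3200 CL16 = 10.0 ns - the pricier kit is
# meaningfully quicker to respond, not just higher bandwidth.
```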
  5. True lmao, disabling the CPPC feature really helped spread the load in Destiny 2 and CoD MW. Before, they were completely choking up cores 0, 1, 2 while 3, 4, 5 were basically idle doing nothing; now all cores are averaging about 4.3GHz and I'm no longer getting any stutter from it choking up the CPU cores. G-Sync is doing a great job of smoothing any microstutter (if there is any) from it being across two CCXs. Honestly, I already have 32GB of 3200MHz; it'd be more hassle than it's worth (and more expense) to get more RAM. I'll keep a lookout for any offers on some good low-CL, high-MHz kits (maybe 3600 C15 or something). I'd just run them at XMP and call it a day.
  6. I could not touch the heatsink; it would burn my fingers almost instantly, and it made my finger feel tingly and fuzzy for a good while after lol. Got a new board and CPU, all is good now, re-learning how to beat Ryzen into submission to make it do what it's supposed to. MOSFETs themselves are rated to run at up to 125°C and will generally do so quite happily (though at a reduced max current output). It's more about what damage the heat does to other nearby components: capacitors, inductors, USB controllers, wifi etc. My USB ports would sometimes get flaky and my mouse would flip out when the VRM had been really hot for some time.
  7. Well, I cured it of being scared of cores by disabling SMT. Generally I run G-Sync so I can't tell if it's improved anything, but stuff is being spread out across all cores now (it's still barely tickling the last two cores unless I force it to, though). Because my encoding work is mostly 1080p, it doesn't scale much beyond 16 threads, and it performs much better with 16 cores vs 8 cores with SMT, so my plan *was* to get the 4950X and disable SMT, but that has the weird side effect of making sleep mode disappear. Wonder if that's something that'll ever get fixed. I do know Ryzen's SMT scales better than Intel's HT due to differing implementations and the bigger L3 cache, but the Windows scheduler just seems to forget that SMT exists and uses both threads of the "faster" cores before using the "slower" cores. My 9900KS would spread stuff out fairly evenly between all the main cores and only after that would it start using the second thread of each core.

     *edit while still writing* I disabled CPPC Preferred Cores in the BIOS, and now it's distributing stuff pretty evenly; it's even preferring to only load up one thread per core, which is preferable too. I'll go as far as saying it's actually running cooler and using less power (the whole CPU is sitting at about 3GHz in Valorant, rather than 3 cores being pegged at 4.3-4.4GHz constantly), and I have all 12 threads again, nice! I guess I'll just be running two encodes at the same time to fully utilise the 4950X.. damn, that thing gonna rip! The preferred cores thing is supposed to hint to Windows to *prefer* using certain "faster" cores first, but it seems Windows just dumps everything it can onto those cores and is scared to touch the other horribly slow crippled weaker cores that can only hit 4.375GHz instead of 4.400GHz
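Running two encodes side by side, as suggested above, is easy to script; a minimal sketch where the ffmpeg invocations and file names are placeholders, not real paths:

```python
import subprocess

def run_parallel(commands):
    """Start every command at once and wait for all of them to finish;
    returns the exit codes in order."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

# Hypothetical usage - two simultaneous encodes to saturate a many-core CPU:
# run_parallel([
#     ["ffmpeg", "-i", "in1.mkv", "-c:v", "libx265", "out1.mkv"],
#     ["ffmpeg", "-i", "in2.mkv", "-c:v", "libx265", "out2.mkv"],
# ])
```

Since a single 1080p encode tops out around 16 threads, launching two lets the remaining cores do useful work instead of idling.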
  8. FCLK was set correctly (it automatically followed the RAM). Both XMP and manually OCing to 3200 or 3466 got 1.1V SoC automatically. At first I couldn't find the CSM setting - it's a little oddly placed in MSI BIOSes - but it was a 22-25 second boot time with that, and about 16.5s set to UEFI only. Memory Fast Boot is supposed to skip some memory checks (I'm not on about UEFI Fast Boot, which isn't even a thing on Ryzen - at least for MSI); it's generally safe to use unless memory overclocking.

     I started getting some weird issues where boot was taking really long: sometimes 10 seconds, sometimes 16, sometimes 22 or even higher, one time even 52 seconds. Not sure what was happening there, but it wasn't consistent at all (I left Memory Fast Boot disabled since it didn't seem to make any difference). Once it started being inconsistent, there was nothing I could do that'd make it stop behaving so weirdly. I even tried reflashing the BIOS and doing all settings from fresh; it just wouldn't have it. I went back to the latest non-beta BIOS, which is currently A30 (A42 was the latest beta), and despite the slightly older AGESA and slightly slower POST (11.2s), it's much more consistent and stable. Even powering on the system after it'd been powered off at the wall resulted in the same POST time. 10-11s is acceptable enough for me. (My old Threadripper board was around 13-14 seconds, which is why I was so confused about this one being so slow to boot, since it's got like half the hardware.)

     I'm not sure entirely what the Windows scheduler is trying to do, but it keeps putting games on the first CCX, hammering the crap out of all 6 of those threads, with maybe one or two threads on the second CCX. Like, I know it's a different CCX, but surely 6 actual cores with a tiny bit of latency is better than 3 cores in a single CCX. Even ffmpeg seems a little scared of the last 2 cores lol. Since they're all capable of pretty much 4.4GHz, I might try poking it into just ignoring the "better" core thing. The last 2 cores (which happen to be the worst) average over 90% of their time in sleep states.
  9. I need to update my system specs; I was running an i9-9900KS, now I'm on an R5 3600 waiting till the 4950X arrives someday.
  10. True, though increasing the update rate too much can cause high CPU usage; a 1 second update interval is plenty in my opinion.
  11. I flat out ignore the motherboard's "CPU temp" sensor, as it's often inaccurate unless it's reading one of those CPU sensors, in which case I still ignore it anyway. For Zen 2 (Ryzen 3000):

     CPU (Tctl/Tdie) - a weird averaged thing that the SMU ramps up with any spike of Tdie (even ones that aren't picked up by monitoring software) but ramps down much more slowly. At idle the spikes are often routine stuff like system logging or whatever, and it can lead to annoying-as-hell fans which ramp up every few seconds; this is the one which produces a sawtooth-like graph.

     CPU CCDx (Tdie) - the instantaneous hotspot temperature of die "x". On a single-die chip there'll only be CCD1; on the 3900X or 3950X there'll be both CCD1 and CCD2. It's updated at something like every 1ms, and watching this can produce a very spiky graph at low loads/idle.

     CPU Die (average) - a moving-average temperature (not sure what kind of timescale it's averaged over, maybe 500ms or something) which produces a much more reasonable and readable graph, and would also be a far better thing to base the fan speeds on than Tctl, but unfortunately that's not something we can set in the BIOS. It produces a graph similar to CPU Package on Intel.

     As you see, CPU (Tctl) produces a constant sawtooth graph, which makes for a really f*cking annoying fan-ramping noise when just chilling watching YouTube, unless you manually tweak away from that ramping range. CPU CCD1 (Tdie) is the hotspot measured at that moment (since it updates so fast, HWiNFO "misses" many of the spikes here). CPU Die (average) is an averaged reading of the Tdie, and it's what Ryzen Master actually uses. The other temperature sensors you mention are indeed Intel or older AMD ones.
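The difference between the spiky CCD reading and the "CPU Die (average)" style reading is essentially a moving average; a sketch of the idea (the window size is a guess, since the real averaging timescale isn't documented):

```python
from collections import deque

def rolling_mean(readings, window=5):
    """Smooth a spiky temperature trace the way an averaged sensor
    would: mean of the last `window` samples at each step."""
    buf = deque(maxlen=window)
    smoothed = []
    for r in readings:
        buf.append(r)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# A single brief spike barely moves the averaged trace:
# rolling_mean([45, 45, 62, 45, 45]) is roughly [45.0, 45.0, 50.7, 49.3, 48.4]
```

A hysteresis-free fan curve keyed off the raw spiky value ramps on every blip; keyed off the smoothed value, it stays quiet.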
  12. I'd go ahead and say something's probably up with that; it's probably conflicting with something. One time I had FaceIT anticheat for non-official CS:GO matchmaking, and that used to make boot times painfully slow and the whole system rather sluggish. As soon as I removed it, boot times were back to a couple of seconds, rather than a 10+ second wait before the login screen.
  13. Yep, it sits nicely at about 4.1GHz consuming about 100W. Actual temps are a little on the higher-than-expected side tbh, at around 72-75°C ish. Full-bore Prime95 doesn't push it much over 85°C though, at about 4.05GHz and 130W (1.27V); this particular chip seems to be quite power hungry from the looks of things, probably why it ended up with no X at the end of its name. I don't feel like I'll be able to get an all-core of more than 4.2GHz (I reckon that's as far as I'd get, with about 1.25-1.275V being the max it'll take before getting too hot), in which case I'd be sacrificing the 4.4GHz lighter-load speeds in favour of slightly better all-core load speeds (I did a quick test of an overclock, and since there's no P-States, it just sits there ramming voltage into the chip). Despite not caring about this CPU a whole lot, I'd rather let it do its own thing.

     I spent a few hours yesterday tweaking memory. I managed to get it from the stock 16-18-18-36 @ 3200 to a nice 14-16-16-32 @ 3466 with no additional voltage, but performance in benches was unaltered, and I was met with some weird instability that Memtest86, Prime95 large FFTs, and HCI MemTest all failed to find. This instability wouldn't go away when dropping it to 14-16-16-32 @ 3200 or by increasing voltages, so I bit the bullet, ditched that, and just went back to the XMP rated timings, which by chance randomly started scoring better than they did before..

     The only main gripe I have/had/ish with this board (and it seems with AMD boards in general) is that the POST times can send you to sleep. Default was something like 24 seconds, which was just absolutely painfully slow; Memory Fast Boot does literally nothing (idk why - I tested cold boot, reboot, and warm boot and it didn't affect any of them). I eventually got it down to around 16.5 seconds by disabling CSM, disabling the wifi, and disabling the audio which I don't use (I have a USB external card), so that was a nice improvement.

     I might email MSI to see if they have any comment about why Memory Fast Boot isn't actually doing anything. I figured I'd try setting the PCIe gen to Gen3 for all the slots since I have no 4.0 devices, and that dropped POST time to 10.6 seconds, very nice! Then I realised this also included the CPU-chipset link, and my two NVMe drives were being bottlenecked a little; I set the chipset link up to Gen4 and the POST time didn't change at all, then set all the slots back up to Gen4 (rather than Auto) and it maintained the nice 10.6s boot time, weird. On a side note, don't enable the "pcie 4gb address mode crypto" rubbish; it increased POST times by like 5 seconds and had some other unfavourable side effects. I'll have a go at tweaking some other bits and bobs at some point, but I'm not sure what else there even is to tweak to improve POST times. I'm just used to pressing the power button on my Z390/9900KS and being at the login screen about 4 seconds later - POST time on that was 2.7s, with Windows fast boot bringing up the login screen almost immediately after. The 4950X is gonna rip through some work really nice, I bet...
  14. Well, I did some more tweaking here and there, and finally got the fan profile set up in a manner where it's not yo-yo-ing, I suppose. When I get round to custom-looping the whole system later this year (once I get a nice GPU and 4950X for it), I'll be setting it based on water temperature vs ambient temperature, so this is a workaround for now. For temp monitoring I've found that CPU CCDx (Tdie) is the instantaneous measurement of each die (I'm presuming it's the hotspot temp), the CPU Die (average) reading is a smoothed-out version of much the same, and CPU (Tctl) is just some mad yo-yo weird-ass thing that some derp coded and they forgot to ever fix. So for now I'm sticking with CPU Die (average), as that's easiest to read; while it'd be nice if I could base the fan & pump speed off it, it'll have to do as it is.

     While playing games and encoding (which sustains a better avg fps with less stutter than the Intel system did), the VRM hit an absolutely insanely meltingly high temperature of 57°C (/s); pretty sure most of that was the GPU heating up the case, as now I've finished gaming and the encoding is still running (with a fairly warm room), it's sitting happily at 45°C. My encodes usually finish with an average at or below 4.5fps; I've been hovering around 4.0fps on the past two encodes I've cared to check up on. Not a huge drop, and it'll certainly pick up once the 4950X gets dropped in.

     With PBO disabled and AutoOC off, it hovers around the 3.9-3.95GHz mark; with PBO limits disabled, about 4.0GHz; then with AutoOC also set to +200MHz, it gets about a 4.1GHz average. With lighter-threaded loads, all cores can achieve 4.2GHz with no AutoOC, and 4 of 6 cores hit 4.4GHz with AutoOC at +200MHz, the last two cores not far behind either. I must not have gotten too bad a sample, to get that big a boost with AutoOC. Single-core perf in Cinebench R15 is almost on par with the 9900K/KS at 209, and multithreaded isn't too far off either, at 1660.
  15. It's stock, well, except for the power limits being disabled. I did have the current limit set at 193A, but occasionally it seemed to like to spike above that (I've seen as high as 211A). Yeah, it's the Z390 Phantom Gaming 9, a full-size ATX board; I've looked into it and it has just about the worst VRM possible for its price point - even the far cheaper MSI Z390 Gaming Edge AC fared better. For the most part it hovered around 105°C with the 9900K (which pulled a similar current, but more power, at 5GHz), only occasionally getting too hot if it was a particularly hot day.

     My X570 ACE, after a day with the stock cooler, was hitting VRM temperatures in the mid-high 50s, and the CPU loved to constantly run at 95°C when playing Valorant. Thankfully the AIO bracket arrived today, and even in synthetic loads the CPU only hits the low 70s (with Precision Boost limits set to the motherboard design limits - the 3600 isn't getting anywhere close to them lol). With AutoOC enabled, it's hitting 4.4GHz on 4 out of the 6 cores, and 4.35GHz on the two worst cores; I'd bet a 4.4GHz all-core should be easily possible, and at some point I'll get round to locking in a nice OC. Now that I've got the AIO bracket, those high-50s VRM temperatures are high-30s; I'm guessing the CPU cooler was heating the VRM more than the current draw was lol. Idle temps are now a heck of a lot lower, ~28°C vs like ~45°C before - well, if you look at the actual CCD temperature anyway; the Tctl is still going completely mad. I was going to lock in a nice P-State OC, but it seems like P-State OCing doesn't exist anymore. Shame, it was a nice way to have decent idle temps and a high all-core OC; it would even run at intermediate clocks when it didn't need full speed, and that stupid Tctl bug wasn't there either.
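As a rough sanity check on those current figures, the power the VRM has to deliver is approximately core voltage times current (the ~1.25V figure here is illustrative, not from the original post):

```python
def package_power_w(vcore, amps):
    """Rough CPU package power: core voltage times current draw,
    which is also roughly what the VRM must deliver."""
    return vcore * amps

# At the 193 A current limit with a hypothetical ~1.25 V core,
# that's about 1.25 * 193 = 241 W; the 211 A spikes push past 260 W,
# which is why a weak VRM on this board runs so hot.
```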