Everything posted by TechMasterMind

  1. This is a typical example of the images I'm trying to recover: https://imgur.com/a/cuc6nys This has to be the file system being broken, right? As you can see, so many files in a row don't work, yet plenty of working files were recovered. Do I fix this with software, or what? I don't care if the videos and images are garbled or have frames/audio missing; I just need something back, as there are some of these I CANNOT lose in their entirety.
  2. Here is my list thus far:
SSD: 0.5-2.5 W min-avg
Wireless card: ~2+2 W (Wi-Fi + BT)
Screen: 1-5 W (depends on Hz, nits, contrast, how "light mode" the content is...)
PSU: ~15-20% multiplier, e.g. my USB-C wall charger is slightly less efficient than my brick; I could probably shop around for the most efficient one?
*CPU: 4-20 W (lowest I can get it); the iGPU + SoC portion I haven't been able to reduce as of yet
GPU: 15-150 W, underclocked/undervolted to the max; it can be convinced to use basically nothing, but I don't believe a dedicated GPU can be turned off entirely
Mobo/ports/VRMs?: ?? W
*RAM: ~2 W per 8 GB on average?
*Fans: depends on RPM, number of fans, quality/efficiency? Obviously a passive laptop would be ideal, but you cannot get a passive laptop without unplugging the fans, and I'd expect only ~10 W of cooling capacity with them removed, which isn't enough even for such a detuned system.
Battery?!?: I'm sure with direct DC charging, custom circuits, a battery that can do more cycles or has lower internal resistance, or some REALLY DIY stuff like that, you could make things ~25% more efficient, but I'll just leave this here to say "bigger = better", as capacity is of course a factor in battery life.
Perhaps someone with better knowledge of .bat files than me can find a way to manually set power modes, i.e. force the lowest S0/S1 etc. sleep state when idle or in use, to prevent the SSD from drawing 6 W under load (some people, like myself, won't mind the small latency disadvantage as long as it isn't as bad as or worse than a hard drive): https://learn.microsoft.com/en-us/windows-hardware/design/component-guidelines/power-management-for-storage-hardware-devices-intro
^^^The asterisks mean you will probably need a compliant BIOS to get power usage down, i.e. not many laptops let you undervolt/underclock your RAM. I imagine one of Samsung/SK Hynix/Micron makes the most efficient RAM chips; historically it seems to have been Samsung, but I have no idea for DDR5, and finding which vendor makes a given RAM stick is difficult. We are fighting for every tenth of a watt here: if each component, or even each individual RAM or NAND chip, uses 0.1 W more, it adds up to watts. Mainly though, the CPU/SoC is what I'd most love to undervolt and mess around with, but even soldering in your own GDDR6 could have benefits XD, although that might be a bit much risk.
With all of these, my system still consumes 8 W at idle and 16 W under load, ~13 W minimum average, i.e. the CPU+GPU only account for ~1/2 of my consumption with no browser tabs loaded, fans barely running, and no wireless traffic or drive reads.
And before anyone ignores the point of the post and gives the least helpful comment ever about any individual component not mattering: yes, this has real-world use cases other than the environment and just wanting to, which are good enough imo. I live off battery/solar in a van, so I need to get my laptop under 200 Wh per day; 16 W x 15 h = 240 Wh. You see the struggle here? That's more than my fridge uses, and I don't want to have to choose my phone over my laptop to prevent perpetually rotting food (rough budget maths sketched below). Does anyone have any input on how I could further reduce the power consumption (not CPU/GPU tricks for power reduction, as that's been done to death)?
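To make the budget concrete, here's the maths as a quick Python sketch; the wattages and hours are just my own measurements and estimates from the list above, nothing authoritative:

```python
# Quick daily energy budget check for the laptop.
# Numbers are my own measurements/estimates quoted above.
BUDGET_WH = 200          # what my solar/battery setup can spare per day

measured = {"idle_w": 8, "load_w": 16, "typical_avg_w": 13}
hours_on = 15            # hours per day the laptop is actually in use

for label, watts in measured.items():
    wh = watts * hours_on
    print(f"{label:>13}: {watts:>4.1f} W x {hours_on} h = {wh:5.0f} Wh "
          f"({'over' if wh > BUDGET_WH else 'under'} budget)")
```

Running those numbers is exactly the problem: at the 16 W load figure the laptop alone blows the 200 Wh budget, and even the ~13 W average only just scrapes under it.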
  3. For reference, I am basing a lot of my understanding on this article from NVIDIA: https://www.nvidia.com/en-us/geforce/guides/system-latency-optimization-guide/ I am trying to get my frametimes and latency as consistent as possible, but I am lacking understanding. I know that programs like RTSS can produce 1% lows closer to the actual requested framerate cap, e.g. for a requested 200 fps cap the in-game limiter gives me ~170 fps 1% lows according to NVIDIA's performance monitor, whereas RTSS gives me about 185, or even 198/199 if I use its scanline sync with any value >= 1. The problem is that to know whether this is actually showing me frames at more consistent times, or just making a graph look nice and smooth, I need more understanding. So let me first go through the pipeline as I understand it with no render queue: purely the CPU, GPU, then monitor, with no operating system process queues, no render queue/pipelines, no mouse or anything. To give a timeline to reference, I will count the moment the mouse is polled and the CPU starts to simulate the game as 0 ms; these are purely example numbers to make sure I understand, and so my misunderstandings are clear enough to be spotted. Most of the trouble is in (3), so please look more carefully there.
0 ms: 1) The CPU a) simulates what happens in game, and b) at 1 ms creates a render submission for the GPU. +2 ms [let's say (2) happens after 2 ms, as (a) simulating the game and sending things to the server etc. took about a millisecond, and then the GPU also had to wait at least some amount of time for (b), which let's say took 1 ms.]
2 ms: 2) I think without a render queue this is just render time (the NVIDIA article also mentions "composite"?). Let's say the CPU and GPU are relatively similarly paced, each able to produce 500 fps, so this is also 2 ms. +2 ms
4 ms: 3) The newly rendered frame is sent to the monitor's memory (I think this is what the NVIDIA graph calls "scanout"?), and I think the full scan takes one refresh cycle as it progresses down the monitor, so on a 240 Hz monitor that's ~4 ms. Let's call that scenario a), where the monitor doesn't care whether a frame is currently being drawn: the scan in progress just continues with the new frame. So scenario a) is tearing, and let's have b) be VRR/no tearing/G-Sync. In scenario b) I think the monitor basically waits for the next full frame, so it stays unrefreshed until a new frame replaces the old one; but since we're running 2 ms frametimes (500 fps), the monitor is stuck running at its max refresh rate, as it will always accept the latest frame. I think the only difference from no VRR is that it will always be a full frame with no tearing? Or is it not different at all, or does it actually stop using G-Sync and fall back to V-Sync? I've heard a lot of conflicting info; this part is a massive knowledge gap for me. I think it still stores the frame, it just doesn't matter that much, so no V-Sync or anything, just producing the latest frame IN FULL; hence it will always have been produced within the last 0-2 ms at 500 fps, IF, IIIIFFF, I properly understand and that is how it works.
^But to properly cover this and make sure I have a good grasp on it, let's also make a quick scenario c): say the CPU and GPU times are both 5 ms (slightly CPU bound for the sake of argument), still 240 Hz VRR. The monitor waits 5 ms between frames, so even though it could output a frame in 4 ms it waits the full 5 ms before it starts scanning the frame out, which then of course takes an additional 4 ms; so 5 ms CPU+GPU, then 4 ms scanout. However, this will have VERY little relative variance IF the frame pacing is consistent, hence why knowing which RTSS/frame cap settings to dial in matters. But let's say this is the version where your CPU+GPU times are varying because you're STRUGGLING to hold 200 fps: you average 200 fps, but your CPU+GPU times bounce around by, say, 1-2 ms; still variance, but PC-based rather than monitor-based.
^It also comes to mind that we need a scenario d), the more likely one: capping the FPS. The system can still run 500 fps, but we cap the framerate to 200 fps, so 2 ms frametimes for both CPU and GPU. I believe this means the game renders for 2 ms, then waits 3 ms, then renders, then waits, etc.? Or more precisely: 2 ms of CPU work, then the CPU waits until 5 ms total has passed, and the GPU just waits on the CPU, so it also sits idle for about 3 ms. So basically your frames will be up to 3 ms out of date because the CPU is sitting idle, but your frames will also be SUPER consistent: yes, 3 ms extra latency, but provided you can pace those frames perfectly or near perfectly (which you should be able to, given that in this scenario we're running at 40% of the throughput the system is capable of), we will have basically 0 ms of variance! As the monitor is waiting, ready for the exact moment the GPU delivers a frame, we have CPU 2 ms > GPU 2 ms > scan > wait due to FPS cap 1 ms > CPU 2 ms... etc. Given that the monitor is actually waiting on the GPU, I think I got this wrong earlier: it's actually only going to be 2 ms, not 3 ms, after the CPU finishes its work before the monitor starts scanning the frame, because the wait happens after the render has finished; but this also means flaws in the frame pacing delay the refresh of the monitor by a tiny bit. Overall though, only 2 ms from the theoretical maximum latency and theoretically perfect frame pacing, if you can find a program to do it? d) seems like the way to go!
3.5) Then the pixels actually need to switch; let's say they switch to something my eyes can recognise as the new image in 2 ms. +2 ms
So given (3) scenario a), 6-10 ms total "CPU-to-photon" latency, because the frame is split between the top and bottom of the monitor? Because the frame is chopped up so much, if you miss the part you're reacting to, which say is in the middle 100-200 pixels onscreen, it'll take another 4 ms before you actually see the next scan of it; hence you get ~4 ms of variance here. b) gives a bit more consistency at the (depending on your point of view) ?cost? of the latest frame no longer being sent to the monitor straight away. But in exchange, I believe the frame held in the monitor gets replaced by the latest one from the GPU, which with our unrealistically consistent 500 fps will be at most 2 ms out of date but could also be 0 ms out of date, meaning you get a nicer-looking, more consistent FPS at the cost of TECHNICALLY more latency, though I'm not sure whether this adds any more latency than waiting a full refresh cycle.
So actually, if G-Sync works how I think it does, it's actually BETTER for really high FPS, where many people say it's useless above the monitor's refresh rate? E.g. at 1000 fps you get 1 ms of variance and at 5000 fps you'd get 0.2 ms of variance. Although you'd always get the delay of waiting ~4 ms for the next full frame, at 1000 fps the frame would only ever be 1 ms old in GPU time and 2 ms old counting CPU+GPU time; i.e. the latency just keeps decreasing at higher framerates, whereas without G-Sync you could always be waiting 4 ms on a 240 Hz monitor to actually see the scanout if you just missed the part of the frame you wanted to respond to.
c) Here you have ~3 ms of latency, with variation based on how consistently your CPU/GPU/software spits out frames, which is not ideal but still probably less variance than a).
d) The chosen one: a 200 fps cap should add ~2 ms of latency over the theoretical minimum, but it should be as consistent as your framerate limiter will allow; hence RTSS is going to be much, much better than your in-game cap most of the time.
So to try and summarise: d) seems best for latency if your framerate is under 2x your monitor's refresh rate, and it also gives the most consistent frame pacing, provided you can find a framerate limiter that is consistent enough and doesn't itself add latency. But unfortunately RTSS and every FPS limiter other than the internal ones seem to add latency if you want more consistency than the internal limiter gives; I don't actually understand the reason for this trade-off, perhaps they are creating a buffer in order to get the consistency? The second-best option is b), which is best if you have VERY high framerates and the absolute lowest latency is what matters: 5000 fps Minecraft duels, perhaps. Can anyone suggest better ways or programs for getting consistent latency/frametimes, or possibly a way I can better collect data? Exporting logs to .csv files and using HWiNFO64 to somehow graph this stuff all looks like a nightmare to me, so if there are tutorials you know of I'd be grateful to have a process for testing this. (A rough worked version of the scenarios above is sketched below.) And as I know people are prone to comment the most useless things imaginable like "you're overthinking it": this is partly an academic exercise, partly because I want to be certain of a consistent experience; I like tinkering and enjoy getting the maximum competitively, and I don't care how useful it actually is.
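To make the scenarios above concrete, here's a rough Python sketch of the arithmetic as I've described it. The stage times are purely the example numbers from the post, and the VRR behaviour (the monitor starting its scan as soon as the GPU hands over a full frame) is my assumption and may well be wrong:

```python
# Rough "CPU-to-photon" arithmetic for scenarios a) and d) described above.
# All stage times are the example numbers from the post, not measurements.

CPU_MS = 2.0              # simulate + build render submission
GPU_MS = 2.0              # render time at ~500 fps
REFRESH_MS = 1000 / 240   # one refresh cycle on a 240 Hz monitor (~4.2 ms)
PIXEL_MS = 2.0            # pixel response until the change is visible

# Scenario a) uncapped, fixed refresh, tearing: the new frame replaces the old
# mid-scan, so the extra wait depends on whether the scan has already passed
# the region you care about (0 ms best case, ~one refresh cycle worst case).
a_best = CPU_MS + GPU_MS + PIXEL_MS
a_worst = a_best + REFRESH_MS

# Scenario d) 200 fps cap on VRR (my assumption: the monitor waits on the GPU,
# then scans the whole frame): every frame pays the full scanout, plus the cap
# holds the next simulation back by however long the CPU and GPU sit idle.
cap_period_ms = 1000 / 200
d_latency = CPU_MS + GPU_MS + REFRESH_MS + PIXEL_MS
d_staleness = cap_period_ms - (CPU_MS + GPU_MS)   # idle time added by the cap

print(f"a) tearing @ 500 fps:  {a_best:.1f}-{a_worst:.1f} ms, ~{REFRESH_MS:.1f} ms of variance")
print(f"d) 200 fps cap on VRR: ~{d_latency:.1f} ms + up to {d_staleness:.1f} ms staleness, ~0 ms variance")
```

That reproduces the 6-10 ms range I worked out for a) and shows why d) trades a little fixed latency for (in theory) near-zero variance.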
  4. Yeah, I am asking what the limitation could possibly be; it's certainly not the game engine, as the game is capable of running 170+ fps with my setup.
  5. I mean, I wouldn't usually jump to saying it's VRAM, but I have no idea what else it could be, and the FPS doesn't change all that much when I change any sort of setting? What could it be? Even if consistent, it is pinned at around 95% VRAM; what else could it be if it's not showing up?
  6. But that's like what a lot of *games* use, and they're making 3D worlds, and Linux and macOS certainly don't do this.
  7. Yeah, the problem is they're already on very low and every other setting is at minimum as well, and the resolution is at 50% (even below it with DLSS), so I'm just confused how the game uses ~5 GB and the OS manages to use 1 GB with nothing running.
  8. It does exist, I have it, and I also quoted how much was being used. It's a laptop 3060, 130 watt. (Hence the 5800H.) No, it's not being limited by being in a laptop beyond the VRAM; it's definitely the VRAM usage that's the issue.
  9. So my copy of F1 22 in VR, with the latest drivers and everything, was running at just 50 fps *even in menus* with everything on minimum (DLSS Ultra Performance, 50% scaling etc.), on an RTX 3060 with 6 GB of video memory, a Ryzen 7 5800H, and 32 GB of RAM. So I checked Task Manager to see what was limiting the framerate and saw the video memory at 5.6/6 GB, despite only being at 1400x1400 per eye with 50% scaling and Ultra Performance DLSS, so I was basically running 350x350 per eye if I've got my maths right (quick sanity check sketched below); but even if it were 700x700 per eye it would still be way too much VRAM usage. I've done everything on the settings side I can, so I went to Windows: restarted and closed all programs and background apps, set the Windows advanced settings to best performance, set the background wallpaper to solid black, aaaand it was still using 0.8 GB of VRAM!?!?!?? How the heck am I supposed to reduce VRAM??!? What do I need to do, please help. P.S. Please don't hyper-focus on this being a new game or VR; I just want to know how to reduce video memory usage, in game and in Windows (but as the game is at minimum settings I suspect Windows is to blame).
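As a sanity check on the maths (not from any official source: the ~33% per-axis render scale for DLSS Ultra Performance and the 4-bytes-per-pixel colour buffer are my own assumptions), here's the rough arithmetic in Python:

```python
# Rough per-eye render resolution and framebuffer size estimate.
# Assumptions (mine): 50% resolution scale applies per axis, DLSS Ultra
# Performance then renders internally at ~33% per axis of that target,
# and a single colour buffer costs ~4 bytes per pixel (RGBA8).
native = 1400                      # per-eye resolution reported by the game
scaled = native * 0.50             # 50% resolution scale -> 700 px per axis
dlss_internal = scaled * 0.33      # Ultra Performance render -> ~230 px per axis

def buffer_mb(px_per_axis: float) -> float:
    return (px_per_axis ** 2) * 4 / 1e6

print(f"internal render target: ~{dlss_internal:.0f}x{dlss_internal:.0f} per eye")
print(f"one colour buffer at native res: ~{buffer_mb(native):.1f} MB per eye")
# Even several full-resolution buffers per eye only add up to tens of MB,
# nowhere near the ~5.6 GB Task Manager reports, which is why I don't
# understand where the VRAM is going.
```

So whatever is eating the VRAM, it doesn't look like the render resolution itself.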
  10. Yeah, I guess, but I feel it's important, as TN and IPS are now similar prices and TN panels can do 100% sRGB when used head on. So if you only want colours that are *good enough* and then care about speed, like me, you might be tempted to go for a TN: if, hypothetically, an IPS *isn't* inherently better at producing colours head on, and a TN *is* inherently faster, then with all the rest of the specs being the same the TN would in that scenario be the obvious choice. But the thing is, no one seems to genuinely know, and hence we don't actually know which is the better choice, which I feel should have been covered at some point by a major tech outlet, but I can't find any that has, even the more technical ones.
  11. There are a lot of articles, but none of them actually give 1) the reasoning why, or 2) why this would make even similarly specced TN panels *inherently* worse. The articles all go on and on about how, in general, TN monitors are like x and IPS are like y. The best of them will mention very briefly something about what's happening to the crystals, or what the average specifications of the monitors on the market are, but never will they talk about *why* that is or why it makes TN *inherently* less colour accurate, even dead on.
  12. Fair enough, I guess I just need to learn more. Yeah, I agree TN seems to be dying off, but perhaps it might not ever *quiiite* leave us if some people continue to value response times above all else, although the prices are now so similar that there aren't really distinctions on the spec sheet at similar price points. The only distinction will be IPS vs TN, which, if the rest of the specs are reflective of the monitor, wouldn't make a difference; but hypothetically, before making this post, I realised there could be differences that cannot be translated to a spec sheet, even if I don't know what difference twisting vs rotating crystals would make.
  13. Right, but *why* does twisting vs rotating the crystals make one *inherently* worse? No one explains that. They only say that *in general* one panel type will be worse than the other.
  14. OK, I mean, I can't really look at every monitor individually, but I get the impression from your answer that better colour specs will be reflective of a better monitor? Even if it is TN vs IPS? And I mean, TN panels could be faster, so many people who *just* care about that would go for a TN.
  15. I'm not talking about the average TN vs IPS monitor, I am talking about TN and IPS monitors with the same specifications. I would need a more empirical difference I could point to before completely dismissing IPS panels, rather than simply people's anecdotal experiences, which will almost certainly be with wildly differently specced IPS and TN panels.
  16. Well, I don't need a TN panel to be more accurate than the most accurate IPS panel, I just want 100% sRGB, which TN panels can certainly do. Unless there is some other way of measuring how "good" the colours are on a monitor? So I would be interested in what exactly this hardware limit is. What causes TN panels to not be able to produce colours as well as a similarly colour accurate IPS monitor?
  17. OK, but on what you do seem to know: how would a colour have more "vibrance" on one monitor than another? Surely, when viewing straight on, both are sending the same frequencies to your eyes if they reproduce the colours equally accurately? Could you describe further what you mean?
  18. So anyone can tell you IPS monitors *generally* have "better colours" compared to TN, but what exactly does that mean? If I find an IPS monitor that covers 98% of sRGB and a TN that does the same, are they equal in that respect? Or will the IPS still somehow be better? And in the case of response time: if the TN panel has a 1 ms GtG rating and the IPS monitor has the same rating, are they equal in that respect, or is the TN panel still going to be quicker? Basically, I want a monitor that is fast (priority) but is also as close to covering 100% of sRGB as possible, and if an IPS monitor can genuinely reach 1 ms without ultra low motion blur or anything, and both monitors are 165 Hz, same resolution, FreeSync etc., then I'll take the more colour accurate one. However, I just don't know if there is more to having "better colours" or a "faster panel" than colour accuracy, refresh rate, and response time (or at least the GtG response time on the spec sheet).
Edit: people seem not to understand this post. *ALL ELSE BEING EQUAL*, will identically specced IPS and TN panels differ in colours or response time? And please, please do not tell me "I will know the difference when I see it". I don't want anecdotes from people who have almost certainly been looking at wildly differently specced TN and IPS monitors; I want to know empirically what difference is causing TN panels to lag behind IPS monitors even with the same specifications, if they in fact do lag behind.
  19. So TL;DR: since I'm already running 3200 MHz, the faster kit is more likely to work in this instance? And the CL15 timing will only apply via a memory profile?
  20. So I'm looking at getting some memory for my laptop: 2x8 GB kits, but one is quoted as CL15 2666 MHz and one as CL22 3200 MHz. From looking at benchmarks it seems that for every drop of 1 CL you get about 1% more performance, and a similar 7-8ish% jump going from 2666 to 3200. My laptop is currently running 3200 MHz (8 GB, hence the upgrade), so I would expect 3200 MHz to run fine. But I'm not sure if the CL15 kit will be able to do CL15 out of the box. As I'm not aware of any way I could lower the timings on my laptop, if the timings don't come out of the box at CL15 they would stay that way, and I do not want to buy the CL15 kit only to realise a few days later in CPU-Z that it's running CL20 or something and have to return it. (Rough latency comparison sketched below.) Can anyone give me some clarification? Thanks a million in advance for your help! : 3
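For what it's worth, the standard way I'd compare the two kits on latency (my own arithmetic, not from the kit spec sheets) is to convert CAS latency into nanoseconds:

```python
# Compare the two kits by absolute CAS latency in nanoseconds.
# true latency (ns) = CL * 2000 / data rate (MT/s) - standard formula;
# the two entries are just the kits from my post.
def cas_latency_ns(cl: int, data_rate_mt_s: int) -> float:
    return cl * 2000 / data_rate_mt_s

kits = {"CL15 @ 2666": (15, 2666), "CL22 @ 3200": (22, 3200)}
for name, (cl, rate) in kits.items():
    print(f"{name}: {cas_latency_ns(cl, rate):.2f} ns first-word latency")
```

That works out to ~11.25 ns for the CL15 kit vs ~13.75 ns for the CL22 kit, so the CL15 kit wins on absolute latency while the 3200 kit has more bandwidth; but it all hinges on whether the laptop's BIOS will actually run the CL15 timings (e.g. from a memory profile), which is exactly what I'm unsure about.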
  21. Gonna stick with my current laptop for a while longer then, alright, at least it will come c: ...right?
  22. So I have USB-C pretty much everything and I love it. USB-C even has a theoretical capability of 240 watts, yet you only ever see laptops with 100 watt USB-C charging and 200-250ish watt charging via the proprietary charger. I am planning to upgrade to a USB-C charged laptop as I really don't like travelling with the proprietary charger, and the transformer brick is always a pain to place down. My plan is to wait to upgrade either until 240 watt USB-C laptops hit, or until they hit and then a new CPU/GPU generation comes out. The problem is, how will I know? Could I be waiting 5 years? Will it never happen? Will I have to scan the specifications of the latest laptops time and time again until I see one? Could this even be a fool's errand; is there a reason why they aren't being implemented right now? This really would be a big upgrade to me even if it seems a bit odd to you, so I'd like to say thanks a million in advance for your help! x3
  23. I mean, if I lose my phone and my laptop in public I don't care about insurance, I want my data back. Plus I have a whole setup, e.g. battery pack etc., in my bag, and I will want it back; being stuck in another country, or even at a campsite without a tent or my keys, is a no-go. Just having a GPS tracker would be useful: if I carry £2000 worth of stuff plus potentially irreplaceable items, I will want to find it, even if a tracker costs £100, and the peace of mind from this is huge.
  24. a) Loss also exists. b) If you confront someone over something like this and they are anywhere but their house, I'd say 9/10 times you'd be successful, especially if you can draw people in to deliberate.