Everything posted by Mira Yurizaki

  1. You can also download the Standard drivers from https://www.nvidia.com/Download/Find.aspx?lang=en-us instead of the DCH ones, which require the Control Panel to be installed from the Microsoft Store.
  2. USB 2.0 can support up to 480 Mbps, but in practice, after protocol overhead, it's more like 400 Mbps.
  3. That's also a lot of money to spend on a Switch
  4. I'll concur with the others that your CPU is still good enough, since you're not running applications that benefit from more cores. This will especially be the case if you're comparing scores through something like 3DMark (though your CPU-based scores will definitely improve) or Unigine.
  5. As an aside, HWiNFO recently moved to an "averaging" method, described at https://www.hwinfo.com/forum/threads/effective-clock-vs-instant-discrete-clock.5958/. They also mentioned that AMD has its own proprietary way of measuring clock speed in Ryzen Master, and I'm certain Intel has its own as well. All that tools like Task Manager, HWiNFO, and others can do is poll whatever the CPU exposes, either average it over time or report the value at that instant, and hope for the best.
  6. There are several ways to measure clock speed, the clock speed is constantly changing, there's only one number being reported even though multiple cores can be clocked independently, and Task Manager is polling at a rate of 1 Hz, all of which raises questions about how accurate this really is. I don't think Task Manager is broken per se. The system just isn't feeding it reliable data, or it's sourcing from something that's only reliable enough under default conditions. For example, my CPU is set to a flat 100 MHz x 40, and no utility reports it as operating at 4.0 GHz 100% of the time.
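    As a rough illustration of the 1 Hz polling described in the last two posts, here's a minimal Python sketch. It assumes the third-party psutil package and is not what Task Manager actually does internally; it just samples the reported CPU frequency once a second and compares each instantaneous reading against a running average, which is roughly the difference between a "discrete" and an "effective" clock number.

    ```python
    # Toy 1 Hz polling loop: each sample is an instantaneous snapshot of a clock
    # that may have changed thousands of times since the previous sample.
    import time
    import psutil  # third-party: pip install psutil

    samples = []
    for _ in range(10):
        freq = psutil.cpu_freq()              # reported frequency in MHz
        samples.append(freq.current)
        average = sum(samples) / len(samples)
        print(f"now: {freq.current:7.1f} MHz   running average: {average:7.1f} MHz")
        time.sleep(1)                         # 1 Hz, the same rate Task Manager polls at
    ```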
  7. I'm going to wager that "reason for banning" isn't considered personal data under the CCPA. Besides, any data covered by CCPA must have the option to be removed and well, if you can make a company remove the "reason for banning" information, that opens up a can of worms.
  8. You don't want a condition where the computer is constantly swapping. However, having storage allocated to virtual memory space lets you get by with less RAM than you would otherwise need to back all of your virtual memory space. There's a balancing act between having too little RAM and having just enough that you're not in a constant-swapping condition. Otherwise, you'd need a lot more physical RAM than you would ever actually use, just to avoid running out of virtual memory space (running out of space doesn't imply all of it is actually being used). In Windows, the setting is at Control Panel -> System and Security -> System -> "Advanced system settings" in the left pane -> "Settings" button in the Performance section of the Advanced tab -> Advanced tab -> Virtual Memory. I don't know where it is in Linux. However, I'd advise leaving these settings alone. If you're not having a problem, there's nothing to fix.
  9. Response time is a practically useless metric. The only differences between the two are likely what panels they use, what refresh rates they support, how many options you get to play with in the on-screen display menu, what other features they have (like FreeSync), and how adjustable they are.
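    If you just want to sanity-check whether the page file is actually being leaned on before touching those settings, a quick Python sketch (assuming the third-party psutil package) can report RAM versus swap/page file usage:

    ```python
    # Peek at physical RAM vs. swap/page file usage; psutil reports values in bytes.
    import psutil  # third-party: pip install psutil

    ram = psutil.virtual_memory()
    swap = psutil.swap_memory()

    print(f"RAM:  {ram.used / 2**30:5.1f} / {ram.total / 2**30:5.1f} GiB used")
    print(f"Swap: {swap.used / 2**30:5.1f} / {swap.total / 2**30:5.1f} GiB used")

    # If swap usage keeps climbing while RAM sits near 100%, that's the
    # constant-swapping condition described above.
    ```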
  10. I'll see if I can explain this in a clear manner. Way back when, RAM was expensive but people wanted to run more and more apps. To help alleviate the issue of not having enough RAM, OSes started to store data from RAM on the storage drive if it wasn't being used. The act of moving data back and forth between RAM and storage is called "swapping." This is useful in some cases, but if there's a constant amount of swapping going on, the system will slow to a crawl, as transferring data to and from storage is exceptionally slow.

    To support this feature, the concept of "virtual memory space" was created, wherein the OS can make applications believe there's more memory than is physically available. This simplifies app development because apps don't have to know how much physical memory (RAM) is available. The OS knows, and all the application has to do is make requests against "virtual memory." The OS handles translating these requests from virtual memory space to physical memory space. In Windows (Linux and others may be similar), virtual memory space is basically how much RAM you have plus how much storage was allocated to swapping (this shows up in C:\ as a file named pagefile.sys).

    Of note, you can get away with running your computer without storage space allocated to swapping, but this presents a few problems:

    - Some applications actually refuse to run if no storage space is allocated.
    - When an application wants a chunk of memory space, the OS will often give it more than what was requested. The idea is that the application will likely want more memory in the future, so this keeps the application's data from being scattered all over memory space. However, until the application uses it, the OS won't mark the memory as "in use," only reserved. The problem with not having storage space allocated is that applications can make enough reservations to take up all of the physical RAM. The next time an application wants more memory, the OS says "sorry, can't do that," and the application likely throws an "out of memory" error. When there's storage space allocated, the OS can shuffle the reserved portions to storage without much of a hiccup, since there's no data in them to begin with.

    If you look at Windows' Task Manager -> "Performance" tab -> Memory page, the "Committed" value tells you how big your system's virtual memory space is and how much memory all the applications have at least reserved.

    EDIT: tl;dr OSes allocate space in storage to dump stuff from RAM into if the data isn't being used. Swapping is transferring that data in and out.
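    To make the reserve-versus-commit distinction concrete, here's a small Windows-only Python sketch using VirtualAlloc through ctypes. It's only an illustration (the 256 MiB size is arbitrary): reserving address space barely moves the "Committed" figure in Task Manager, while committing the same range does, even before any data is written into it.

    ```python
    # Windows-only sketch of reserve vs. commit via VirtualAlloc (ctypes).
    import ctypes

    MEM_RESERVE    = 0x00002000
    MEM_COMMIT     = 0x00001000
    PAGE_NOACCESS  = 0x01
    PAGE_READWRITE = 0x04

    kernel32 = ctypes.windll.kernel32
    kernel32.VirtualAlloc.restype = ctypes.c_void_p
    kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                      ctypes.c_ulong, ctypes.c_ulong]

    size = 256 * 1024 * 1024  # 256 MiB, an arbitrary test size

    # Step 1: reserve virtual address space only; no RAM or page file backing yet.
    addr = kernel32.VirtualAlloc(None, size, MEM_RESERVE, PAGE_NOACCESS)
    input("Reserved 256 MiB - check 'Committed' in Task Manager, then press Enter")

    # Step 2: commit the same range, so the OS must promise RAM or page file for it.
    kernel32.VirtualAlloc(addr, size, MEM_COMMIT, PAGE_READWRITE)
    input("Committed 256 MiB - 'Committed' should now have grown; press Enter to exit")
    ```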
  11. My rule for the past 15 or so years has been to install only the drivers. If the additional software that comes with the drivers has something you'll actually use or, god forbid, is required to make the thing work, then install that as well. For instance, I never install GeForce Experience with my NVIDIA drivers, because it provides zero functionality I care about. But I have a Sound Blaster card whose software is needed to configure it, so I have that installed. I also always skip motherboard utilities, because all of that can be configured in the UEFI config anyway. As far as "auto installing" goes, I almost never see that. But then again, I'm careful about going through the install process to make sure I'm getting exactly what I want, rather than clicking "Next" mindlessly.
  12. I would argue that a lot of what makes Windows feel bloaty is that a lot of extra things are turned on by default. Windows feels a lot snappier when you remove all of the UI glitz and tweak it a bit. Literally the only difference between Lubuntu and Ubuntu is that Lubuntu uses a lighter-weight desktop environment and a different set of userland applications (presumably lighter-weight versions). Otherwise it's the same as Ubuntu, yet Ubuntu recommends much higher system requirements.
  13. You didn't mention your currency or location; that would've been nice to know.
  14. Mira Yurizaki

    Aight, i know liquid screen protectors aren't a…

    So what you're saying is... It's a scratch repairer :3 Might consider getting some to make some buttons I bought that got scuffed up shiny again.
  15. Is it new or used? And if it's used, does it come with warranty still or no? Because I don't think $250 is a fair price for a used, out of warranty 1080.
  16. Or not jack up the brightness on the display.
  17. PSY is set for life because he's a popular K-Pop artist with multiple albums, not because he made a single song that broke a record on YouTube. Notch is set for life because he worked on Minecraft for years, constantly improving it even after releasing it as a public alpha, not because he released a single version of it and sat on his ass all day afterwards. You don't get "set for life" by releasing one thing and being done with it. Also, again, luck plays a role. I'm sure PSY had plenty of songs nobody cared about before Gangnam Style (at least outside of South Korea and anyone not already into K-Pop), and Notch had plenty of programming projects that went nowhere. Or, to poke at another success story, Toby Fox and Undertale: that wasn't his first programming/game project, and it wasn't even his first creative work. Likely the only reason Undertale took off is that Fox was already prominently involved in something with a large following. These people didn't just poop out a single thing on their first try and suddenly get rich and famous. They had to work for it. Sure, you can have cases like Flappy Wings, where some kid (likely as an early project) cobbled together some 2D assets, applied some basic physics, and got a revenue stream out of it because of how popular it got, but the odds of that happening to you are like winning the lottery. If you think this doesn't make learning how to program worthwhile, you're doing it for the wrong reasons. If you want to make passive revenue streams, get yourself into investing, not programming.
  18. You may have been on some special plan and it ran out.
  19. If you're only in the game to make money, I'd argue you're in it for the wrong reasons. Also, it takes a massive amount of luck to even get popular enough. Even if you're at that point, that thing you created isn't going to set you for life. You have to keep making more things that people want. PSY isn't set for life just because Gangnam Style got so damn popular (in fact, I think the viewership basically leveled off). Or the alternative is take the money you made and throw it into something that will grow. Either way, if you want to make money, you're going to have to work at it.
  20. @PianoPlayer88Key That reminds me, I recall when I was building and setting up my Windows 98 machine. It has a Pentium III running at 800 MHz, PC-133 RAM (256 MB, I think), and a 7,200 RPM drive, so probably upper high-end for that time period. But holy moly, it's freaking fast. The thing boots up almost as fast as, if not faster than, any of my modern machines. Regarding the VRAM thing you were testing, I found an app that allocates VRAM and then waits forever, so I can see what happens when I run some games. It does try its best to remain resident in memory, but occasionally Windows does shove some of it out of VRAM. I'd have to go back and play around with it though.
  21. An iGPU technically wins by locality, but I don't think it practically matters in the end.
  22. Mira Yurizaki

    Coworker: What would a font size of -1 look lik…

    YOU FOOLS, YOU'LL CREATE A BLACK HOLE >:O
  23. If we were to put an objective metric on it, "snappiness" is basically the average input latency of the system. That is, if you press the "A" key on a keyboard and expect text to show up on screen, the faster it does this, the "snappier" it is. Ironically, older systems, like the 8-bit computers of the 80s, are actually snappier than modern systems. Why? It's a combination of the following:

    - The OS or system software ran from ROM, so it likely knew exactly where to go each time to grab data. Plus, ROM speed, at least at the time, was relatively very fast.
    - The components, like the CPU, memory, sound, and video, were simple. They had fixed addresses in memory space (which makes it easy for anything to talk directly to them) and were often easy to work with.
    - Basic input devices, like keyboards and controllers, sent data that was literally a few bytes at most, and the application received it directly. This is unlike, say, a USB keyboard or controller, where the payload may still be only a few bytes, but it also includes many more bytes for the USB protocol, which has to be packaged at the device and unpacked at the OS level before the app even sees it.

    tl;dr: the simpler the system, the less time it spends figuring everything out, and it's that figuring-out time that adds up as latency. You could apply the same principles to modern computers, but you can only get so far.
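    As a toy illustration of just one of those pieces (the numbers are assumptions, not measurements), here's a Python sketch of how much latency a polled input bus adds on average before the host even sees a keypress, compared with a keyboard matrix scanned much more often. The 8 ms figure is a common default polling interval for USB HID keyboards; the 1 ms scan is an assumed stand-in for a directly wired matrix.

    ```python
    # Toy model: keypresses land at random moments between polls, so on average
    # the host notices them about half a polling interval late.
    import random

    def average_added_latency(poll_interval_ms, trials=100_000):
        """Average wait (ms) between a keypress and the next poll that picks it up."""
        total = 0.0
        for _ in range(trials):
            press_time = random.uniform(0, poll_interval_ms)  # time since last poll
            total += poll_interval_ms - press_time            # wait until next poll
        return total / trials

    print(f"1 ms matrix scan (assumed)  : ~{average_added_latency(1.0):.2f} ms added")
    print(f"8 ms USB HID poll (typical) : ~{average_added_latency(8.0):.2f} ms added")
    ```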
  24. And that's exactly what I'm saying. Any extra frames being rendered for the purpose of having an advantage are mitigated by the fact that the server tick rate tends to be slower. The only advantage you get is maybe being able to see something sooner, but the practical perceived difference between 60, 120, and 240 FPS becomes increasingly hard to notice. This is on top of the fact that reaction times are at least a couple of frames at 60 FPS anyway, and that something you see sooner may not be enough to positively identify what you're shooting at, except maybe in a 1v1 match. The only thing that matters is who was aiming at the right spot and pulled the trigger within the window in which the client sends its snapshot. It doesn't matter who technically shot first; it's happening at the same time as far as the server is concerned. And even then, the server may do lag compensation and adjust the results as necessary. Tick rate is irrelevant to the problem that if you don't submit an action within the time slot of a server tick, your action doesn't matter, because something else got to it first. Placebo is a thing that can affect performance.
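    Here's a toy Python simulation of that argument. The numbers are assumptions (a 64 Hz server tick and a 200 ms reaction time), not taken from any particular game: the client sees an event on its next frame, the player reacts, and the click is only resolved on the next server tick, so clients at 60, 144, and 240 FPS can all land on the same tick.

    ```python
    # Toy model: regardless of client FPS, the action only takes effect on the
    # next server tick after it is sent.
    import math

    TICK_RATE = 64                 # assumed server ticks per second
    TICK = 1.0 / TICK_RATE
    REACTION = 0.200               # assumed human reaction time in seconds

    for fps in (60, 144, 240):
        frame = 1.0 / fps
        seen_at = frame                                     # first frame drawn after the event
        clicked_at = seen_at + REACTION                     # player reacts and fires
        resolved_at = math.ceil(clicked_at / TICK) * TICK   # next server tick
        print(f"{fps:3d} FPS: shot resolved by the server at {resolved_at * 1000:6.2f} ms")
    ```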