
nerd866

Member
  • Posts

    45
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Location
    Alberta

System

  • CPU
9900K @ 5GHz
  • Motherboard
ASUS Prime Z390-P
  • RAM
    128GB Corsair Vengeance
  • GPU
    EVGA 2060
  • Case
Thermaltake View 31
  • Storage
NVMe Samsung 970 1TB, SATA Samsung 960 Evo 1TB, 2x Crucial MX500 500GB, 2TB 7200 RPM HDD
  • PSU
Cooler Master 750W 80 Plus Gold
  • Display(s)
ROG Swift 27" 1440p @ 165 Hz, 3x BenQ 1080p @ 60 Hz
  • Cooling
Noctua NH-D14
  • Keyboard
    HyperX Mechanical
  • Mouse
Logitech G502
  • Sound
M-Audio Fast Track C600 USB
  • Operating System
    Windows 10


nerd866's Achievements

  1. Yeah I know, and it only gets worse as density goes up. Not to mention the time it takes to run stability tests, ugh.
  2. I'm a heavy FL Studio user. Regarding the 64GB of RAM: I had that much in my old 3930K (6c/12t) and I was hitting CPU and RAM bottlenecks at about the same time, so it was a pretty good pairing. I do a lot of sample work with orchestral stuff, which eats up RAM. If your workflow doesn't demand crazy RAM usage, I'd go 32GB. The 1500X won't likely outperform my 3930K, and that was pushed to its limit in 64GB projects. I'd go 32.
  3. I really appreciate the insights, everyone, thanks! On a side note, attempting to OC 128GB of RAM flipping sucks. Haha
  4. I wouldn't expect a RAM module (especially a non-ECC one) to never have an error, but I'm curious how often one should expect an error, and what happens to that error rate as density increases. In other words, would I expect 1x4GB of RAM to have fewer errors than 2x8GB, which would have fewer errors than 4x32GB? How many errors per day/month/year/whatever would a 4GB module typically get? What about 4x32GB? Is there a point where buying non-ECC is a bad idea on the grounds that it will hurt system stability / data integrity compared to a system with less RAM? Obviously ECC makes more sense in some use cases. What I'm wondering is whether there is a measurable decrease in data integrity with non-ECC as you go up in capacity. (A rough scaling sketch follows after this list.)
  5. Hey! Now that 32GB DIMMs are available to the masses, how about an overclocking guide for 128GB of RAM in a consumer motherboard?
  6. Nope, definitely not. My other workstation hardware (monitors, MIDI keyboard, studio speakers) takes up so much space that it's just not possible. I'm already using a very large desk and, no matter what I do, I never seem to have quite enough space, even as-is. No plans to add a PC case to the mix, either.
  7. I'd probably go with Mac Pros. I don't think there's a force in the world that could get me to justify those things.
  8. I had to smack a computer multiple times to get it to boot once. The hard drive was almost dead and it would freeze when booting up Windows. If you smacked it, the mechanical hard drive would jerk or something, I guess, and it would be able to load a bit more data. Repeating this enough times eventually let me back up the hard drive and replace it. Those were more naive days, before I had robust backup solutions. At least I could take my frustrations out and be productive at the same time, so that was pretty cool.
  9. Not too bad, but I currently have a case fan super-glued to my CPU cooler because it technically doesn't support 2 fans... It does now! Oh, also... I may be balancing my middle monitor on a piece of cardboard.
  10. Back in the day (I was a PC gamer in the 90s), the big thing was being able to crank up the resolution and detail settings as high as possible. This was before dynamic lights, 4K textures and crazy-impressive shadow effects. "Max settings" typically meant not-terrible textures and more flashy, sparkly things on power-ups and gunfire. Running a modern game at max settings today shoots the system requirements into the stratosphere. At 1080p, it seems like max texture quality has very little impact on visuals. Setting shadows to "Ultra" rather than "Medium" or "High" costs a lot of system resources for not much visual fidelity. Setting Texture Filtering in the NVIDIA / AMD control panel to "High Quality" rather than "Quality" seems to cost FPS without much tangible gain, either. It could just be me, but I tend to use the following kinds of settings in modern games:
    • Medium / High texture quality
    • Low / Medium shadows
    • 4x AA
    • 8x Anisotropic Filtering
    • No motion blur / distance blur / etc.
    • Limited post-processing
    Do most of you tweak NVIDIA / AMD control panel settings or just leave it all at default? I've found those settings have a profound impact on visual fidelity and performance, so I spend a bit of time in there. Additionally, for people who game at 144 Hz or higher: what settings do you use in modern games? Do you consider it feasible to run games at High / Max settings, or do you prefer Low-Medium settings?
  11. Of course, true future-proofing is impossible, so I tend to use a definition kind of like this: is this computer capable of doing its primary task for more than 5 years with minimal upgrading and, after that, will this computer be powerful enough to serve me ANY purpose for an additional 5 years? The oldest computer in my home that's still being used (i.e. running regularly and serving a purpose) is an 11-year-old Core 2 Duo hooked up to an old 720p TV for some specific gaming tasks. I threw a $60 SSD and Windows 7 in it and it's completely fine for 720p gaming. It runs games like Stepmania and Jackbox just fine, which is perfect for its role as a social gaming rec room machine. It's a fanless build with no mechanical hard drives, so it's pretty immune to mechanical failures, short of the PSU (which is still working great). My current primary machine still benchmarks within the top 10% of systems (on Passmark CPU benchmarks), despite being nearly 6 years old. It still serves its primary role as a workstation while doubling as a capable, if not stellar, gaming machine. When I buy a new one, this machine will take the role of my Core 2 Duo, which will probably be relegated to file server duty. My primary will make a great dedicated game server, as it's an i7-3930K with a lot of RAM. It will be able to do that for the better part of a decade before it becomes useless. It may not be future-proof, but it's pretty future-resistant for at least a 10-12 year lifespan, simply by changing the machine's role in my house from primary workstation to dedicated server host and social gaming hub. By the time I replace my primary machine, my Core 2 Duo will be nearly 15 years old, yet still useful as a file server. That's pretty future-proof by my standards.
  12. If anything, the answer seems to be a combination of these two responses: 1) "Not enough games would benefit from it. Every game benefits from a GPU. Not every game would benefit from a physics card, therefore it can't be ubiquitous the same way that a GPU is." 2) "The latency involved in divvying up the physics workload to a separate piece of hardware and getting the result back wouldn't result in a significant performance increase." I can respect both of those arguments, especially the first one. If most gamers don't need one, why bother? To the second point, I have a crazy idea: A handful of games are designed to be played with specific controllers: Flight sticks, farm equipment controllers, steering wheels, etc. This seems to be viable enough that it's still a thing in today's market. Using that same logic, couldn't a game developer feasibly make an internal PCI-E card designed to enhance their game (or their engine), reaching performance levels previously unheard of on modern hardware in their specific software? Some specialized controllers cost $200-300. Paying $200 for a PCI-E card to enhance a game seems pretty equivalent.
  13. This is true, but we've seen more than a handful of games that do, from Universe Sandbox (a physics simulator) to Planet Coaster (which can push my 3930K to over 80% load without much trouble). Part of the reason we don't see games that use 20 threads is that developers can't assume a user has 20 threads. If every gaming build had a 1000-core physics card, like they have a 1000-CUDA-core GPU, this would no longer be an applicable argument. These are the kinds of games where physics cards seem to make the most sense. I do understand that and I see your point. Physics calculations are a point of contention, where the general rule seems to be "the more physics calculations you can get done in a frame, the better performance will be and the more accurate the physics simulation will be." It seems similar to a GPU in that sense, where more processing power always translates into a better experience (higher resolutions, larger textures, etc.). Since that seems to be the case, being able to simulate physics more accurately for more objects seems to be a natural course of action for games to take. Games like Kerbal Space Program don't multithread well, but imagine if they did - instead of being stuck on something like the Unity engine, a hypothetical future KSP could use a 1000-core physics card for its rocketry number crunching, rather than an 8-core CPU (or 2-3 threads in the case of KSP! Blah). Wouldn't having the processing power for extra precision here be a guaranteed performance gain?
  14. With gaming CPUs having, at most, 20 threads, and physics calculations potentially applying to hundreds or thousands of objects in a scene, using the CPU for physics calculations seems like a huge bottleneck in terms of what we can do with games (a rough frame-budget sketch after this list illustrates the scaling). Using the GPU for physics calculations seems wasteful as well, as now we're either 1) using a portion of the GPU die for physics work that could be dedicated to more shader work and polygon crunching, or 2) using a portion of the GPU's performance for physics calculation, again cutting into that graphics-processing bottom line. This question can be generalized even further - I understand that the CPU is a general-purpose processor, a jack of all trades. Taken to the logical extreme, this seems to suggest that an ideal gaming build would have a CPU for game instructions, a GPU for graphics, a physics card for physics simulation, an AI card for AI simulation, etc. Putting all of that onto a general-purpose 4-20 thread CPU seems like a huge performance limiter. I don't expect it to get this crazy, but... Considering we decided years ago to make graphics its own dedicated card / processor, why did we stop there? It obviously worked with GPUs. There seems to be something to this. What gives?
  15. Some games benefit from 8+ GB of RAM more than others. In some games, the difference may be a slight improvement in load times: it's the difference between being able to keep some old data cached in RAM and having to completely evict it. In others, it may be a massive performance boost, if the game's data can't otherwise all fit in physical memory at once. Games with dedicated servers or a lot of mods installed start shooting the RAM usage into the stratosphere; you don't really see these features on consoles the same way. Adding a dedicated server to a game I'm playing has increased my RAM load by 3-4 GB in some cases. Also, sometimes a game will use whatever RAM is available (to a point). One example of this is 7 Days to Die. On my 64 GB machine, I've seen it use upwards of 10 GB in Task Manager. While playing with my roommate, who has 8 GB of RAM, I see it more in the 5-6 GB range on his machine. The game literally uses more RAM because it has more RAM to work with, improving general performance by not unloading data as aggressively.
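Regarding the error-rate question in item 4, here is a rough back-of-the-envelope sketch in Python. The per-gigabit error rate is a made-up placeholder (published DRAM soft-error studies vary by orders of magnitude and depend heavily on the DRAM generation and environment), so only the roughly linear scaling with total capacity is the point, not the absolute numbers.

```python
# Rough sketch: expected soft errors per year vs. total DRAM capacity.
# ASSUMPTION: the per-gigabit rate below is a placeholder; real figures
# vary by orders of magnitude between studies and DRAM generations.
# Only the (roughly linear) scaling with capacity is the point.

ERRORS_PER_GBIT_PER_YEAR = 0.05  # hypothetical placeholder rate

def expected_errors_per_year(capacity_gb: float) -> float:
    """Expected soft errors per year for a given capacity in gigabytes."""
    capacity_gbit = capacity_gb * 8  # gigabytes -> gigabits
    return capacity_gbit * ERRORS_PER_GBIT_PER_YEAR

for total_gb in (4, 16, 128):  # 1x4GB, 2x8GB, 4x32GB
    print(f"{total_gb:>3} GB total: ~{expected_errors_per_year(total_gb):.1f} expected errors/year")
```

Under a constant per-bit rate, 128GB simply accumulates about 32x the errors of 4GB per unit time, which is the shape of the trade-off the question is asking about.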
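To make the bottleneck argument in item 14 concrete, here is a minimal frame-budget sketch. The per-object cost and per-unit throughput figures are made-up placeholders; only the proportionality (objects simulated per frame scales with the number of execution units) is the point.

```python
# Back-of-the-envelope frame budget for per-object physics work.
# ASSUMPTION: all figures are made-up placeholders; only the proportionality
# matters: objects simulated per frame ~ execution units * per-unit throughput.

FRAME_BUDGET_S = 1.0 / 60.0      # one 60 FPS frame
OPS_PER_OBJECT = 2_000           # hypothetical cost of one physics step
OPS_PER_UNIT_PER_S = 1e9         # hypothetical throughput of one execution unit

def objects_per_frame(execution_units: int) -> int:
    """How many objects fit in one frame's physics budget under these assumptions."""
    ops_per_frame = execution_units * OPS_PER_UNIT_PER_S * FRAME_BUDGET_S
    return int(ops_per_frame / OPS_PER_OBJECT)

for units in (8, 20, 1000):      # "typical CPU", "20-thread CPU", "1000-core card"
    print(f"{units:>4} execution units: ~{objects_per_frame(units):,} objects per frame")
```

In practice the transfer latency raised in item 12 eats into that budget, which is exactly the counter-argument, but the raw arithmetic shows why a wide accelerator looks attractive for this kind of per-object workload.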