Everything posted by Archer20

  1. Unpopular opinion, but electric cars are made to be disposable. Right to repair or not, the current design doesn't permit feasible, cost-effective replacements. I cannot see a future "more service friendly" design being feasible either. An internal combustion vehicle will, for a long time, be easier, cheaper and more practical to keep on the road, even many years past its serviceable life. I can get a carb rebuild kit for 30 dollars for my 1955 Chevy. Even on my modern Silverado, everything is very serviceable, and parts are relatively cheap. EDIT: Even for modern internal combustion cars, the tech can be problematic sometimes, and companies withhold valuable shop manuals and electrical schematics more so than ever. Some even try to keep some functions and programming locked behind proprietary scan tools. This is... thankfully defeated in most cases by the aftermarket. Now, cars aside, I am a staunch opponent of manufacturers soldering RAM and SSDs to the mainboard of laptops. This makes the units much harder to service and upgrade. Dell even does this with their Latitudes now, which are business oriented!
  2. This doesn't make sense for a huge reason already mentioned in this thread: I thought the goal was to reduce the power consumed by systems while still increasing performance. What is occurring is basically the same as giving a car engine more displacement/more cylinders and consuming more fuel for more power, instead of refining the design. I already have a 1000W PSU, as an example. If GPUs are going to start consuming 400-600W, that is 40-60% of my PSU's total capacity. Many enthusiast processors consume 140-250W. It just seems like a lot of power draw, and we are going to start seeing 1500-2000W PSUs and dual-PSU solutions become more common. A single card consuming what an SLI/Crossfire setup did years ago... it's nuts. (Rough headroom numbers are sketched below.)
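A rough way to put numbers on that is to add up component draw against PSU capacity. This is a minimal sketch with assumed, illustrative wattages (the 1000W PSU and the 400-600W GPU figure come from the post above; the rest is a guess), not measured draw:

```python
# Rough PSU headroom check with assumed, illustrative numbers.
PSU_CAPACITY_W = 1000              # the 1000W PSU mentioned above

components_w = {
    "gpu": 550,                    # mid-range of the 400-600W figure above
    "cpu": 250,                    # upper end of the 140-250W enthusiast CPU range
    "board_ram_drives_fans": 100,  # assumed ballpark for the rest of the system
}

total_draw = sum(components_w.values())
print(f"Total estimated draw: {total_draw} W")
print(f"GPU alone: {components_w['gpu'] / PSU_CAPACITY_W:.0%} of PSU capacity")
print(f"Headroom left: {1 - total_draw / PSU_CAPACITY_W:.0%}")
```

With these assumed numbers the GPU by itself eats over half the PSU, which is the point: a single card now draws what a whole SLI/Crossfire setup used to.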
  3. I'd hedge that they will release it, or at least do a limited release, towards the end of next year (or maybe summer). There are currently rumors of a 3080 Super circulating. If the rumors are true, those would release in Jan 2022. I don't really know what NVIDIA is doing nowadays. Things used to be so simple: xx50, xx60, xx70, xx80, xx80 Ti, and Titan. And the best part, they were all readily available to purchase!
  4. The Maximus Hero has always been a great choice! Congrats! It will be worth it for the peace of mind.
  5. As good as the 970 was for its time, running it in SLI won't net you good results in most modern titles. It would be better to sell it, as others suggested, and buy a better single-card solution.
  6. There are many ways to induce heat fatigue in a PSU; I only mentioned one. Indeed, it appears you left a bit of capacity unused. I don't know what you had in your prior build, however. Ryzen came out in 2017, so it's safe to assume you had an earlier build that could have pulled more power or induced more heat.
  7. Typically, thermal limits are set through the BIOS. I don't believe there is a software solution for that. IMO, don't run the computer until you get a replacement fan.
  8. I'd have to do some research. It could be any number of things. Anything from hardware to firmware to software can cause high DPC. It may be the firmware (BIOS) on the board or the way components are configured on it. RAM latency is different, as is cache latency.
  9. Exactly. That is why I always leave at least 25% of the PSU's wattage unused.
  10. Personally, I would only recommend ASUS and ASRock boards. I've seen way too many issues with MSI and Gigabyte.
  11. It's highly dependent on the game, as you've ascertained. CPU-dependent games will suffer much more, and Battlefield is a CPU-dependent game for sure. I've seen high interrupt/DPC latency cause microstutter, but in that case it was induced by Corsair CUE 2 for the RGB lighting. I've also seen malfunctioning USB devices cause the same stutter by slamming the CPU with interrupts.
  12. That would be the northbridge area, near the VRMs. If a VRM is toast, that would certainly cause issues. Sounds to me like it is indeed the mobo. You're welcome!
  13. It...can. It depends on how much latency and how many CPU interrupts are generated. Needless to say, you should be fine.
  14. If you smelled something burning, the first thing I would do is shine a light in the direction of the smell and check for burned circuitry on the board. Have you investigated that? That would 100% confirm the board.
  15. Forgive me, I have a 120-dollar Blu-ray burner that has been sitting on my shelf for 8 months and that I still have not installed.
  16. That is correct. EPYC is an SoC (system on chip).
  17. Yeah, Threadripper handles most tasks with ease. EPYC is mainly for server tasks that require a ton of resources, and it may perform worse in some cases for the tasks that you mentioned.
  18. Excellent explanation for him. Here I am using terms like half duplex and collision (referencing packet collisions in a network hub). Sometimes I speak too much like an engineer.
  19. PCIe 3.0 is still difficult to max out. You are fine, unless you plan to run tons of NVMe SSDs. (Rough per-lane numbers below.)
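For a sense of scale, PCIe 3.0 gives roughly 1 GB/s of usable bandwidth per lane after encoding overhead. A quick back-of-the-envelope sketch (the 3.5 GB/s NVMe figure is a typical Gen3 drive spec, used here as an assumption):

```python
# Usable PCIe 3.0 bandwidth is ~985 MB/s per lane (8 GT/s with 128b/130b encoding).
PCIE3_GBPS_PER_LANE = 0.985

for lanes in (16, 8, 4, 1):
    print(f"x{lanes:<2}: ~{lanes * PCIE3_GBPS_PER_LANE:.1f} GB/s")

# A fast Gen3 NVMe drive tops out around 3.5 GB/s sequential, so one drive nearly
# saturates its own x4 link, but you'd need several drives hammering the bus at
# once to stress the platform's total lane budget.
```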
  20. It's something called "half duplex". If the devices are all trying to transmit at the same time, then yes, there will be "collisions" and slowdown. (A toy illustration is sketched below.)
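To make the half-duplex point concrete, here is a toy sketch (not a real network stack) of several devices sharing one medium: only one can transmit per time slot, and any slot where two or more try counts as a collision that forces a retry.

```python
import random

def simulate(devices=4, frames_each=5, transmit_prob=0.5, seed=1):
    """Toy model of a shared half-duplex medium such as an old hub."""
    random.seed(seed)
    pending = [frames_each] * devices     # frames each device still has to send
    slots = collisions = 0
    while any(pending):
        slots += 1
        talkers = [d for d in range(devices)
                   if pending[d] and random.random() < transmit_prob]
        if len(talkers) == 1:
            pending[talkers[0]] -= 1      # exactly one sender: frame gets through
        elif len(talkers) > 1:
            collisions += 1               # collision: everyone backs off and retries
    return slots, collisions

slots, collisions = simulate()
print(f"Delivered {4 * 5} frames in {slots} slots with {collisions} collisions")
```

On a switch (full duplex), each device gets its own link, so those collision slots disappear.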
  21. Ah, I see. Yes, I agree with you there. Let me rephrase: some tasks that were once only in the realm of servers, Threadripper can now perform nearly as well.
  22. This question is going to sound rather rudimentary, but did you switch off/unplug the PSU from the wall when you replaced the drive?
  23. I think you are misunderstanding what I am saying. For a larger-scale implementation of virtualization, EPYC is the only way to go. Say I had 20 machines to run, each varying in the resources it requires; the extra bandwidth of octa-channel memory and the additional PCIe lanes would translate to better overall performance for each machine.
  24. Well, that is sort of my point. Threadripper has gotten so powerful that it has replaced servers on the low end (of the server market) in some cases. The fact that it can use ECC and has so many lanes makes it excellent. HOWEVER, I semi-disagree depending on the virtualization needs. Virtualization is heavily memory dependent. Having 8 channels and 1-2 TB of memory is much better for larger-scale virtualization than quad or hexa-channel with a max of 512GB. (Rough bandwidth numbers are sketched below.)
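The channel argument can be put into rough numbers: peak DRAM bandwidth is about channels × transfer rate × 8 bytes per 64-bit channel. A sketch at an assumed DDR4-3200 speed (the memory speed and the 20-VM count are illustrative assumptions carried over from the posts above):

```python
# Peak theoretical DRAM bandwidth: channels * MT/s * 8 bytes per 64-bit channel.
def peak_bandwidth_gbs(channels, mts=3200):
    return channels * mts * 8 / 1000   # GB/s

platforms = {"quad-channel (Threadripper)": 4, "octa-channel (EPYC)": 8}
vms = 20                               # the 20-machine example from the post above

for name, channels in platforms.items():
    total = peak_bandwidth_gbs(channels)
    print(f"{name}: ~{total:.0f} GB/s peak, ~{total / vms:.1f} GB/s per VM across {vms} VMs")
```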
  25. That would be my use case for EPYC. Virtualization. That's a use case where I feel EPYC is viable aside from a cloud server or datacenter.