Grabhanem

Member
  • Content Count

    832
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

About Grabhanem

  • Title
    Member

  1. Typically that was mini-PCIe; it's physically different but electrically the same as PCIe x1 + USB, I think. Modern laptops tend to use M.2 A or E key instead, which offers essentially the same connectivity.
  2. Even if you are doing those things, as long as you only need 4.0 on a graphics card and one SSD there's no real reason to go X570, since many B550 boards overclock just as well.
  3. X570 and B550 have the same CPU support. (edit: not exactly the same -- X570 supports older CPUs while B550 doesn't -- but the relevant upgrade paths are the same.) X570 released first, so for a while it was the only high-end AM4 platform. It's also the only option for PCIe 4.0 through the chipset, but since there aren't many devices that take advantage of that, I wouldn't consider it a major reason to go for it.
  4. X570 doesn't really offer any future-proofing that B550 doesn't -- dual GPU is dead for the foreseeable future, and both support up to Ryzen 5000 and no further, as next gen is likely going to be AM5.
  5. Solidworks isn't actually that GPU heavy; it's just hard on consumer cards because it makes very heavy use of Quadro-driver-specific features -- IIRC anti-aliased lines are one common one. For Solidworks in particular, the P2200 is well ahead of every Pascal/Turing consumer card except the Titan RTX, which has professional drivers of some sort as well. I'm not sure whether the 3090 would have some form of professional driver, being a Titan replacement, but my suspicion would be no, and the P2200 would definitely beat anything below that.
  6. Dumb question-- are you using the motherboard display outputs instead of the graphics card's?
  7. I'd suspect the AI overclocking is causing problems. The 8700K is still a very strong CPU, but that 5.6 GHz number is almost certainly an error of some kind-- the highest overclock to be achieved on HWBOT Cinebench R15 (a fairly light benchmark) on ambient cooling was only 5.5 GHz. I'd try turning that off, since that kind of thing tends to cause more problems than it solves by pumping the CPU with voltage.
  8. You realize Quadros are made by Nvidia, right? >.>
  9. If you're only using Solidworks, I wouldn't recommend either. Solidworks is very reliant on Quadro drivers, so a Quadro P2200 or similar will do better than any gaming card.
  10. If they're 3-pin fans, you can also power them off one header by just stacking the Molex connectors and plugging in one of the 3-pin cables-- it'll effectively act as a splitter.
  11. If I remember correctly, Nvidia supports FreeSync, but only over DisplayPort.
  12. Probably, although I'd be very surprised by a 128-core part, since that would require somehow fitting 16 eight-core CCDs onto a single package.
  13. I'd guess the main problem would be memory bandwidth. GPUs require an absurd amount of memory bandwidth in order to function efficiently, and running memory at that speed just isn't feasible through a socket because of the electrical limitations of not having a physically soldered connection. Plus, the physical sizes of GPUs vary quite a bit within the same product stack, so you'd have a choice of either limiting upgradeability to a few GPU models (kinda defeating the purpose) or paying for a bunch of socket area you won't use. Both Intel and AMD did experiment with card-based CPUs in the late '90s (Slot 1 and Slot A), but both moved back to sockets.
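The bandwidth gap in that last point is easy to put numbers on. A quick sketch with illustrative (assumed) figures -- a midrange GDDR6 card versus dual-channel DDR4-3200 through a socket; the specific parts and speeds are my examples, not from the post:

```python
def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: bus width in bytes x effective transfer rate."""
    return bus_width_bits / 8 * data_rate_gtps

# Typical midrange GDDR6 card: 256-bit bus at 14 GT/s (soldered memory)
gpu = bandwidth_gbs(256, 14.0)

# Dual-channel DDR4-3200 through a CPU socket: 128-bit bus at 3.2 GT/s
socketed = bandwidth_gbs(128, 3.2)

print(f"GDDR6 card: {gpu:.1f} GB/s")       # 448.0 GB/s
print(f"Socketed DDR4: {socketed:.1f} GB/s")  # 51.2 GB/s
```

Roughly an order of magnitude apart, which is why socketed GPUs would be starved without soldered memory on the card itself.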