About igormp

Profile Information

  • Location
  • Gender
  • Interests
    Embedded systems, computer architecture, machine learning
  • Occupation
    Data Engineer


  • CPU
    Ryzen 3700x
  • Motherboard
    ASRock B550 Steel Legend
  • RAM
    2x32GB Corsair LPX 3200MHz
  • GPU
    Gigabyte 2060 Super Windforce
  • Case
    Sun Ultra
  • Storage
    WD Black SN750 500GB + XPG S11 Pro 512GB + random 2.5" SSDs and HDDs
  • PSU
    Corsair CX650m
  • Display(s)
    LG UK6520PSA, 43" 4k60
  • Cooling
    Deepcool Gammax 400 V2
  • Operating System
    Arch Linux

Recent Profile Visitors

1,484 profile views
  1. Require? No, but you can compile some stuff to make use of it. As an example, you can easily add -mprefer-vector-width=512 on gcc to make use of it.
  2. Just an FYI, both the 3070 and 3080 are better than the K80/T4/P100 on Colab (even on Colab Pro), especially if you're using mixed precision.
  3. And how am I going to carry that awful thing to a client meeting, and how long will it last on battery for a full day of meetings? Yeah, not everything is about games.
  4. That 6+6 stuff is transparent to most software. Games don't see it that way; they see it as a full 24 threads (they don't even distinguish between a regular core and a thread). For all intents and purposes, the 5600x is a 12-thread CPU, the 5800x has 16 and the 5900x has 24. Programs are not aware of CCDs or CCXs.
  5. I wouldn't blame the low core count of the 7740x for the issue, tbh. Also, frame rate on that game isn't a good indicator of performance (it'll degrade to 20~40fps late into the game no matter your setup); the turn time is (as you posted later in your reply): (from: https://overclock3d.net/reviews/cpu_mainboard/amd_ryzen_9_5900x_and_ryzen_9_5950x_review/19 ) So yeah, core count doesn't really matter. For current games and whatnot, it won't matter much. Go for what you can afford without worries, even if it may seem overkill currently, it might
  6. A 5600x would be even better. Civ VI's AI is single-threaded. lol no, most of those games are written like shit, especially Civilization.
  7. A 2080 Ti would be perfect (a 3080 is lacking in VRAM for SOTA models, and the 3090 is too expensive), but you might not be able to find one. If you get lucky enough, a used V100/Titan V/Quadro GV100 would be perfect: performs better than the current consumer Ampere GPUs while also having more memory. A K80 would net you a nice amount of VRAM, but it's really lacking in performance. Going for Volta/Turing/Ampere is also better because you can use tensor cores, which can do mixed precision and basically make your training twice as fast while halving the memory needed.
  8. If you don't plan on overclocking, don't mind your CPU reaching somewhat high temperatures, and aren't a silence freak, the stock cooler should be more than enough. I have a Gammax 400 because my 3700x didn't come with a cooler. The cores are also barely touched when training models, and data mangling is usually stuck on a single core anyway. Do get more RAM; I have 64GB and it's barely enough for medium-ish datasets.
  9. Xeons would give you high RAM capacity, but not enough cores. EPYCs give you cores and RAM, but lack consumer features. Regular Threadripper would give you cores and consumer features, but lacks RAM capacity. Threadripper PRO would be a good option, if you can manage to get one of those. PCIe 5.0 isn't in mass production yet; you should see some platforms supporting it by the end of this year. There's nothing off-the-shelf for direct storage, but Linux already has all of the APIs in place for P2P DMA, so you could roll your own stuff.
  10. Doesn't matter. Most ML stuff relies on the GPU, and those 2 extra cores won't help much. It might be good if you're compiling tons of stuff; otherwise, getting a faster SSD and more RAM would benefit you more, especially if you're dealing with nodejs.
  11. You can't use it in tandem with your current CPU as "extra cores". It's meant to be used as an external accelerator, or by itself. It's no secret; you can do OpenCL or OpenMP on it quite easily, but it's not meant to make your browser faster. I get what you meant, in order to make it easier for him to understand, but Xeon Phis are x86 nonetheless. It's just like how some smartphone modems are x86, and yet you can't simply use those to make a regular CPU go faster.
  12. It's possible to do so on Hyper-V. You can even get GPU partitioning (GPU-P), as mentioned by someone above.
  13. Deep learning... on a hackintosh? If you use anything modern, you won't have any Nvidia support, and AMD is a no-go for such tasks.