- Posts: 3,672
About igormp
Contact Methods
- Twitter: igormp
Profile Information
- Gender: Male
- Location: Brazil
- Interests: Embedded systems, computer architecture, machine learning
- Occupation: Data Engineer
System
- CPU: Ryzen 5950X
- Motherboard: Asus B550 ProArt
- RAM: 4x32GB Corsair LPX 3200MHz
- GPU: Gigabyte RTX 3090 Vision + Gigabyte RTX 3090 Gaming
- Case: Sun Ultra
- Storage: Kingston KC3000 1TB + XPG S11 Pro 2TB + random 2.5" SSDs and HDDs
- PSU: XPG Core Reactor 850W
- Display(s): LG C2 42" OLED
- Cooling: Scythe Fuma 2
- Operating System: Arch Linux
- Laptop: LG Gram 14, MBP 14" M2 Pro
-
No worries, that new setup will be many times faster than your current one anyway! Enjoy your new system.
14 replies · Tagged with: pc builds, rx 7800 xt (and 1 more)
-
Do you plan on gaming at 120Hz+? If not, then I don't think the X3D model is worth it. You could save a bit of money by going with a 7700X, or improve your other workloads with a 7900X.
14 replies · Tagged with: pc builds, rx 7800 xt (and 1 more)
-
Multi GPU build for NLP/LLM development
igormp replied to carter_'s topic in New Builds and Planning
If you want to tinker more with getting stuff to work than actually getting stuff done, go with AMD. Otherwise, for an (almost) out-of-the-box experience, you'd be better off with Nvidia. What models are you planning to work with? I personally have an AM4 setup with 2x 3090s and it serves me more than fine for local training/inference, and I can always jump onto a proper A100 cluster for anything larger. -
It has Python for data stuff, which will likely use numpy, which does make use of AVX-512, something that Intel lacks. So AMD is way better in this regard. Fun fact: Intel was the one that added AVX-512 support to numpy, even though they no longer support it in their consumer products lol
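Not from the post, but a quick way to verify the AVX-512 claim on your own machine is to ask numpy what SIMD extensions it detected at runtime; this is a minimal sketch assuming numpy is installed (`show_runtime` only exists on numpy >= 1.24, so it falls back to the older build-config dump):

```python
import numpy as np

try:
    # numpy >= 1.24: prints the SIMD extensions found on the current CPU;
    # on a Zen 4 part you'd expect AVX512F and friends in the output
    np.show_runtime()
except AttributeError:
    # older numpy: the build-time config still lists enabled SIMD targets
    np.show_config()
```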
-
AMD is basically a no-go for ML. ROCm is still a pain, and performance is really lacking compared to Nvidia. How big are your datasets? Are you sure they're really that large? Millions of records as in 7 digits isn't that large, but 9 digits is a different scale. Asking because the rough idea is that for every 1GB of data you'd need 5~10x more RAM in pandas, so with your current setup you wouldn't be going past 5GB datasets. Since you already have 32GB of RAM in your laptop, can you double-check by loading and maybe sorting your pandas df to see whether you run out of RAM? If you do, then it'd be better to go for more RAM right off the bat, like 64 or 96GB. Otherwise your datasets are on the smaller side and 32GB ought to be enough. Intel lacks AVX-512, which can be heavily used in numpy/pandas for major speedups.
14 replies · Tagged with: pc builds, rx 7800 xt (and 1 more)
-
Multi GPU build for NLP/LLM development
igormp replied to carter_'s topic in New Builds and Planning
If you are going to do fine-tuning, then going for x8 instead of x4 is likely going to be better, especially if you're working with larger models that will require those 72GB of VRAM. Aren't used 3090s an option? Two of those could serve you well. Also, be aware that only a few AM5 motherboards allow you to do x8/x8 on their slots; with the one you picked, you'd need to do tons of hacks with risers on top of risers to split those lanes. I'd avoid the 7900 XTX for your use case, ROCm is still a pain for some stuff. -
How many scientists do you have? SLURM is nice, but also pretty annoying. As an example, if your devs are using VSCode Jupyter notebooks to do stuff on the GPU server, that wouldn't be possible with SLURM anymore, since there are no interactive sessions (AFAIK). If you're OK with having your devs submit a bash job and do some workarounds to start a Jupyter server and then connect VSCode to it, then it could work, and you wouldn't have the issue of multiple folks trying to use the same GPU anymore. You could also look into MIG instances for that A100 to properly share it.
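The "submit a bash job that starts a Jupyter server" workaround could look something like the sketch below. Everything here is a placeholder assumption (partition name, GPU count, port, filenames), not a description of any particular cluster:

```shell
# Write a SLURM batch script that launches a Jupyter server on a compute
# node; VSCode can then attach to it as a remote kernel.
cat > jupyter.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=jupyter
#SBATCH --partition=gpu        # placeholder: your GPU partition name
#SBATCH --gres=gpu:1
#SBATCH --time=08:00:00
#SBATCH --output=jupyter-%j.log

# the job log will contain the compute node's hostname and the token URL
jupyter lab --no-browser --ip=0.0.0.0 --port=8888
EOF

# On a real cluster you'd then submit it and tunnel to the node:
#   sbatch jupyter.sbatch
#   ssh -L 8888:<compute-node>:8888 <login-node>
# and point VSCode's Jupyter extension at http://localhost:8888
```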
-
Revisiting an eight month old machine learning dual 4090 build
igormp replied to fahraynk's topic in New Builds and Planning
You haven't answered my previous questions about your use case, so I can't properly answer this. -
Revisiting an eight month old machine learning dual 4090 build
igormp replied to fahraynk's topic in New Builds and Planning
I guess you're thinking about games. For training models that don't fit in a single GPU, the bandwidth between them becomes the bottleneck, so even with 3090s, going from x8 to x16 on both does make a difference (not huge, but noticeable). -
Revisiting an eight month old machine learning dual 4090 build
igormp replied to fahraynk's topic in New Builds and Planning
Newegg has tons of options: https://www.newegg.com/p/pl?N=100007952 601387037 600006165 There's a wide list of overclockable RDIMMs for TR now: https://www.tomshardware.com/pc-components/ddr5/ddr5-7800-rdimms-coming-to-ryzen-threadripper-7000 Check the mobo's QVL and try to pick whatever fits your budget/needs best. -
Where is this GPU innovation?
igormp replied to Gat Pelsinger's topic in Laptops and Pre-Built Systems
I don't think people who are looking for such laptops care about GPU performance. Personally, I'd never buy a laptop with a power-hungry GPU; I'd rather go with something that lets me play media and drive a 4K screen for browsing while consuming as little power as possible. Anyhow, going back to your original question, AMD is going to release new APUs with more memory channels in a custom design (so don't expect it in your regular ATX form factor), which should have a pretty hefty iGPU. -
Need some Linux NIC help, please (connection dropping randomly)
igormp replied to Sarra's topic in Networking
Maybe try to update the firmware. -
Is WSL as good as a true Linux install?
igormp replied to Gat Pelsinger's topic in Operating Systems
It's slow compared to a bare-metal install, but it's still way faster than building on Windows itself. I don't remember exactly why, but compilers on Windows have some issues doing stuff fast; I guess it's related to antivirus scanning or how Windows handles file access. If your workloads aren't that IO-bound and you still need Windows for other stuff, it's actually pretty doable for actual, proper work, not just learning. -
Is WSL as good as a true Linux install?
igormp replied to Gat Pelsinger's topic in Operating Systems
WSL2 is a full VM under Hyper-V with lots of quality-of-life stuff baked in. For your needs it will do perfectly, so go ahead and go crazy. Worst case, you can just nuke it and restart from scratch. As for your post title: it is not, there are some noticeable performance issues and other minor limitations, but for your specific case I'd say it's the best bet, and those differences from a native install shouldn't hinder you in any way (nor will you notice them). So yes, please do use WSL. As for your question about compilers on Windows: the WinAPI is a bit weird lol -
Need some Linux NIC help, please (connection dropping randomly)
igormp replied to Sarra's topic in Networking
Can you keep a terminal open running "sudo dmesg -wH" and stay at it for a while until the issue occurs? Logs would be great for that. It seems like this NIC has an Aquantia chip on it, so you can filter the logs with "sudo dmesg | grep atlantic" to see driver-specific messages. You could also try updating the firmware on your NIC.
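A hypothetical little helper wrapping the commands from the post ("atlantic" is the kernel module name for Aquantia AQC chips; run as root or via sudo, since unprivileged dmesg access may be restricted):

```shell
# Grab whatever the kernel log currently holds (empty if access is denied)
log="$(dmesg 2>/dev/null || true)"

# Keep only the atlantic driver's lines; '|| true' so an empty match
# doesn't abort a 'set -e' shell
printf '%s\n' "$log" | grep -i atlantic || true

# For live monitoring while waiting for the link drop, follow the log
# with human-readable timestamps (blocks until Ctrl-C):
#   sudo dmesg -wH
```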