
igormp

Member
  • Posts

    3,747

Reputation Activity

  1. Agree
    igormp reacted to starsmine in A future with only passively cooled ARM chips   
    Yes, there is research about those cycles. No, not a single one of those papers disputes the current climate crisis, and all that research is valid. HOWEVER

    The papers on global warming have been solid and consistent since the 70s. There have been no peer-reviewed papers that dispute the facts. No, this is not a dogma issue, this is not a p-hacking issue, this is not a reproducibility issue. There is no problem with the quality of the papers on global warming. We are not past the point of minimizing damage, because that point, even in a hundred years, is always NOW. The consensus has always been strong on this point.

    The best time to plant a tree was yesterday; the second best time is today. That is the mentality you need to have: yes, yesterday was a better time to address the issue, but that does not, and has never, meant you don't address it today, the second best time.
  2. Like
    igormp got a reaction from porina in A future with only passively cooled ARM chips   
    FWIW, that's a pretty outdated definition given how modern µarches look. An Mx chip from Apple is hella complex, and the front-end is just a minor part of it.
    x86 also decodes many of its instructions into µops that are pretty RISC-like.
  3. Agree
    igormp got a reaction from leadeater in A future with only passively cooled ARM chips   
    FWIW, that's a pretty outdated definition given how modern µarches look. An Mx chip from Apple is hella complex, and the front-end is just a minor part of it.
    x86 also decodes many of its instructions into µops that are pretty RISC-like.
  4. Agree
    igormp got a reaction from leadeater in A future with only passively cooled ARM chips   
    It is not. Just because your low-power phones (which are MEANT to be low power) use less power, and because Apple made a pretty good chip (good luck finding another ARM chip that is as efficient), that doesn't mean that any ARM chip is that good.
     
    Since you mentioned servers, the best ARM CPUs at the moment (Ampere Altra) use 250~350W each. Their efficiency wasn't much better than the Epycs released at the same time, and now they get heavily beaten by both Intel's and AMD's current offerings in both power consumption and performance.
     
    Apart from that, others have already explained that we did improve on efficiency, but we demand even more performance and push that to an extreme nowadays.
  5. Like
    igormp got a reaction from Agall in Visual Studio CPU Hardware recommendations   
    Afaik, for compile jobs, those are pretty similar, so it ends up being more of a pricing matter.
    Yeah, there isn't much to discuss about that lol
    They don't. You could even just go with DDR4 and save some bucks/reuse the previous sticks.
    Take a look at the first 3 benchmarks here:
    https://www.phoronix.com/review/linux-ddr5-6000/3
  6. Like
    igormp got a reaction from Agall in Visual Studio CPU Hardware recommendations   
    If that's the case then it's all good. I asked because even 32gb might not be enough for some projects, while others can do fine with even 8~16gb.
    Yeah, GPU is going to be pretty much irrelevant.
    I'd say to skip on a dGPU and use that money for a better CPU, such as the 7950x that was previously mentioned. That's going to be a way better investment.
    Depends on how the project is set up, but it should spawn as many jobs as possible when compiling (talking more specifically about C++, I'm not really into C# stuff).
    Multi-threaded performance often matters more than single-threaded perf for such workloads; rough sketch of the idea below.
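    As a rough illustration (a sketch, not how Visual Studio/MSBuild actually works), a build driver just spawns one compiler process per core; the src/ layout and the g++ call are assumptions:

    # Spawn one compile job per logical core, like make -j or msbuild /m would.
    import os
    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    src_dir = Path("src")  # hypothetical project layout
    sources = list(src_dir.glob("*.cpp")) if src_dir.is_dir() else []

    def compile_one(src: Path) -> int:
        # Each worker just waits on its own compiler process.
        return subprocess.call(["g++", "-c", str(src), "-o", str(src.with_suffix(".o"))])

    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        failures = sum(rc != 0 for rc in pool.map(compile_one, sources))
    print(f"{len(sources)} files, {failures} failed")

    More cores means more of those jobs running at once, which is why the multi-threaded score is what matters here.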
  7. Like
    igormp got a reaction from Agall in Visual Studio CPU Hardware recommendations   
    How much RAM do those systems have? I'd guess that they're just running out of RAM rather than CPU; keeping a close eye on how they use their systems would make it easier to help you with that.
    Not really, no need for a dedicated GPU, just an integrated one will do the job, unless they work with something GPU-related.
    That's if you want to work with CUDA, not that VS will make use of CUDA.
    Funny how you haven't mentioned RAM at any point 😛 
    I'd say no. There's no benefit to CUDA or RTX; it's just a text editor, not an AAA game, video editor, or 3D software lol
     
  8. Agree
    igormp got a reaction from RONOTHAN## in Visual Studio CPU Hardware recommendations   
    How much RAM do those systems have? I'd guess that they're just running out of RAM rather than CPU; keeping a close eye on how they use their systems would make it easier to help you with that.
    Not really, no need for a dedicated GPU, just an integrated one will do the job, unless they work with something GPU-related.
    That's if you want to work with CUDA, not that VS will make use of CUDA.
    Funny how you haven't mentioned RAM at any point 😛 
    I'd say no. There's no benefit to CUDA or RTX; it's just a text editor, not an AAA game, video editor, or 3D software lol
     
  9. Agree
    igormp reacted to starsmine in A future with only passively cooled ARM chips   
    You are confusing energy usage with efficiency.

    Efficiency is the watts it takes to solve a problem.
    ARM is NOT more efficient than x86 overall; it only is at sub-5W, but even saying that muddies the waters.

    If you have a server pulling 1500W to do a petaflop of math,
    a 1.5W microcomputer has to do more than a teraflop of math to be considered more efficient. If it's only doing sub-1 teraflop, the 1500W server is MORE efficient.

    So yes, you see Nvidia keep upping the wattage every generation with their server parts, but it's doing MORE math. A 1.5x increase in power for a 2.25x increase in math performance is worth it. And cooling has gotten more efficient, and better at using water, with new data center practices.
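    As a rough back-of-the-envelope sketch, here's that perf-per-watt math in Python; the numbers are the hypothetical ones above, not measurements:

    # Efficiency is work done per watt, not absolute power draw.
    server_watts, server_flops = 1500, 1e15    # hypothetical 1500W server doing 1 petaFLOP/s
    micro_watts, micro_flops = 1.5, 0.8e12     # hypothetical 1.5W board doing 0.8 teraFLOP/s
    print(f"server: {server_flops / server_watts:.2e} FLOP/s per watt")  # ~6.7e11
    print(f"micro:  {micro_flops / micro_watts:.2e} FLOP/s per watt")    # ~5.3e11, so the big server wins here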

     
      
    All RISC chips do µops now as well.
    CISC and RISC chips (other than like one oddball ARM core that uses a microwatt and is placed inside HDMI repeaters) are all out-of-order execution and superscalar.

    CISC just has, well... more instructions, aka the more complicated front end, and that is the main reason it struggles to compete at sub-1 watt: you can't simplify that away. x86 has always done µops, even back with the 8086.
    ADD AL, [memory location]
    is two µops, and was an ASM instruction on the 8086.

    It's just that ARM makes me go

    LDR R0, [memory location]
    ADD R1, R1, R0

    But with ARM I can also do

    LDMIA R4!, {R0-R3}
    and that's like 8 µops if I had to guess.

    People put too much stock in the RISC/CISC debate.

    Every new version of ARM adds more instructions, "complicating" the front end more; it's just far more selective than x86. But that also means it can't accelerate specific tasks as well: there are media decode instructions for x86 that make it ASIC-like for those kinds of tasks, aka more efficient than ARM.

     
      
    I would argue you have that backwards: x86 is a master at many things, just not at being generalized.
    The MORE specific the task, the better x86 is than ARM, until you get to the point where you want an ASIC.

    ARM generally only has generalized instructions that you have to chain together to do your specific task, which is less efficient.
  10. Informative
    igormp got a reaction from Lurking in A future with only passively cooled ARM chips   
    It is not. Just because your low-power phones (which are MEANT to be low power) use less power, and because Apple made a pretty good chip (good luck finding another ARM chip that is as efficient), that doesn't mean that any ARM chip is that good.
     
    Since you mentioned servers, the best ARM CPUs at the moment (Ampere Altra) use 250~350W each. Their efficiency wasn't much better than the Epycs released at the same time, and now they get heavily beaten by both Intel's and AMD's current offerings in both power consumption and performance.
     
    Apart from that, others have already explained that we did improve on efficiency, but we demand even more performance and push that to an extreme nowadays.
  11. Agree
    igormp got a reaction from Needfuldoer in A future with only passively cooled ARM chips   
    It is not. Just because your low-power phones (which are MEANT to be low power) use less power, and because Apple made a pretty good chip (good luck finding another ARM chip that is as efficient), that doesn't mean that any ARM chip is that good.
     
    Since you mentioned servers, the best ARM CPUs at the moment (Ampere Altra) use 250~350W each. Their efficiency wasn't much better than the Epycs released at the same time, and now they get heavily beaten by both Intel's and AMD's current offerings in both power consumption and performance.
     
    Apart from that, others have already explained that we did improve on efficiency, but we demand even more performance and push that to an extreme nowadays.
  12. Agree
    igormp reacted to Agall in A future with only passively cooled ARM chips   
    Nuclear power
     
    Moore's law is/isn't dead, and GPUs have gotten dramatically more efficient. In the race for more performance though, there's a limitation in physics that we've hit.
     
    The RTX 4090 is an extreme example and can go above 600W, not 500W. It's a 450W TDP card with a factory vBIOS that allows 133%; the card has 675W available to it, which it can reach in my experience.
     
    An argument against your narrative is the Steam Deck, which can comfortably play AAA games at decent settings at a total 15W TDP.
     
    If you're expecting RTX 4090 performance for <400W, then you'll have to wait for a dramatic improvement in computer engineering, since it's simply a limitation of physics at the moment. Sure, we might incrementally get closer, something that DLSS does assist with. In a game like Warframe, I get lower power draw when enabling DLSS with little to no change in visual quality, demonstrating that rendering at 1440p with AI upscaling to 4K is more efficient than raw rasterization at 4K.
     
    Daily drive a Steam Deck or ROG Ally; it's a very acceptable experience and is environmentally friendly with regard to power draw. I did it for over 3 weeks between house closing dates living in a hotel, having only my phone and Steam Deck OLED.
  13. Like
    igormp got a reaction from Kagaratsch in Looking to build the best $6k PC money can buy   
    Nah, just update the BIOS and 192GB should be good to go. PCPartPicker is just outdated.
  14. Informative
    igormp reacted to brob in Looking to build the best $6k PC money can buy   
    PCPartPicker uses manufacturer specs. AMD still lists the CPU with a max memory of 128GB.
     
  15. Like
    igormp got a reaction from Kagaratsch in Looking to build the best $6k PC money can buy   
    It's better to go with 5200~5600MHz sticks in case OP ever wants to upgrade with another couple of sticks; getting 4 high-density DIMMs to run at 6000MHz is really hard.
    Not really much benefit for most of what I do.
    I did want to get an NVLink bridge for some large LLM fine-tuning (which would require me to do model-parallel, hence making the data transfer the bottleneck), but the 3-slot ones are really expensive and not worth the cost in my case.
     
    Anyhow, if it's a possibility, it's better to think about it now because mobo selection is really hard. If you go for a non-compatible mobo and try to get a 2nd GPU later, you'll likely need to swap mobos.
     
    In case you mind that, here's a build with 192GB that should work out of the box, has AVX-512, and allows for a 2nd GPU later on:
    PCPartPicker Part List
    CPU: AMD Ryzen 9 7950X3D 4.2 GHz 16-Core Processor  ($617.00 @ B&H) 
    CPU Cooler: ARCTIC Liquid Freezer III 56.3 CFM Liquid CPU Cooler  ($90.08 @ Amazon) 
    Motherboard: Asus ProArt X670E-CREATOR WIFI ATX AM5 Motherboard  ($439.99 @ Amazon) 
    Memory: G.Skill Flare X5 96 GB (2 x 48 GB) DDR5-5600 CL40 Memory  ($284.99 @ Amazon) 
    Memory: G.Skill Flare X5 96 GB (2 x 48 GB) DDR5-5600 CL40 Memory  ($284.99 @ Amazon) 
    Storage: Western Digital Black SN850X 1 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive  ($84.99 @ Amazon) 
    Storage: Crucial T700 W/Heatsink 4 TB M.2-2280 PCIe 5.0 X4 NVME Solid State Drive  ($446.99 @ B&H) 
    Video Card: Gigabyte WINDFORCE V2 GeForce RTX 4090 24 GB Video Card  ($2049.98 @ Amazon) 
    Case: Fractal Design North XL ATX Full Tower Case  ($179.99 @ B&H) 
    Power Supply: ADATA XPG Core Reactor II 1000 W 80+ Gold Certified Fully Modular ATX Power Supply  ($129.99 @ Amazon) 
    Total: $4608.99
    Prices include shipping, taxes, and discounts when available
    Generated by PCPartPicker 2024-04-11 17:25 EDT-0400
  16. Like
    igormp got a reaction from Kagaratsch in Looking to build the best $6k PC money can buy   
    Wait, are you planning on doing your ML stuff on Windows? Weird, but ok. Nonetheless, CPUs don't really require drivers, nor do they have anything to do with the GPU.
     
    That's irrelevant; you can still do data-parallel or model-parallel training through PCIe without issues. That's how I use my 2x 3090s without NVLink (rough sketch of the data-parallel case below).
    Double the VRAM or double the compute, depending on how you make use of them.
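    A minimal sketch of that data-parallel case in PyTorch, assuming two local GPUs connected only through PCIe; the model and batch shapes are made-up placeholders:

    import torch
    import torch.nn as nn

    # Toy model, replicated on both GPUs; each batch is split in half,
    # one half per card, and gradients are reduced back over PCIe.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
    model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(256, 1024).cuda()        # placeholder batch
        y = torch.randint(0, 10, (256,)).cuda()  # placeholder labels
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    For real workloads DistributedDataParallel is the usual choice, but either way no NVLink is required; the gradient reduction just goes over PCIe.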
  17. Like
    igormp got a reaction from Kagaratsch in Looking to build the best $6k PC money can buy   
    How relevant/intensive is it for you? Because at that price point you could go for a 7950X (non-X3D model) instead for AVX-512 (which is going to outpace Intel in most data stuff), go for the X3D model if you want a meh balance between games and productivity, or just go for the Intel model if games are the top priority.
     
    You could also easily go with two GPUs with that budget.
  18. Agree
    igormp got a reaction from Cyberspirit in (cursed setup rig showcase) Can We Put A Cursed Category For Setups?   
    I honestly don't see what's cursed about that. Is it just the old kb+m?
     
    I guess this qualifies for a regular post on the "show your build" thread.
  19. Agree
    igormp got a reaction from Alex Atkin UK in How do I use Linux Mint Safely?   
    No. You'll hardly ever find something "infected" for Linux.
    If you use the system's default repos, yes, it's pretty safe.
    No, always prefer to install stuff from the system's repos. In case that's not available, then you do as you would on Windows: check whether it seems trustworthy based on the authors, where you're downloading it from, people's comments, etc.
    Not really, defaults should be good enough.
  20. Agree
    igormp got a reaction from IR76 in How do I use Linux Mint Safely?   
    No. You'll hardly ever find something "infected" for Linux.
    If you use the system's default repos, yes, it's pretty safe.
    No, always prefer to install stuff from the system's repos. In case that's not available, then you do as you would on Windows: check whether it seems trustworthy based on the authors, where you're downloading it from, people's comments, etc.
    Not really, defaults should be good enough.
  21. Agree
    igormp got a reaction from da na in (cursed setup rig showcase) Can We Put A Cursed Category For Setups?   
    I honestly don't see what's cursed about that. Is it just the old kb+m?
     
    I guess this qualifies for a regular post on the "show your build" thread.
  22. Agree
    igormp got a reaction from Linuswasright in Dual 4090 build for deep learning 2024 April   
    No, that's irrelevant.
    Not with any consumer platform (be it Intel or AMD).
    Yeah, there are some x8/x8 capable AM5 and LGA1700 motherboards, but they're pretty high-end and not cheap at all. With AM4 it was easier to find such mobos.
    Not really, it goes case by case; 2TB might be more than enough for them.
  23. Agree
    igormp got a reaction from RONOTHAN## in Dual 4090 build for deep learning 2024 April   
    I'd say a 7950X or a 7900 is a better pick due to AVX-512.
    It really depends on which kind of models you are planning to train. If you're distributing your model across the GPUs, then the PCIe lanes become a massive bottleneck.
    Only 64GB of RAM for 48GB of VRAM? Seems quite low, but I guess you know your workloads better.
     
    Can you give more details on what exactly you want to train? Architecture, dataset sizes, model size, etc.
  24. Agree
    igormp got a reaction from RONOTHAN## in Dual 4090 build for deep learning 2024 April   
    No, that's irrelevant.
    Not with any consumer platform (be it Intel or AMD).
    Yeah, there are some x8/x8 capable AM5 and LGA1700 motherboards, but they're pretty high-end and not cheap at all. With AM4 it was easier to find such mobos.
    Not really, it goes case by case; 2TB might be more than enough for them.
  25. Agree
    igormp got a reaction from Linuswasright in Dual 4090 build for deep learning 2024 April   
    I'd say a 7950X or a 7900 is a better pick due to AVX-512.
    It really depends on which kind of models you are planning to train. If you're distributing your model across the GPUs, then the PCIe lanes become a massive bottleneck (rough sketch of that case below).
    Only 64GB of RAM for 48GB of VRAM? Seems quite low, but I guess you know your workloads better.
     
    Can you give more details on what exactly you want to train? Architecture, dataset sizes, model size, etc.
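    For reference, a rough sketch of the model-parallel case in PyTorch where PCIe becomes the bottleneck; two GPUs and a made-up two-stage model are assumed:

    import torch
    import torch.nn as nn

    # Naive model parallelism: first half of the model on cuda:0,
    # second half on cuda:1, activations copied over PCIe every step.
    stage0 = nn.Sequential(nn.Linear(4096, 8192), nn.ReLU()).to("cuda:0")
    stage1 = nn.Sequential(nn.Linear(8192, 4096), nn.ReLU(), nn.Linear(4096, 10)).to("cuda:1")

    opt = torch.optim.AdamW(list(stage0.parameters()) + list(stage1.parameters()), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(64, 4096, device="cuda:0")        # placeholder batch
        y = torch.randint(0, 10, (64,), device="cuda:1")  # placeholder labels
        opt.zero_grad()
        h = stage0(x)                  # runs on GPU 0
        h = h.to("cuda:1")             # activation transfer over PCIe (the bottleneck)
        loss = loss_fn(stage1(h), y)   # runs on GPU 1
        loss.backward()                # gradients cross PCIe on the way back too
        opt.step()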