
Cores and threads for games.

These questions may sound stupid, but please excuse my ignorance. Thanks.

Standard gaming system questions

System use: gaming (triple-A and e-sports titles), streaming, web browsing (YouTube/Floatplane, Twitter, Zoom, Facebook, 3D model research), Netflix, Crunchyroll, Toggle TV.

 

Questions 

1. When my classmates in computer class ask "how many cores/threads does my computer at home utilize while I game?", is there a difference between it being cores or threads?

 

2. I understand *Hz* affects the speed of the processor, but does the number of cores also affect it?

*Note: I understand that the number of cores affects multi-tasking.*

 

3. APUs and Intel integrated graphics: for basic use they are fine, but in something like a DeskMini A300, is it possible to use them for CAD design, or is a full system still advisable?

 

Thanks for answering 

Silent Cerberus - Fractal Design Core 500, Ryzen 7 3700X, Scythe Fuma 2, Gigabyte Aorus B550I Pro AX, Crucial Ballistix Elite 3200 MHz CL16 16 GB, ADATA XPG SX8200 512 GB NVMe M.2 SSD, Lexar NS100 1 TB SSD, Red Dragon RX 5700 8 GB (BIOS flashed to 5700 XT), Corsair SF600 SFX 600 W 80+ Platinum fully modular.

 

Hades - dO.Ob Look into my eyes and see the dark eternal abyss that is your soul and pay for your hidden sins. 

Cerberus - the hell hound that guards the gates of hell. Once you enter you'll never escape. 


It heavily depends on what you're playing. If all you play is Brood War, then even a single-core Athlon 64 is plenty. If you want to play RDR2, then 6 cores/threads is the minimum and 8 is recommended.


Technically, software uses threads to process stuff in parallel.

 

A CPU has cores. Each core can process one thread at a time, independently of the others.

 

If software uses more threads than your CPU has cores, parallel execution is "simulated" by time-slicing: CPU cores quickly switch between different threads, making it appear as if everything is running in parallel even though it is technically serial.
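
To make the time-slicing idea concrete, here's a minimal Python sketch (standard library only). It runs the same batch of CPU-bound tasks with a worker pool sized to the core count, and then with four times as many workers; the wall time barely changes, because workers beyond the core count are just time-sliced:

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # CPU-bound task: just keeps a core busy for a while
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    tasks = [3_000_000] * (cores * 4)  # four times as many tasks as cores
    for workers in (cores, cores * 4):
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(busy_work, tasks))
        print(f"{workers} workers for {len(tasks)} tasks: "
              f"{time.perf_counter() - start:.2f}s")
    # Both runs take roughly the same time: once every core is busy,
    # extra workers are time-sliced rather than truly parallel.
```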

 

Some CPUs also support simultaneous multithreading (SMT), also known as Hyper-Threading (Intel's name for it).

 

This is a technique that allows a single core to switch between threads more quickly and/or process some work in parallel, which can boost performance (usually by up to 30%). This is also why a core is often described as having "multiple threads" (typically 2).
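
If you want to see this on your own machine, here's a small sketch using the third-party psutil package (pip install psutil), which can report physical cores and logical (SMT) threads separately:

```python
# Requires the third-party psutil package: pip install psutil
import psutil

physical = psutil.cpu_count(logical=False)  # physical cores
logical = psutil.cpu_count(logical=True)    # hardware threads the OS schedules on

print(f"Physical cores:  {physical}")
print(f"Logical threads: {logical}")
if physical and logical and logical > physical:
    print(f"SMT/Hyper-Threading is active: {logical // physical} threads per core")
else:
    print("No SMT detected (or it is disabled)")
```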

 

The speed of the processor (Hz) determines how quickly it can process instructions. You can't really compare Hz across different CPU architectures/generations, though: newer CPUs may be faster even at a lower clock speed. But within the same architecture, more Hz = faster.
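
A rough way to see why Hz alone isn't comparable: single-thread performance is roughly IPC (instructions per clock) × clock speed. The IPC values in this sketch are invented purely for illustration:

```python
# Toy model: single-thread performance ~ IPC x clock speed.
# The IPC values below are made up purely for illustration.
cpus = {
    "older CPU @ 4.0 GHz": (1.0, 4.0),  # (IPC, GHz)
    "newer CPU @ 3.5 GHz": (1.3, 3.5),
}

for name, (ipc, ghz) in cpus.items():
    print(f"{name}: relative performance {ipc * ghz:.2f}")

# The 3.5 GHz part wins (4.55 vs 4.00) despite the lower clock,
# because it does more work per cycle.
```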

 

If a piece of software is able to utilize many threads, more cores can be more important than faster cores. If it only has a single thread (or very few), faster cores will beat many (unused) cores.
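
Amdahl's law puts a number on this trade-off: the speedup from more cores is capped by the fraction of the work that can actually run in parallel. A quick sketch:

```python
def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: the best-case speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.5, 0.95):
    scaling = ", ".join(f"{n} cores: {speedup(p, n):.2f}x" for n in (2, 4, 8, 16))
    print(f"{p:.0%} parallel -> {scaling}")

# With 50% parallel work, even 16 cores top out around 1.88x;
# at 95% parallel the same 16 cores reach about 9.1x.
```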

 

A lot of (older) games don't really use multiple threads, so in general CPUs with faster cores are better for gaming than CPUs with many cores. This is slowly changing: more modern titles can make better use of multi-core CPUs.



26 minutes ago, StrawberryShortCakes said:

These questions may sound stupid, but please excuse my ignorance. Thanks.

Standard gaming system questions

System use: gaming (triple-A and e-sports titles), streaming, web browsing (YouTube/Floatplane, Twitter, Zoom, Facebook, 3D model research), Netflix, Crunchyroll, Toggle TV.

There are no stupid questions, only stupid answers.

 

Quote

Questions 

1. When my classmates in computer class ask "how many cores/threads does my computer at home utilize while I game?", is there a difference between it being cores or threads?

Cores = threads, unless the CPU supports Hyper-Threading/SMT (simultaneous multithreading), in which case the extra threads appear to the OS as logical cores (not physical cores). For the sake of simplicity, one hyper-thread is worth about 20% of an actual core, so a quad core presenting eight logical cores still only has the raw performance of four cores; under highly threaded loads it behaves more like having extra cores at a fraction of the performance. So there is not much of a net increase (like I said, about 20% per core) unless something is written to take advantage of it. In a server environment Hyper-Threading tends not to be worth having, since as the load goes up, the performance per thread goes down. In a desktop environment there are many situations where Hyper-Threading works fine, but they're usually not gaming situations.
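
Taking that ~20% rule of thumb at face value (the real gain varies heavily by workload), the back-of-envelope math looks like this in Python:

```python
# Back-of-envelope math using the ~20% per-core SMT gain quoted above;
# the real figure varies heavily by workload.
physical_cores = 4
smt_gain = 0.20  # assumed throughput boost per core under a threaded load

logical_cores = physical_cores * 2
effective_cores = physical_cores * (1 + smt_gain)

print(f"The OS sees {logical_cores} logical cores")
print(f"Rough throughput: ~{effective_cores:.1f} core-equivalents, not {logical_cores}")
```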

 

Quote

2. I understand *Hz* affects the speed of the processor, but does the number of cores also affect it?

*Note: I understand that the number of cores affects multi-tasking.*

MHz/GHz is just the clock speed of the processor, and on its own it has been a meaningless metric since the Pentium era. The cache on a CPU makes such a substantial difference that a 3.2 GHz Celeron with 256 KB of L2 and 2 MB of L3 is completely spanked by every chip above it (Core i9s have 20 MB of L3), even ones at half the clock speed.

 

[Benchmark screenshot: single-thread ratings and prices for the Celeron vs a high-end 10-core part]

Please note there is only one sample in the benchmark list, so its accuracy is questionable. At any rate, note that the single-thread rating differs by roughly 30% despite only a 300 MHz clock difference. Also note the price: 16× more expensive, so 8.7× more performance for 16× the price. Another thing to point out, in reference to the first question: the Celeron has only 2 cores (2 threads), while the highest-end part has 10 cores with 20 threads (20 logical cores). So even though it tells the OS it has 20 threads, it only delivers 8.7× the performance of the lowest-end part, which has just 2 cores. Do the math and that's like having 17.4 Celeron 3.4 GHz cores, not 20.
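
Re-running that arithmetic (the 8.7× and 16× figures are the ones quoted above):

```python
# The 8.7x performance and 16x price figures are the ones quoted above.
celeron_cores = 2
flagship_threads = 20
perf_ratio = 8.7
price_ratio = 16.0

print(f"Performance per dollar vs the Celeron: {perf_ratio / price_ratio:.2f}x")
print(f"{flagship_threads} logical cores perform like "
      f"{perf_ratio * celeron_cores:.1f} Celeron cores, not {flagship_threads}")
```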

 

 

Quote

3. APUs and Intel integrated graphics: for basic use they are fine, but in something like a DeskMini A300, is it possible to use them for CAD design, or is a full system still advisable?

 

Thanks for answering 

Absolutely inadvisable.

 

Your project requirements may vary, but AutoCAD 2018 and similar products care more about memory bandwidth on the GPU, which means a dedicated GPU (dGPU) that far outstrips integrated graphics is required. Even a 15" "gaming" laptop is not really powerful enough for AutoCAD.

 

And I have to say this over and over on this forum: CAD software will just plain melt integrated graphics. It doesn't matter if the iGPU is so worthless that the CAD software runs entirely on the CPU; the fact that it requires high memory bandwidth on the GPU is why a GeForce x60-tier card (e.g. GTX 1060, 1660, 2060) or Quadro x1000-tier card is needed just to get within spitting distance of the "recommended" spec. That spec is usually not available in anything but 17" laptops, and even then the cooling fans will be maxed out the entire time the software is running. When people use integrated graphics with CAD, the performance is so substandard that the business is literally losing money on degraded productivity. If your company is unwilling to get you a 17" CAD laptop or a desktop with a Quadro x3000 part in it, the business is probably walking away from money because it won't supply its engineers with productive hardware.

 

When IT staff then try to cheap out with docking stations that don't actually use DisplayPort alt mode, staff lose even more productivity, because the CAD software will not use the dGPU through the DisplayLink (not DisplayPort) software GPU presented by the cheaper docking station.

 

It might be different for something like SolidWorks (which is more about engineering individual 3D objects), but when you're working on massive projects like buildings, stores, towers, and manufacturing plants, being cheap about your engineers' hardware is just going to lose you bids when you can't deliver on time.

 


24 minutes ago, Eigenvektor said:

17 minutes ago, Kisai said:

Thanks guys, you've been a great help. Sadly, I guess I'll have to stick with 6 cores for a friend's budget system, since he uses AutoCAD. I thought I could get him by with a DeskMini for his work and let him keep his Haswell system for gaming. Thanks.

Time to do me some planning. AMD here I come. 



1 hour ago, StrawberryShortCakes said:

 


Basically, if they are doing AutoCAD, no integrated GPU will ever be performant enough. You may get away with it until your project hits a certain level of complexity, and then it just gets exponentially harder to use. AutoCAD's recommended requirements match a (mobile) Quadro 3000 part.

 

https://knowledge.autodesk.com/support/autocad/troubleshooting/caas/sfdcarticles/sfdcarticles/System-requirements-for-AutoCAD-2020-including-Specialized-Toolsets.html #AutoCAD

Quote
Display Card (Basic): 1 GB GPU with 29 GB/s bandwidth, DirectX 11 compliant
Display Card (Recommended): 4 GB GPU with 106 GB/s bandwidth, DirectX 11 compliant

29 GB/s is higher than most integrated GPUs manage; they get between 12 and 20 GB/s, and even the best laptop iGPU I've benchmarked only hit about 24 GB/s. The shared video memory tanks performance further anyway.

 

The GTX 1050 Ti is roughly the first part that actually hits the recommended spec (at 112 GB/s), but you need the 4 GB model. All the 16xx and 20xx parts meet or blow away the recommended requirements.
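
For reference, here's where those bandwidth numbers come from: theoretical peak bandwidth is the effective transfer rate times the bus width. A quick Python sketch comparing dual-channel DDR4-2400 (what a typical iGPU shares with the CPU) against the GTX 1050 Ti's GDDR5:

```python
def peak_bandwidth_gbs(transfer_mts: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s: transfers/s x bytes per transfer."""
    return transfer_mts * (bus_width_bits / 8) / 1000

# Dual-channel DDR4-2400: two 64-bit channels = 128-bit effective bus
igpu = peak_bandwidth_gbs(2400, 128)
# GTX 1050 Ti: GDDR5 at 7 GT/s on a 128-bit bus
gtx_1050_ti = peak_bandwidth_gbs(7000, 128)

print(f"Dual-channel DDR4-2400: {igpu:.1f} GB/s theoretical peak")
print(f"GTX 1050 Ti:            {gtx_1050_ti:.1f} GB/s")

# Real-world iGPU results land well below the 38.4 GB/s theoretical peak
# (the 12-24 GB/s figures above), since the CPU contends for the same RAM.
```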

 

