Considering they vastly outnumber GPUs and other accelerators in datacenters across the world, yes, they're typical.
And mind you, consumer software just doesn't make use of advanced CPU features. Hell, games still haven't really discovered vectorization, despite SSE being well over a decade old at this point. You can get up to ~10x CPU performance improvements from Sandy Bridge onward just by rewriting some industry-standard scalar code with AVX intrinsics. You wouldn't even need multithreading at that point: one core of a 2600K running AVX code can outperform the entire 6950X running scalar but multithreaded code. That should have the community up in arms against bad development studio practices, and yet...
If consumer software truly necessitated those core counts, Intel would provide them cheaply, but such software barely exists outside of professional apps like the Adobe and Sony suites.