Rumour: Intel Core 9000 series to get more threads

porina
12 minutes ago, Bit_Guardian said:

Ahem, BS. Games have been so far behind in optimisation regardless of core count it's not even funny. Thread pools have only recently (in the last 15 months or so) begun replacing explicit threading in games for the various physics, networking, and AI routines. Lock-free implementations of the component systems, to squeeze even more performance out of multi-frame rendering pipelines, are in their infancy. All of that was possible without more cores, and the performance benefits at 4 cores are palpable even then. Then there's all the vectorisation work replacing the old SSE code with AVX/AVX2, which provides double the performance, and sometimes more, for those code paths.

It's not like the lack of more than 4 cores being mainstream was a valid excuse, especially since Intel helped create one of the cleanest, simplest multithreading frameworks for C/C++ back in the late '90s (OpenMP) and has been improving it every year since to support heterogeneous offloading, atomics, hands-off lock mechanisms, etc. The development houses are the ones to blame here.
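(Aside for anyone who hasn't used it: a minimal OpenMP sketch in C of the kind of loop-level parallelism being described. The entity-update loop is purely illustrative, not code from any real engine.)

```c
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static float pos[N], vel[N];
    const float dt = 0.016f;        /* one 60 fps frame */

    /* One pragma spreads the loop across all available cores:
       no explicit thread creation, joining, or locking needed. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        pos[i] += vel[i] * dt;      /* e.g. a physics integration step */

    printf("updated %d entities on up to %d threads\n",
           N, omp_get_max_threads());
    return 0;
}
```

Built with gcc -fopenmp, this scales to however many cores are present, which is the point: the framework has been there all along.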

 

Nah, YT is lightweight unless you're watching it on Chrome.

Or maybe consoles have something to do with it....


Just now, MyName13 said:

Or maybe consoles have something to do with it....

Doubtful. The UE4 code from the time the PS4 launched until roughly last August was still using explicit threading. Unless some development houses got unreleased branches of the codebase, that's not what forced the issue either. 


49 minutes ago, Bit_Guardian said:

What I want is a 6/8C successor to the 5775C, with 72/96 graphics cores or an integrated Vega/Navi die on the new DDR5 platform. I'd be set pretty much for life.

I'd like to see Crystal Well return to desktop as well, but for compute uses rather than the GPU. The 128 MB of eDRAM acting as L4 cache in the 5675C/5775C helped out a lot given the low bandwidth of DDR3. I think it was only rated at 50 GB/s, so high-speed dual-channel DDR4 matches that now, but if they could double it again it would be very interesting.
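(For reference, the arithmetic behind that comparison: dual-channel DDR4 moves 2 channels × 8 bytes per transfer, so DDR4-3200 peaks at 2 × 8 B × 3200 MT/s ≈ 51.2 GB/s, right at the eDRAM's quoted 50 GB/s.)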

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 minute ago, porina said:

I'd like to see Crystal Well return to desktop as well, but for compute uses rather than the GPU. The 128 MB of eDRAM acting as L4 cache in the 5675C/5775C helped out a lot given the low bandwidth of DDR3. I think it was only rated at 50 GB/s, so high-speed dual-channel DDR4 matches that now, but if they could double it again it would be very interesting.

Eh, I'd be happy with a 2/4GB HBM2/3 cache.


1 minute ago, Bit_Guardian said:

Eh, I'd be happy with a 2/4GB HBM2/3 cache.

Depends on how it is connected. If dedicated to GPU use, it won't help in general CPU compute. My ideal case would be an L4 victim cache arrangement like Crystal Well. Also, isn't HBM supposed to be good on bandwidth but relatively poor on latency?



On 11/27/2017 at 8:05 AM, Essence_of_Darkness said:

I said "if the rumour is true", but Ryzen is still a huge jump for my workload, because I am still on a Phenom 1055T :P

Going from a 4GHz 1090T to a stock 3.7/4.1 R5 1600X is pretty much twice as fast overall.

SFF-ish: Ryzen 5 1600X, ASRock AB350M Pro4, 16 GB Corsair LPX 3200, Sapphire R9 Fury Nitro -75 mV, 512 GB Plextor NVMe M.2, 512 GB SanDisk SATA M.2, Cryorig H7, stuffed into an InWin 301 with RGB front panel mod. LG 27UD58.

 

Aging Workhorse: Phenom II X6 1090T Black (4 GHz #Yolo), 16 GB Corsair XMS 1333, RX 470 Red Devil 4 GB (sold for $330 to cryptominers), HD 6850 1 GB, hilariously overkill Asus Crosshair V, 240 GB SanDisk SSD Plus, 4 TB worth of mechanical drives, and a bunch of water/glycol. Coming soon: Bykski CPU block, whatever cheap Polaris 10 GPU I can get once miners start unloading them.

 

MintyFreshMedia: ThinkServer TS130 with i3-3220, 4 GB ECC RAM, 120 GB Toshiba/OCZ SSD booting Linux Mint XFCE, 2 TB Hitachi Ultrastar. In progress: 3D-printed drive mounts, four 2 TB Ultrastars in RAID 5.


If a "9700k" is truly 8c/16t, I think it will actually be time to upgrade from the 4790k. Gonna keep my ear to the ground on this one while still eyeing Zen2

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866 MHz    GPU EVGA GTX 1080 Ti FTW3    Case Corsair 380T

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


37 minutes ago, Phate.exe said:

Going from a 4GHz 1090T to a stock 3.7/4.1 R5 1600X is pretty much twice as fast overall.

You think that's bad? Imagine what gains I'd get going from an A6 3650 overclocked to 3.2 GHz (still not enough lmao) to an R3 1200.

Primary Laptop (Gearsy MK4): Ryzen 9 5900HX, Radeon RX 6800M, Radeon Vega 8 Mobile, 24 GB DDR4 2400 MHz, 512 GB SSD + 1 TB SSD, 15.6 in 300 Hz IPS display

2021 Asus ROG Strix G15 Advantage Edition

 

Secondary Laptop (Uni MK2): Ryzen 7 5800HS, Nvidia GTX 1650, Radeon Vega 8 Mobile, 16 GB DDR4 3200 MHz, 512 GB SSD

2021 Asus ROG Zephyrus G14

 

Meme Machine (Uni MK1): Shintel Core i5 7200U, Nvidia GT 940MX, 24 GB DDR4 2133 MHz, 256 GB SSD + 500 GB HDD, 15.6 in TN display

2016 Acer Aspire E5 575

 

Retired Laptop (Gearsy MK2): Ryzen 5 2500U, Radeon Vega 8 Mobile, 12 GB 2400 MHz DDR4, 256 GB NVMe SSD, 15.6" 1080p IPS touchscreen

2017 HP Envy X360 15z (Ryzen)

 

PC (Gearsy): A6 3650, HD 6530D, 8 GB 1600 MHz Kingston DDR3, Some Random Mobo Lol, EVGA 450W BT PSU, Stock Cooler, 128 GB Kingston SSD, 1 TB WD Blue 7200 RPM

HP P7 1234 (Yes It's Actually Called That) RIP

 

Also I'm happy to answer any Ryzen Mobile questions if anyone is interested!

On 11/27/2017 at 7:01 AM, nexus6 said:

I appreciate that you spelled "rumour" correctly.

It's color, flavor, and rumor. Thank you very much.

On 11/27/2017 at 7:01 AM, nexus6 said:

On the news - means nothing without pricing.

More importantly, the damn thing isn't even out yet.
Even sillier, they're barely getting their current shit out...

 

Funny how all it takes for Intel to actually make a change is for AMD to scare it with a fresh new architecture and more competitive pricing.

 

Can't say it's a bad thing that Intel is doing this; it's just rather annoying to think what the threads are going to be like and the kinds of responses they'll get.

 

But if this rumor is true, then whenever Intel does get the next CPUs out, it's not hard to imagine that i7s will be 8c/16t, i5s will be either 8c/8t or 6c/12t, and i3s will be 6c/6t.

 

Although if this is a product coming out sometime next year, which I'm guessing it will be, then damn, all you'd need is an i3 for most things. That's difficult to say about the i3s of today and of the past; they can do a lot, but they can also be a limiting factor.

a Moo Floof connoisseur and curator.

:x@handymanshandle x @pinksnowbirdie || Jake x Brendan :x
Youtube Audio Normalization

2 hours ago, Bit_Guardian said:

Ahem, BS. Games have been so far behind in optimisation regardless of core count it's not even funny. Thread pools have only recently (in the last 15 months or so) begun replacing explicit threading in games for the various physics, networking, and AI routines. Lock-free implementations of the component systems, to squeeze even more performance out of multi-frame rendering pipelines, are in their infancy. All of that was possible without more cores, and the performance benefits at 4 cores are palpable even then. Then there's all the vectorisation work replacing the old SSE code with AVX/AVX2, which provides double the performance, and sometimes more, for those code paths.

It's not like the lack of more than 4 cores being mainstream was a valid excuse, especially since Intel helped create one of the cleanest, simplest multithreading frameworks for C/C++ back in the late '90s (OpenMP) and has been improving it every year since to support heterogeneous offloading, atomics, hands-off lock mechanisms, etc. The development houses are the ones to blame here.

 

Nah, YT is lightweight unless you're watching it on Chrome.

Sure, they have lots to optimize, but isn't it easier for them to brute-force it with more cores? My guess is that optimizations won't be many, as devs are being forced to release games as soon as possible, so most of them will probably just pile more NPCs and features on top of the old code, like they have been doing, so you will need more cores.


2 hours ago, porina said:

Depends on how it is connected. If dedicated to GPU use, it won't help in general CPU compute. My ideal case would be an L4 victim cache arrangement like Crystal Well. Also, isn't HBM supposed to be good on bandwidth but relatively poor on latency?

It could still help in CPU "compute". Map-Reduce and database workloads are way more bandwidth-bound than transform-bound, and that sort of basic analysis is highly vectorisable, but it's not feasible to move huge datasets from disk and memory to GPUs or other secondary accelerators unless you want to build the database in their memory pools from the start.

 

43 minutes ago, cj09beira said:

Sure, they have lots to optimize, but isn't it easier for them to brute-force it with more cores? My guess is that optimizations won't be many, as devs are being forced to release games as soon as possible, so most of them will probably just pile more NPCs and features on top of the old code, like they have been doing, so you will need more cores.

Please see Amdahl's Law. The diminishing returns very quickly turn negative anyway. Most people only recognise one half of the equation. Here's a dumbed-down explanation of how real HPC scaling by threads/cores works, disguised as slinging mud at poor business management, by one of the most eloquent development conference speakers you are likely to ever see.
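(For the unfamiliar: Amdahl's Law says that if a fraction p of a program's runtime parallelises perfectly across n cores, the overall speedup is capped at 1 / ((1 - p) + p/n). A quick illustrative calculation in C:)

```c
#include <stdio.h>

/* Amdahl's Law: the ceiling on speedup when a fraction p of the
   runtime parallelises perfectly across n cores. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    /* Even a 90%-parallel workload flattens out fast and can
       never reach 10x, however many cores you throw at it. */
    for (int n = 2; n <= 64; n *= 2)
        printf("p = 0.9, %2d cores -> %.2fx\n", n, amdahl(0.9, n));
    return 0;
}
```

At p = 0.9 you get about 4.7x at 8 cores and only 8.8x at 64; the serial 10% is the half of the equation most people don't recognise.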

And if you make the cheap choice solely because it's cheap without also simultaneously making the expensive choice because it's the right strategic choice, you shouldn't be in charge of any effort, let alone software development. More cores is not the correct answer until you can prove to yourself it is. The problem is the software, not the equipment it's running on, when it comes to the CPU side of the calculus.

 


4 hours ago, Bit_Guardian said:

Yup. I expect Intel will make extensive use of EMIB for the next HEDT and server chip lineup if manufacturing cost is actually an issue.

 

What I want is a 6/8C successor to the 5775C, with 72/96 graphics cores or an integrated Vega/Navi die on the new DDR5 platform. I'd be set pretty much for life.

The word of the day is "Chiplet". That's where Intel is heading, as they can put more than enough cores on a package, but when you start talking 1000 mm² dies and only a few dozen dies per wafer, it gets really nutty for price and for time wasted. But if you instead build packages of 2-4 CPU dies + cache and then "glue" them together on a special interposer, things start to get really interesting.
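(The cost argument is easy to see with the standard back-of-envelope Poisson yield model, yield ≈ e^(-A·D), with A the die area and D the defect density. At an assumed D = 0.1 defects/cm², one 1000 mm², i.e. 10 cm², die yields e^(-1.0) ≈ 37%, while four 250 mm² chiplets each yield e^(-0.25) ≈ 78%, and a defective chiplet wastes only a quarter of the silicon. Illustrative numbers, not any foundry's actual figures.)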

 

This should be coming in 2020/2021 with "Sapphire Rapids", DDR5 and a completely new x86 uArch design. It'll be Intel's biggest CPU space change in 20+ years.

1 hour ago, cj09beira said:

Sure, they have lots to optimize, but isn't it easier for them to brute-force it with more cores? My guess is that optimizations won't be many, as devs are being forced to release games as soon as possible, so most of them will probably just pile more NPCs and features on top of the old code, like they have been doing, so you will need more cores.

It's the physics and back-end stuff that can easily be moved off of a primary worker thread, plus caching and pre-fetch activities. It can actually be far less optimized and still let the game run better. But a huge chunk of the market is still on dual cores, so they've never optimized that way.


1 hour ago, Bit_Guardian said:

It could still help in CPU "compute". Map-Reduce and database workloads are way more bandwidth-bound than transform-bound, and that sort of basic analysis is highly vectorisable, but it's not feasible to move huge datasets from disk and memory to GPUs or other secondary accelerators unless you want to build the database in their memory pools from the start.

The specific use cases I'm thinking about are comparable to Prime95 and Linpack, as they most closely resemble my interest areas. On consumer CPUs, the limitation is more on RAM bandwidth than core potential. Dual-core Intels are OK; quad cores require high-speed RAM to mitigate this. I haven't got a 6-core Coffee Lake yet, but I'd expect that to be heavily bottlenecked. This will only get worse with wider AVX-512 deployment, rising core counts and stagnant RAM bandwidth growth.
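(Rough arithmetic shows the mismatch: 8 cores × 3 GHz × one 64-byte AVX-512 load per cycle comes to about 1.5 TB/s of potential demand, against roughly 50 GB/s of dual-channel DDR4 supply, a 30:1 gap. That's a crude upper bound rather than a measured figure, but it shows why these workloads live or die on cache and memory speed.)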

 

Having said that, as we go to 8 cores and beyond, the total L3 cache starts becoming large enough to mitigate RAM bandwidth needs in itself, assuming they keep the ring structure and not the also bandwidth-starved mesh used on Skylake-X.

 

1 minute ago, Taf the Ghost said:

The word of the day is "Chiplet". That's where Intel is heading, as they can put more than enough cores on a package, but when you start talking 1000 mm² dies and only a few dozen dies per wafer, it gets really nutty for price and for time wasted. But if you instead build packages of 2-4 CPU dies + cache and then "glue" them together on a special interposer, things start to get really interesting.

I wonder if it would be technically worthwhile to take a leaf from flash and go vertical? You're adding more complexity in layers, but that trades off against smaller area. Otherwise, in essence, they do a Ryzen?

 

If some think things are bad enough now with software not making good use of multiple cores, it isn't going to get any better if NUMA-like architectures become standard. See Threadripper for an example of that today.



On 11/27/2017 at 7:40 AM, The Benjamins said:

AM4 should support Zen 2 and possibly even Zen 3 if their statement of 4-year support for AM4 is true, and AMD has had a habit of supporting sockets longer than Intel.

While Zen 2 is supposed to be AM4, I think the real question is whether a new chipset will be necessary, and if so, whether they will do what Intel did, where past CPUs will not run on it. If DDR5 support comes to AM4, then we will know for sure, but maybe Zen 2/3 (or whatever design brings DDR5) would have a dual DDR4/DDR5 controller and support current chipsets at least. (I don't have enough knowledge of processor design and chipsets, so maybe even a different memory controller would be enough to make Zen 2/3 incompatible with X370.)


4 minutes ago, Taf the Ghost said:

The word of the day is "Chiplet". That's where Intel is heading, as they can put more than enough cores on a package, but when you start talking 1000 mm² dies and only a few dozen dies per wafer, it gets really nutty for price and for time wasted. But if you instead build packages of 2-4 CPU dies + cache and then "glue" them together on a special interposer, things start to get really interesting.

 

This should be coming in 2020/2021 with "Sapphire Rapids", DDR5 and a completely new x86 uArch design. It'll be Intel's biggest CPU space change in 20+ years.

It's the physics and back-end stuff that can easily be moved off of a primary worker thread, plus caching and pre-fetch activities. It can actually be far less optimized and still let the game run better. But a huge chunk of the market is still on dual cores, so they've never optimized that way.

Intel is working towards what IBM did with POWER8 and their Centaur chips. They are even starting to implement the same scalability and are trying to keep clocks around the same level. It's kind of funny how Intel and AMD are both starting to bring in features and do things that IBM has already done.

 

I'm waiting to see who will decide to implement massive multithreading with 4-way or 8-way SMT.


12 minutes ago, porina said:

The specific use cases I'm thinking about are comparable to Prime95 and Linpack, as they most closely resemble my interest areas. On consumer CPUs, the limitation is more on RAM bandwidth than core potential. Dual-core Intels are OK; quad cores require high-speed RAM to mitigate this. I haven't got a 6-core Coffee Lake yet, but I'd expect that to be heavily bottlenecked. This will only get worse with wider AVX-512 deployment, rising core counts and stagnant RAM bandwidth growth.

 

Having said that, as we go to 8 cores and beyond, the total L3 cache starts becoming large enough to mitigate RAM bandwidth needs in itself, assuming they keep the ring structure and not the also bandwidth-starved mesh used on Skylake-X.

 

I wonder if it would be technically worthwhile to take a leaf from flash and go vertical? You're adding more complexity in layers, but that trades off against smaller area. Otherwise, in essence, they do a Ryzen?

 

If some think things are bad enough now with software not making good use of multiple cores, it isn't going to get any better if NUMA-like architectures become standard. See Threadripper for an example of that today.

CPUs produce orders of magnitude more heat. That's why they can't go vertical, but they also don't need to. We're headed towards the space between NUMA and monolithic dies; chiplets are just that. You make your cores + cache on 10 nm, your memory controller on 14 nm, and your other SoC parts on 22 nm. Certain products don't need to scale down if you don't have to redesign them for a new node. SoC-based networking has been around since... 65 nm? Why does a new quad-core need a fully redesigned 1 Gb network controller on the SoC when the previous one was more than enough? Well, because it's a completely new CPU design, since it's a new node.

 

That's where Intel will fight back against AMD on cost. It has cost them a lot to get there, but it'll be well worth it.

 

8 minutes ago, tjcater said:

While Zen 2 is supposed to be AM4, I think the real question is whether a new chipset will be necessary, and if so, whether they will do what Intel did, where past CPUs will not run on it. If DDR5 support comes to AM4, then we will know for sure, but maybe Zen 2/3 (or whatever design brings DDR5) would have a dual DDR4/DDR5 controller and support current chipsets at least. (I don't have enough knowledge of processor design and chipsets, so maybe even a different memory controller would be enough to make Zen 2/3 incompatible with X370.)

I'd be really, really surprised if we got DDR5 on AM4. AMD's public roadmap lines up Zen 3 with DDR5 on a new PCH in 2020. If DDR5 is late, we might see a Zen 2 refresh, though.


1 minute ago, Taf the Ghost said:

AMD's public roadmap lines up Zen 3 with DDR5 on a new PCH in 2020. If DDR5 is late, we might see a Zen 2 refresh, though.

Well, at least that's less reason for Zen 2 not to support current chipsets. Since AMD only said they would support AM4 until at least 2020, and Zen 3 has a new PCH, I guess Zen 3 would mark the start of AM5?


I find it weird that Intel drops every new gen so fast.

Ex frequent user here, still check in here occasionally. I stopped being a weeb in 2018 lol

 

For a reply please quote or  @Eduard the weeb me :D

 

Xayah main in LoL, trying to learn drums and guitar. Know how to film, do photography, can do basic video editing.

 


14 minutes ago, Dylanc1500 said:

Intel is working towards what IBM did with POWER8 and their Centaur chips. They are even starting to implement the same scalability and are trying to keep clocks around the same level. It's kind of funny how Intel and AMD are both starting to bring in features and do things that IBM has already done.

 

I'm waiting to see who will decide to implement massive multithreading with 4-way or 8-way SMT.

I don't expect we'll get 4-way SMT, but you never know with AMD now. Any version of SMT is really just a hardware-based prioritization scheme. The reason it works comes down to the way computers operate and the fact that they're normally waiting for commands rather than processing them. Though an 8-way implementation would be hilarious to see on a consumer part.


7 minutes ago, tjcater said:

Well, at least that's less reason for Zen 2 not to support current chipsets. Since AMD only said they would support AM4 until at least 2020, and Zen 3 has a new PCH, I guess Zen 3 would mark the start of AM5?

AMD is only going to support new desktop CPUs until 2019; the 2020 releases will be Zen 2 APUs, since they're running those a year behind their desktop schedule. (Which makes business sense for AMD, but that's another topic.) AMD has been a tad cagey about what they mean by "support" for the platform, because their roadmap makes it pretty clear they'll need a new socket for Zen 3.

 

DDR5 is a big deal, so we'll have to see if it's still on schedule to be available in late 2019. We're still at least 18 months away from knowing for sure whether it'll be available for CPU releases in 2020.


1 minute ago, Taf the Ghost said:

I don't expect we'll get 4-way SMT, but you never know with AMD now. Any version of SMT is really just a hardware-based prioritization scheme. The reason it works comes down to the way computers operate and the fact that they're normally waiting for commands rather than processing them. Though an 8-way implementation would be hilarious to see on a consumer part.

I know, and given how most developers of consumer applications (not just games) already handle plain multi-core, throwing them the curveball of 4- or 8-way SMT would just mean one more thing for them to take into consideration.

 

Actually, now that I think about it, if they can implement it properly it wouldn't be a terrible idea, although the resulting show from developers and marketers would be fantastic. I can see the arguments now: "I have a 12-core, 96-thread CPU." "Yeah, well I have 24 cores and 48 threads." I'll go get popcorn now.


8 minutes ago, Dylanc1500 said:

I know, and given how most developers of consumer applications (not just games) already handle plain multi-core, throwing them the curveball of 4- or 8-way SMT would just mean one more thing for them to take into consideration.

 

Actually, now that I think about it, if they can implement it properly it wouldn't be a terrible idea, although the resulting show from developers and marketers would be fantastic. I can see the arguments now: "I have a 12-core, 96-thread CPU." "Yeah, well I have 24 cores and 48 threads." I'll go get popcorn now.

I'd be curious what actual uplift IBM got from those mass-SMT setups. They run such specialized code that I doubt it'd apply to more normal programs, but it might be hilarious for AMD to go to 4-way SMT. But I'd bet the performance uplift just isn't worth it, as it does take up die space, and that performance is only there if you can peg the cores.

 

Now, when we're at 5 nm in the ~2024 range and AMD has independent server designs? That might be worth it.


14 minutes ago, Taf the Ghost said:

I'd be curious what actual uplift IBM got from those mass-SMT setups. They run such specialized code that I doubt it'd apply to more normal programs, but it might be hilarious for AMD to go to 4-way SMT. But I'd bet the performance uplift just isn't worth it, as it does take up die space, and that performance is only there if you can peg the cores.

 

Now, when we're at 5 nm in the ~2024 range and AMD has independent server designs? That might be worth it.

IBM got some substantial uplifts with it. I can tell you this much: POWER9 is a monster and fantastic to work with. Honestly, no, it wouldn't be worth it in the consumer space unless they can bring costs down. That being said, if you are working with large-dataset databases, nothing can compete currently, especially in I/O, except the z14, but that's a whole different beast (financial industries love them).

 

I'm honestly getting a bit frustrated with node names not being more standard, not to mention the fact that most people (even within the industry) don't understand that node size isn't everything for how a node will perform.


32 minutes ago, Dylanc1500 said:

I know, and given how most developers of consumer applications (not just games) already handle plain multi-core, throwing them the curveball of 4- or 8-way SMT would just mean one more thing for them to take into consideration.

 

Actually, now that I think about it, if they can implement it properly it wouldn't be a terrible idea, although the resulting show from developers and marketers would be fantastic. I can see the arguments now: "I have a 12-core, 96-thread CPU." "Yeah, well I have 24 cores and 48 threads." I'll go get popcorn now.

 

19 minutes ago, Taf the Ghost said:

I'd be curious what actual uplift IBM got from those mass-SMT setups. They run such specialized code that I doubt it'd apply to more normal programs, but it might be hilarious for AMD to go to 4-way SMT. But I'd bet the performance uplift just isn't worth it, as it does take up die space, and that performance is only there if you can peg the cores.

 

Now, when we're at 5 nm in the ~2024 range and AMD has independent server designs? That might be worth it.

IBM gets far better performance on transactions and data analytics, which scale very nicely with threads but, because of cache coherency requirements, scale poorly with additional cores. They're called scale-up workloads for this very reason, and it's why IBM runs 12C/96T 4.3 GHz chips with 128 MB L4 caches at a whopping 300 W TDP (and there are key differences between their Z architecture and POWER8/9 to account for perfect failover). They're unbeaten in the SAP business analytics and banking benchmarks for this reason.
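(For the arithmetic: 12 cores × 8 threads per core = 96 threads; POWER8 supports SMT8 per core, which is the 8-way SMT discussed above.)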


10 minutes ago, Dylanc1500 said:

IBM got some substantial uplifts with it. I can tell you this much: POWER9 is a monster and fantastic to work with. Honestly, no, it wouldn't be worth it in the consumer space unless they can bring costs down. That being said, if you are working with large-dataset databases, nothing can compete currently, especially in I/O, except the z14, but that's a whole different beast (financial industries love them).

 

I'm honestly getting a bit frustrated with node names not being more standard, not to mention the fact that most people (even within the industry) don't understand that node size isn't everything for how a node will perform.

I wouldn't say nothing competes. Oracle's SPARC M7 stood up incredibly well against POWER8. And with Intel's expansion of SMT to 4-way in KNL, there are actually some cases where those 72-core Xeon Phi parts put on a marvellous showing.

