
The End of CPU Advancement on our Doorstep (Moore's Law and the 7nm Barrier) Discussion

If hardware advancement does stop, perhaps it'll cause people to care more about improving efficiency in software. Too often, I think, people make it "good enough" and then move on, not bothering to optimize more than they need to achieve what they wanted. I mean, why bother if you'll just get more powerful hardware soon? For that reason, for the first few years, it wouldn't be a terrible thing imo xD

 

For evidence I point to how differently an old game and a new game with graphics turned down compare in terms of performance and appearance. Older games tend to have a higher "appearance per fps" in my experience, since at the time that's all the hardware they had available, so if they wanted to make it look good they had to actually try. Now they just test it on the weakest machine they have lying around, which is often still much more powerful, and if it works there it's good enough.


I don't get the whole nm aspect of CPUs, so I'll ask: why is the 7nm "barrier" such an issue? When we hit 7nm, why don't we just start making CPUs bigger, or use multiple dies like Threadripper? Or does it not work like that?


Speculative multithreading will save us 


11 hours ago, gabrielcarvfer said:

The nm thing is usually the size of the transistor gate, that allows current to flow between the source and drain...

Finally someone who understands lol...
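
For anyone wondering why that number gets so much attention: to a first approximation, transistor density scales with the inverse square of the feature size, so each full node shrink roughly doubles what you can pack into the same die area. A quick back-of-the-envelope sketch (idealized scaling only - real "nm" node names no longer track physical dimensions this cleanly):

```cpp
#include <cstdio>

int main() {
    // Idealized scaling: transistor density goes roughly with 1 / (feature size)^2.
    // Real "7nm" and "5nm" nodes don't scale this cleanly, so these are ballpark numbers.
    const double nodes_nm[] = {28.0, 14.0, 10.0, 7.0, 5.0};
    const double reference = 28.0;

    for (double node : nodes_nm) {
        double density_gain = (reference / node) * (reference / node);
        std::printf("%4.0f nm -> ~%.1fx the transistor density of %g nm\n",
                    node, density_gain, reference);
    }
    return 0;
}
```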


3 minutes ago, TechyBen said:

Not for long. I assume we are already near the limit of this.

Yep. I still agree with new ISAs from earlier. I mean say we create x92 or x128-bit instruction sets for CPUs. Man the improvements could be astounding!


While it would be VERY interesting to see how a 5nm CPU would be made, aside from that, I would not purchase one. At least not any time soon, considering an average consumer would not have the need for such a CPU.

 

Very good read though!


Just for the sake of it, what are the odds that we may see DIMM style "daughterboard" CPUs, kinda like the old Pentium IIs?

It could be an option to increase actual die size without suffering as much space limitation on the motherboard.


 

[snail pictured for no particular reason]


1 minute ago, NMS said:

While it would be VERY interesting to see how a 5nm CPU would be made, aside from that, I would not purchase one. At least not any time soon, considering an average consumer would not have the need for such a CPU.

 

Very good read though!

Agreed, especially with the pretty much guaranteed 100%+ increase in price for 5nm CPUs due to manufacturing difficulties.

 

And thank you, I put more thought into this thread than I've ever put into any LTT topic!


6 minutes ago, Chevy_Monsenhor said:

Just for the sake of it, what are the odds that we may see DIMM style "daughterboard" CPUs, kinda like the old Pentium IIs?...

Oh I totally forgot about those...

 

Hmmmm, you could really be onto something there!

 

EDIT: The only issue I could really see there is heat. I mean, how do you effectively cover that thing with a CPU cooler, especially when it's mounted in its slot and the bottom area is covered up? And then what about adding more pins to stay competitive with today's CPUs?


2 minutes ago, Chevy_Monsenhor said:

Just for the sake of it, what are the odds that we may see DIMM style "daughterboard" CPUs, kinda like the old Pentium IIs?

Could be an option to increase actual die size and not suffer with as much space limitation on the MBs.

<snip pic>

 

[snail pictured for no particular reason]

This could happen, however I suspect not - the main reason this worked reasonably well back then is the (relatively) low frequency everything was running at. Now, with things running in the multi-GHz range, the daughterboard would act as an antenna and make the signals around it, shall we say, interesting to work with. The PII already had some issues with being a huge ball of electronic interference back in the day. When I walked into a room with more than 2-3 PIIs in it, my cell phone would cut out completely until I got about 20' away from the center of the group of computers. Of course this was always nice at work because it provided a place where the engineers could sit where nobody could actually call them...


5 minutes ago, AncientNerd said:

This could happen, however I suspect not - the main reason this worked reasonably well back then is the (relatively) low frequency everything was running at...

Good point, that didn't come to mind.

But still, I'm pretty sure engineers could come up with a way to make this work...


6 minutes ago, AncientNerd said:

This could happen, however I suspect not... 

Huh, never knew the interference was that bad. Lol no cell phones at work yay!


9 minutes ago, Chevy_Monsenhor said:

Good point, that didn't come to mind.

But still, I'm pretty sure engineers could come up with a way to make this work...

 

8 minutes ago, WallacEngineering said:

Huh, never knew the interference was that bad. Lol no cell phones at work yay!

Well, to be fair, it was early days for cell phones, so I suspect they didn't give a lot of thought to the whole "radiates on cell frequency" thing.

 

But yeah, it really irritated the PMs that to reach the engineers at their desks they had to either walk over or send email 9_9.


You know, guys, I really enjoy all the info and ideas that have been brought forward; this is what I would consider good community discussion.

 

Please continue throwing your thoughts out there, because they are really interesting!

 

However, consider that all these things - whether it be carbon nanotubes, quantum computing, new ISAs, slotted CPUs, bigger CPUs, or any other solution discussed here - are massively game-changing, and would require massive R&D, time, and manufacturing maturity before ever coming to fruition.

 

When you take a look at all the different ideas and just how crazy and different they all get, you can pretty much come to a solid conclusion:

 

It's gonna happen. None of these ideas are even close to ready and won't be ready by 2020-2022, so either a massive slowdown or a complete halt of CPU advancement seems inevitable at this point. It may not last as long as I think it will, but regardless, it's looking more and more like 2020-2022 will be the BEST time to build a PC in the industry's history.

 

So if I were you guys, I would start saving up...


12 minutes ago, WallacEngineering said:

-Snip-

Wrong button, whoops

 

Can't wait for a 40-thread Threadripper though, there's a very good chance of it on Zen 3 7nm+!


9 minutes ago, WallacEngineering said:

You know, guys, I really enjoy all the info and ideas that have been brought forward...

None of these ideas are even close to ready and won't be ready by 2020-2022, so either a massive slowdown or a complete halt of CPU advancement seems inevitable at this point...

So if I were you guys, I would start saving up...

Maybe they won't be ready, but the thing is, all of these companies have known this is coming for the last 10-15 years, so we don't know what R&D has already been done. There are already some baseline solutions (i.e., Ryzen/Threadripper) that extend the life of the current tech through multi-die systems, and there are indications in press releases that Intel and Nvidia are moving in the same direction. So that may solve the next few generations, and then we will see where they go next.

We already know that Broadcom and Qualcomm make extensive use of GaAs in their communications processors, so this is a mostly solved problem - but is it solved well enough for CPU/GPU-sized chips? Maybe, maybe not. Still, it is solved, and those communication processors are getting both smaller features and larger die sizes, with the largest of them now in production approaching CPU sizes. So... I would say close to solved, if not solved, for CPUs; probably not for GPUs yet.

 

As for the more exotic options, I suspect that will come down to some odd breakthrough that can't be predicted.


1 hour ago, WallacEngineering said:

Yep. I still agree with new ISAs from earlier. I mean say we create x92 or x128-bit instruction sets for CPUs. Man the improvements could be astounding!

I am not sure that's how bit depth helps computing.
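
To make that concrete: you can already do 128-bit integer math on a 64-bit CPU today - the compiler just splits it into a few 64-bit operations - so a hypothetical "x128" wouldn't speed up code whose values already fit in 64 bits. A minimal sketch (assumes GCC or Clang, which expose an `__int128` type):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // 64-bit add: a single instruction on x86-64.
    uint64_t a = 0x0123456789ABCDEFull, b = 42;
    uint64_t sum64 = a + b;

    // 128-bit add: the compiler emits an add plus an add-with-carry on 64-bit halves.
    // Wider registers only pay off when your data is actually that wide.
    unsigned __int128 big = (unsigned __int128)a << 64 | b;
    unsigned __int128 sum128 = big + 1;

    std::printf("sum64 = %llu\n", (unsigned long long)sum64);
    std::printf("low 64 bits of sum128 = %llu\n", (unsigned long long)(uint64_t)sum128);
    return 0;
}
```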


I'm not sure companies will just come to a full stop, but what I do suspect is that AMD and Intel will work a lot more on the software side, trying to offer better solutions for parallel work so that everyday apps benefit, not just specialized ones.

 

I know the argument is usually that the level of complexity just wouldn't make that work, but once they don't have to spend as much R&D on the physical side, I believe they will devote a lot more to advancing software techniques so that they can move to mammoth-sized Threadripper-style chips, multi-CPU rigs for consumers, etc.

 

But it's almost impossible to accurately predict whether this can even pan out, so I could be just dead wrong.


On 3/18/2018 at 4:14 AM, M.Yurizaki said:

Threadripper is two Zeppelin dies (I mean, four if you want to include the dummy ones) on a single package. It's not a single CPU.

 

Exactly. That's how you overcome the exponential increase in cost that would take place if you tried to achieve the same thing with a monolithic design. The key is to have a fast and efficient enough interconnect, which Infinity Fabric seems to be for now; in any case, future successors to it may be better.
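
Rough illustration of why the multi-die route wins on cost: with a simple first-order yield model (good dies ~ exp(-defect density x area)), the wafer area you burn per working chip grows much faster than the die itself. The defect density and die sizes below are made-up numbers just to show the shape of the effect; packaging and interconnect costs are ignored:

```cpp
#include <cmath>
#include <cstdio>

// First-order yield model: fraction of good dies ~ exp(-defect_density * area).
// The defect density and areas below are illustrative, not real foundry numbers.
double yield(double defects_per_mm2, double area_mm2) {
    return std::exp(-defects_per_mm2 * area_mm2);
}

int main() {
    const double d0 = 0.002;        // defects per mm^2 (made-up value)
    const double mono_mm2 = 400.0;  // one monolithic die
    const double chip_mm2 = 200.0;  // one of two chiplets

    double y_mono = yield(d0, mono_mm2);
    double y_chip = yield(d0, chip_mm2);

    // Wafer area you have to pay for per *working* chip ~ area / yield.
    double cost_mono = mono_mm2 / y_mono;
    double cost_chiplet = 2.0 * chip_mm2 / y_chip;

    std::printf("Monolithic: %.0f%% yield, ~%.0f mm^2 of wafer per good CPU\n",
                y_mono * 100.0, cost_mono);
    std::printf("2 chiplets: %.0f%% yield each, ~%.0f mm^2 of wafer per good CPU\n",
                y_chip * 100.0, cost_chiplet);
    return 0;
}
```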

 

15 hours ago, Ryan_Vickers said:

If hardware advancement does stop, perhaps it'll cause people to care more about improving efficiency in software. Too often, I think, people make it "good enough" and then move on, not bothering to optimize more than they need to achieve what they wanted. I mean, why bother if you'll just get more powerful hardware soon? For that reason, for the first few years, it wouldn't be a terrible thing imo xD

Man, the times I've heard "RAM is cheap anyways" as a response to random programs using unjustified amounts of memory, when not directly leaking due to a bug. "RAM is cheap", they said. Oh, the irony... :D 
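
A toy sketch of the kind of waste that attitude produces - the same million on/off flags stored three different ways. The sizes are computed from type sizes rather than measured, and allocator overhead is ignored, so treat the numbers as ballpark:

```cpp
#include <cstddef>
#include <cstdio>
#include <memory>

int main() {
    const std::size_t n = 1'000'000;

    // Option 1: each flag lives behind its own heap allocation (the "convenient"
    // pattern a lot of high-level code ends up with): pointer + boxed int.
    std::size_t boxed = n * (sizeof(std::unique_ptr<int>) + sizeof(int));

    // Option 2: a plain array of ints used as booleans.
    std::size_t ints = n * sizeof(int);

    // Option 3: a packed bitset, 8 flags per byte.
    std::size_t bits = n / 8;

    std::printf("boxed ints : ~%zu KB\n", boxed / 1024);
    std::printf("array<int> : ~%zu KB\n", ints / 1024);
    std::printf("packed bits: ~%zu KB\n", bits / 1024);
    return 0;
}
```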

 

You also see it often in discussions about programming languages, where a lot of weight is put on how easy each language makes the programmer's life and not so much on how fast the code runs once written (when that concern isn't dismissed outright because "it doesn't matter with current hardware").

 

Quote

And as much as software developers would like to get off their "lazy butts", there's a ton of other issues that have to be solved beyond coding the software. 

One of those issues is sometimes management. I'm not a software developer myself, but I know a few, and I've heard some horror stories about managers with barely any clue about programming who sold a customer a chimeric project they can't quite explain, and oh, the time frame is this ridiculously close deadline, I'm counting on you, geniuses. (I'm sure you can find similar stories replacing promises to business customers with public commitments to unfeasible release dates.)

The story usually ends with developers delivering some barely functional Frankenstein that they would send others to jail for, and expecting a wave of complaints and patching to fall on them further down the road...


3 hours ago, WallacEngineering said:

Yep. I still agree with new ISAs from earlier. I mean say we create x92 or x128-bit instruction sets for CPUs. Man the improvements could be astounding!

No. They wouldn't. They'd probably be slower in fact. 

People forget that vector is a thing. 

If tech exists that can bring great performance without a radical change in computing (a different computing model, different materials), then it gets adopted. In this case, you'd just be adding transistors, and the performance improvements would only be felt in an incredibly small number of applications.

Most other things would be slower as well. 
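
For example, a plain element-wise loop like the sketch below is exactly what the existing SSE/AVX vector units already chew through - with optimizations enabled, GCC and Clang typically emit 128- or 256-bit vector instructions here without any source changes, no new ISA required:

```cpp
#include <cstdio>
#include <vector>

// With -O2/-O3, mainstream compilers usually auto-vectorize this loop into
// SSE or AVX instructions, processing 4-8 floats per instruction.
void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = a[i] + b[i];
    }
}

int main() {
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), out(1024);
    add_arrays(a.data(), b.data(), out.data(), a.size());
    std::printf("out[0] = %.1f\n", out[0]);
    return 0;
}
```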


3 hours ago, TechyBen said:

Not for long. I assume we are already near the limit of this.

Well, many-core CPUs have been shown to be significantly more efficient in power and area if you can code for them. A dynamic approach to multithreading could make use of that, if we ever get it to work.
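
As a minimal sketch of what "code for them" means in practice - split a reduction into chunks and hand each chunk to its own task (standard C++ here, nothing exotic; real gains depend on the work being big enough to hide the threading overhead):

```cpp
#include <cstddef>
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Split a sum into `tasks` chunks and run each chunk as an async task.
long long parallel_sum(const std::vector<int>& data, unsigned tasks) {
    std::vector<std::future<long long>> parts;
    std::size_t chunk = data.size() / tasks;

    for (unsigned t = 0; t < tasks; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == tasks) ? data.size() : begin + chunk;
        parts.push_back(std::async(std::launch::async, [&data, begin, end] {
            return std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        }));
    }

    long long total = 0;
    for (auto& p : parts) total += p.get();
    return total;
}

int main() {
    std::vector<int> data(1 << 20, 1);
    std::printf("sum = %lld\n", parallel_sum(data, 8));
    return 0;
}
```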


8 minutes ago, gabrielcarvfer said:

There are a number of optimizations that you can make increasing the number of bits, including adding a ton of additional registers.

Well, not really.

You'd only be able to add architectural registers if you had larger instruction encodings, and a different instruction format would break compatibility.

Also, extra registers only help if you are register-starved, which isn't really the case. Sure, x86 only has 8 32-bit general-purpose registers, and double that in x64 mode (if I recall correctly), in its architectural register file, which would be a constraint; but all modern CPUs can use a physical register file with register renaming: the Skylake µarch has 180 integer physical registers at its disposal.


I think before we start looking into exotic methods of building CPUs, we should look at the current state of them and the software that runs on them.

 

For example, what if we got rid of x86 entirely and compiled directly to microcode? Yes, I know that sounds like a pipe dream, but there is some tax involved in decoding x86 instructions into microcode. And then what if we got rid of legacy features? That would eliminate a lot of conditional checks at the hardware level and perhaps even vastly simplify the processor architecture (for every conditional you have, you at least double the number of outcomes).
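
Purely as a toy illustration of that last point (this is not how a real x86 decoder is written): every legacy mode or prefix a decoder has to honour multiplies the cases it must handle, which is exactly the kind of baggage that could be dropped:

```cpp
#include <cstdio>

// Toy model of "every legacy conditional at least doubles the outcomes".
// The flags and factors below are hypothetical and only count combinations.
enum class Width { Bits16, Bits32, Bits64 };

int decode_paths(bool real_mode_supported, bool segment_overrides, Width w) {
    int paths = 1;
    if (real_mode_supported) paths *= 2;   // real vs protected behaviour
    if (segment_overrides)   paths *= 2;   // with/without legacy segment prefix
    paths *= (w == Width::Bits64) ? 3 : 2; // operand-size variants to consider
    return paths;
}

int main() {
    std::printf("modern-only decoder paths: %d\n",
                decode_paths(false, false, Width::Bits64));
    std::printf("with legacy baggage:       %d\n",
                decode_paths(true, true, Width::Bits64));
    return 0;
}
```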


4 minutes ago, M.Yurizaki said:

I think before we start looking into exotic methods of building CPUs, we should look at the current state of them and the software that runs on them.

 

For example, what if we got rid of x86 entirely and compiled directly to microcode? Yes, I know that sounds like a pipe dream, but there is some tax involved in decoding x86 instructions into microcode. And then what if we got rid of legacy features? That would eliminate a lot of conditional checks at the hardware level and perhaps even vastly simplify the processor architecture (for every conditional you have, you at least double the number of outcomes).

The thing is, we tried something like this with "RISC" processors back in the late 1980s and early 1990s. RISC was supposed to take over computing and change everything. Well, it turns out that it is really, really (and I mean really) hard to optimize code for a reduced instruction set and not slow things down dramatically - orders of magnitude dramatically, like speed dropping by a factor of 1,000 or 10,000. There is a reason microcode is a specialized field: it's damn hard to do, let alone do right. Trying to build a generalized compiler for microcode is even harder. I was on a couple of teams in the early 1990s that looked into this problem, and after much spending and thrashing we determined that it was not a good place to go (well, it included the companies going under while we were burning money trying to solve this type of problem).

