The End of CPU Advancement on our Doorstep (Moore's Law and the 7nm Barrier) Discussion

Since all of the comments above are talking about TVs or something, I'll get back on topic and ask: why don't they just make bigger CPUs, like the ones from the early 2000s?

ASUS X470-PRO • R7 1700 4GHz • Corsair H110i GT P/P • 2x MSI RX 480 8G • Corsair DP 2x8 @3466 • EVGA 750 G2 • Corsair 730T • Crucial MX500 250GB • WD 4TB

You're assuming we'll stop at the status quo even if it means no extra performance.

 

More and more work WILL fall upon devs to extract performance.

There are ways to improve performance far beyond what we have today without radical changes in tech, if only we can make use of them.

 

There are three major ways of extracting performance in a processor: DLP, ILP and TLP. DLP (data-level parallelism) has been used in some form for a long time in SIMD tech like AVX, SSE and NEON. Using vector units, we can drastically speed up calculations on large arrays of data. Even today, this is only somewhat used in general workloads.
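
For anyone curious what that looks like in practice, here's a minimal sketch of DLP using AVX intrinsics (assuming an x86-64 CPU with AVX and a compiler flag like -mavx; the function name and the array-add workload are just for illustration):

```cpp
// Minimal sketch of DLP via AVX intrinsics: add two float arrays 8 lanes at a time.
// Assumes an x86-64 CPU with AVX, compiled with e.g. -mavx (g++/clang++).
#include <immintrin.h>
#include <cstddef>

void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {                      // process 8 floats per iteration
        __m256 va = _mm256_loadu_ps(a + i);           // unaligned load of 8 floats
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i)                                // scalar tail for the leftovers
        out[i] = a[i] + b[i];
}
```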

 

TLP (thread-level parallelism) involves multi-threading in a processor (multi-core, SMT). If software developers can use these effectively, we can drastically improve performance as well with many-core technology. Unlike ILP, TLP benefits directly from node shrinks: density improvements let you fit more cores, and those extra cores yield direct performance gains.
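
As a rough illustration of TLP, here's a small sketch that splits a summation across however many hardware threads the machine reports, using std::thread (the chunking scheme and the workload are made up for the example; compile with -pthread on gcc/clang):

```cpp
// Minimal sketch of TLP: split a summation across hardware threads with std::thread.
// Core count is queried at runtime; the split is illustrative, not tuned.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<std::uint64_t> data(1 << 24, 1);      // 16M ones, so the sum is known
    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::uint64_t> partial(n_threads, 0);
    std::vector<std::thread> workers;

    std::size_t chunk = data.size() / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {     // each thread sums its own slice
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end,
                                         std::uint64_t{0});
        });
    }
    for (auto& w : workers) w.join();

    std::uint64_t total = std::accumulate(partial.begin(), partial.end(),
                                          std::uint64_t{0});
    std::cout << total << '\n';                       // expect 16777216 (2^24)
}
```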

 

The extraction of ILP (instruction-level parallelism) through (super)pipelining, superscalar pipelines and OoOE with register renaming has allowed processors to increase in speed substantially since the 1980s. But these technologies aren't necessarily the most efficient use of transistors. According to some papers, doubling the issue width of a processor only yields around a 40% increase in performance. Out-of-order execution can help make use of more functional units, but we are continually increasing the size of the ROB, schedulers and buffers in an effort to increase utilization. We're reaching the limits of how much ILP we can extract, and therefore of how fast we can make a single thread. That's why we need to move towards multi-threading.
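
A toy illustration of that ILP ceiling: the first loop below is one long dependency chain, so even a very wide out-of-order core retires roughly one add per cycle, while the second keeps four independent accumulators the scheduler can overlap (exact speedups vary by CPU; this only shows the dependency structure):

```cpp
// Toy illustration of the ILP ceiling. serial_sum is a single dependency chain:
// each add needs the previous result, so a 4-wide or 8-wide core still retires
// roughly one useful add per cycle. unrolled_sum has four independent chains the
// out-of-order scheduler can run in parallel, so wider issue actually helps.
#include <cstddef>
#include <cstdint>

std::uint64_t serial_sum(const std::uint64_t* a, std::size_t n) {
    std::uint64_t s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];                          // s depends on the previous iteration
    return s;
}

std::uint64_t unrolled_sum(const std::uint64_t* a, std::size_t n) {
    std::uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {            // four independent accumulators
        s0 += a[i];      s1 += a[i + 1];
        s2 += a[i + 2];  s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];          // scalar tail
    return s0 + s1 + s2 + s3;
}
```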

 

All those structures used to extract more ILP aren't cheap. Using SMT to keep a pipeline busy is MUCH cheaper than implementing OoOE. The early Atoms were a perfect example of this.

VLIW is another good example. With a good compiler, we can get rid of most if not all scheduling and dependency-checking hardware to make a smaller and cheaper core. We're moving more and more towards GPU-everything, but that doesn't mean performance will stall.

 

 

 

 

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SDD + 7TB HDD, Phanteks enthoo pro case

2 hours ago, aezakmi said:

Since all of the comments above are talking about TVs or something, I'll get back on topic and ask: why don't they just make bigger CPUs, like the ones from the early 2000s?

I don't think it's a question of size; they can put more cores in it, but that is irrelevant to most people. If core clock doesn't increase, what's the point of having 100 cores for the average user?

Going back to a higher-nm process would make you buy a nuclear-powered PSU to power it.

 

That's my take on the issue anyway.

.

4 hours ago, SteveGrabowski0 said:

The mid-gen console refreshes happened because 4K TVs got really cheap at the same time GPUs were able to be manufactured on 14nm nodes. The Xbox One X doesn't have any more Jaguar cores than the original Xbox One.

Huh, I swore it did; I remember that from the Linus video where Luke reviews the One X. I'll have to look it up again.

Top-Tier Air-Cooled Gaming PC

Current Build Thread:

 

4 hours ago, aezakmi said:

Since all of the comments above are talking about TVs or something, I'll get back on topic and ask: why don't they just make bigger CPUs, like the ones from the early 2000s?

I personally think it's efficiency and heat output. Intel and AMD are no longer interested in selling you a monster 200W+ TDP mainstream chip.

 

Because as always, improved efficiency is just as important to a lot of people as performance gains.

Top-Tier Air-Cooled Gaming PC

Current Build Thread:

 

6 hours ago, Damascus said:

Dude, you're getting hung up on 7/5nm way too much. I'm willing to bet right now that before any kind of price inflation happens we're going to see die size go up, followed by more efficient CPU designs, followed by a massive surge into some new technology (quantum computing, perhaps).

 

There will never be a $1000 Pentium (in 2018 dollars, accounting for inflation of course) because no one will buy it and Intel only cares about sales. They are more likely to stop selling new CPUs for a decade than sell ludicrously overpriced ones.

See, that makes a little more sense.

 

The problem, though, is that when the world needs to make a massive change to continue moving forward, we usually get hung up and run into issues that make it not possible for a while.

 

I'm not saying advancement will end forever, just that we are quite likely to take a "long break" from it.

 

Then there's the point I added to the OP: why do we even really need to upgrade CPUs anymore? The ones we have are more than powerful enough for the foreseeable future, if you have recently bought a higher-end chip (i5-8600K+ or R5 1600X+).

 

Considering that even the very first generation of Core i7s is still very powerful and relevant today, how long until, say, the 8700K becomes irrelevant?

Top-Tier Air-Cooled Gaming PC

Current Build Thread:

 

23 minutes ago, WallacEngineering said:

Considering that even the very first generation of Core i7s is still very powerful and relevant today, how long until, say, the 8700K becomes irrelevant?

My best bet is consumer quantum computers; Intel is already boosting R&D massively, and that's what will eventually make every current processor irrelevant.

Want to custom loop?  Ask me more if you are curious

 

23 hours ago, WallacEngineering said:

So INTEL THEMSELVES said in 2016 that transistors can only continue to shrink for about 5 more years. Guess what AMD has already announced for 2020-2021? Zen 3, their first 7nm+ CPU. Put two and two together and you get... four. It's as simple as that.

Silicon transistors. This doesn't include the use of different materials, or optimization of current transistors for better power and performance.

23 hours ago, WallacEngineering said:

Now the reason 7nm seems to be the physical limit is that beyond 7nm, the transistors would produce more heat than the power they would output, making the idea of 5nm CPUs an oxymoron, or paradox. Now I believe there is a small chance that scientists will figure out a way to produce 5nm CPUs, but the technology would be extremely difficult and expensive to manufacture, so much so that none of us consumers would be interested in upgrading from 7nm to 5nm CPUs.

It's because of leakage. This is the same issue we faced at 20nm, which Intel solved by going FinFET. We could totally see something similar happen again.

23 hours ago, WallacEngineering said:

This next part is purely my own speculation but its regarding the price of 5nm CPUs if we get there. So if we could purchase 5nm CPUs, they are likely to be so expensive that we will keep our 7nm CPUs instead. You think GPU pricing is bad right now? Try $1000+ entry level Pentium CPUs, or i9 extreme CPUs that cost tens of thousands of dollars. Ya, I don't know about you but I am staying WELL away from 5nm CPUs.

There's no way that would happen, because no one would buy anything, and Intel would be unable to make profits. This doesn't make sense.

23 hours ago, WallacEngineering said:

Now you may be thinking: "Well, why not just increase the physical size of CPUs so more 7nm transistors can fit into them?" Well, that is technically a great idea, but it's got a few issues. One, there needs to be room between the CPU and RAM for cooler mount clearance, so going much bigger than Threadripper isn't really possible, and if we start pushing the RAM slots out then we start changing the ATX form factor standard, which, trust me, isn't going to happen. This would mean all cases, power supplies, and other accessories would need to be completely redesigned to fit these new motherboards, and all this work would be done for a technology that will only last a few more years anyway. The largest issue, however, would be heat output. The current i7-8700K is already a monster of a heat producer, and that's just a mainstream CPU. Imagine a CPU with more than double the 8700K's processing power! Heat production would likely be so intense that even the most expensive custom water loop solutions would struggle to cool it, and don't even THINK about tying your GPU(s) into that loop, not gonna happen, especially as GPUs are likely to face the same issues as CPUs.

If you were to use something like EMIB or an interposer to stuff a ton of dies together, I'm pretty sure a substrate the size of the one used on Threadripper could house over 1000 mm^2 worth of silicon.

23 hours ago, WallacEngineering said:

Say it's the year 2020. Zen 3 is out and you decide to pick up the new (just guessing on the specs here) Threadripper 4950X with 20 cores and 40 threads that can reach 6GHz with some overclocking. How much of that processing power are you EVER going to use? When do you think that kind of compute performance will become obsolete and REQUIRE replacement? Software is a good two decades behind fully utilizing high-end hardware as things currently sit. And that's not even talking about something like Threadripper. It's likely to take two decades to fully utilize the i7-8700K in just about ANY software that isn't related to content creation or rendering!

GPUs are getting faster too, so we need faster CPUs to feed them for games. In addition, faster storage in the future might need a faster CPU to process all the data quickly enough (it's already the case that using 1T vs 8T can make a pretty big impact on storage performance), and faster CPUs at the same power also means same-performance CPUs at lower power, which is great for laptops, smartphones, your electric bill, noise, and thermals.

23 hours ago, WallacEngineering said:

Now let's look at current displays. Displays have already hit their "Moore's Law wall"! A scientific test was conducted that revealed that even the healthiest human eye cannot detect the difference between 1440p and 4K displays from more than a few inches away from the screen. So unless you like to sit so close to your monitor or TV that you can't even see half the screen and like to burn out your retinas, 4K is simply a marketing gimmick and has no real benefit whatsoever.

Some of you may know that I actually own a Samsung MU6500 Curved Glass 4K Smart TV that I use as my monitor, and you may be wondering why. Well, my previous TV was a 7-year-old, crappy, low-quality, cheap Sanyo 1080p flat screen. The image quality was terrible, the colors were abysmal, and it was so old that I figured it would die soon, so I sold it before it kicked the bucket on me. Plus, I bought the Samsung on Black Friday 2017, so $500 for a $1000 TV? Sure, sign me right up.

This display is also WAY better in terms of gaming capability. When you select "Game Console" as the source, it disables "Motion Rate" interpolation and most of the graphics processing options, and even lowers input lag. I can honestly say I can't tell the difference between this TV and any high-refresh gaming monitor I have ever played on. It's THAT fast and smooth, way to go Samsung! Anyways, here is the article that explains the scientific test:

Nononononono. Maybe a 4K TV is fine for you, if it's like 49", but what if it's like 80"? What if it's on a 32" screen but you're sitting 3 or 4x closer?

22 hours ago, WallacEngineering said:

Remember, 3840x2160 is 8,294,400 pixels. Can you really sit there and tell me that your eyes are capable of picking out eight million different objects on your 40-or-so-inch monitor? Even your 80" TV? Try loading 8 million different images onto your screen at once and tell me if you can see anything at all.

Yeah, let's just use big numbers to make it seem wrong! Great idea!

 

The human eye contains over 120 million rods and 6 million cones; 8 million is many times less than the number of rods alone. Not only that, but the human eye doesn't sense in pixels, it sees lines: a diagonal line may still appear jagged even if you can't discern the individual pixels.

 

You're not loading 8 million images... you're loading 8 million blots of color. It's not like your monitor can display 8 million cat pics  

22 hours ago, WallacEngineering said:

EDIT: OK, here's a good real-world example: I'm typing on my 4K TV right now. My current AM3 build runs a GTX 670, so it isn't really capable of 4K (even video lags a bit), so I turned down my Nvidia Control Panel settings to 1920x1080 desktop resolution. Let me switch to 3840x2160... Okay, so a slight improvement, nothing I would consider worth spending money on. Now that's coming from 1080p, not 1440p. So let me change my settings to 2560x1440... Nope, no noticeable difference WHATSOEVER.

 

The TV is the Samsung MU6500 49-inch Curved Glass 4K TV, and I am positioned exactly 79 inches from the center of the screen (the deepest point in any curved display).

Maybe it's fine at 49" and at 80 inches away, but what if it's an 80" TV? What if you're like me and sit less than a third of that distance away from your monitor?
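
For what it's worth, here's a quick back-of-the-envelope on that distance argument (assuming a 16:9 panel and the commonly quoted ~1 arcminute figure for normal visual acuity, which is itself a simplification):

```cpp
// Rough check on the viewing-distance argument: how big is one 4K pixel, in
// arcminutes, at a given screen size and distance? ~1 arcminute is a common
// (simplified) figure for normal visual acuity; treat the numbers as a sketch.
#include <cmath>
#include <cstdio>

double pixel_arcmin(double diag_in, double distance_in, int h_pixels) {
    const double kPi = 3.14159265358979323846;
    double width_in = diag_in * 16.0 / std::sqrt(16.0 * 16.0 + 9.0 * 9.0); // 16:9 panel width
    double pitch_in = width_in / h_pixels;                                 // one pixel, in inches
    return std::atan(pitch_in / distance_in) * 180.0 / kPi * 60.0;         // degrees -> arcmin
}

int main() {
    // 49" 4K TV viewed from 79" (the setup described above) vs a 32" 4K monitor at 24".
    std::printf("49\" 4K at 79\": %.2f arcmin/pixel\n", pixel_arcmin(49, 79, 3840));
    std::printf("32\" 4K at 24\": %.2f arcmin/pixel\n", pixel_arcmin(32, 24, 3840));
    // ~0.48 arcmin/pixel in the first case (individual pixels essentially unresolvable)
    // vs ~1.04 in the second (right at the usual acuity figure), which is why screen
    // size and sitting distance change the answer.
}
```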

20 hours ago, WallacEngineering said:

Splitting the die, hmmmm. It might help, but splitting the die means the two separate dies now need a way to communicate with one another. AMD's Infinity Fabric seems to work well, but it is slower than just having a single physical die. Therefore, if the communication bridge isn't any better than the CPU enhancements themselves, it's redundant to split the die. I suppose we can hope that Infinity Fabric improves with time.

You could use something like EMIB or an interposer to carry the communication, rather than a PCB, which should give much higher bandwidth and lower latency.

17 hours ago, WallacEngineering said:

Pretty much everyone already believes that 8K is a total gimmick and won't be worth the cost. So regardless of whether or not you think 4K is any better, we are still at the end of MEANINGFUL display resolution bumps.

Who believes this?!?

15 hours ago, WallacEngineering said:

If we are stuck at 7nm then efficiency can't increase, so power draw will get higher and higher on these larger and larger CPUs, along with heat output. So how do we effectively cool them using traditional and affordable methods once they put out so much heat that custom water loops struggle with it?

Just because you're stuck on the same node number doesn't mean efficiency can't go up. Through transistor optimizations (larger fins, for example) you can increase efficiency even while transistor size stays about the same, and you could always increase IPC/core count while lowering clock speeds to help.
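
A rough sketch of that "more cores at lower clocks" trade-off, using the textbook dynamic-power relation P ≈ C·V²·f and made-up voltage/frequency numbers purely for illustration:

```cpp
// Back-of-the-envelope for "more cores at lower clock": dynamic power scales
// roughly with C*V^2*f, and voltage can usually drop with frequency, so power
// falls faster than linearly. All numbers below are illustrative, not measured.
#include <cstdio>

int main() {
    // Config A: 8 cores at 4.5 GHz, 1.30 V.  Config B: 12 cores at 3.6 GHz, 1.10 V.
    double throughput_a = 8  * 4.5;            // arbitrary units: cores * GHz (same IPC assumed)
    double throughput_b = 12 * 3.6;

    double power_a = 8  * 1.30 * 1.30 * 4.5;   // per-core capacitance folded into the units
    double power_b = 12 * 1.10 * 1.10 * 3.6;

    std::printf("throughput A=%.1f  B=%.1f\n", throughput_a, throughput_b);
    std::printf("relative power A=%.1f  B=%.1f\n", power_a, power_b);
    // Config B delivers ~20% more aggregate throughput at ~15% lower modeled power,
    // assuming the workload actually scales across the extra cores.
}
```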

1 hour ago, WallacEngineering said:

Then there's the point I added to the OP: why do we even really need to upgrade CPUs anymore? The ones we have are more than powerful enough for the foreseeable future, if you have recently bought a higher-end chip (i5-8600K+ or R5 1600X+).

With software being better optimized for multiple cores, I do foresee a high-core-count arms race in the future.

1 hour ago, Damascus said:

My best bet is consumer quantum computers; Intel is already boosting R&D massively, and that's what will eventually make every current processor irrelevant.

For regular consumers, though, I don't see quantum computers becoming a thing for at least a few decades. Maybe for HPC, but quantum computers in their current state are too expensive, bulky, and power hungry (the cooling systems require tons of power) to be practical for most.

Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: Hyper X Fury DDR4 1x8gb 2133 MHz, Storage: PNY CS1311 120gb SSD + two Segate 4tb HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 

3 hours ago, WallacEngineering said:

Huh, I swore it did; I remember that from the Linus video where Luke reviews the One X. I'll have to look it up again.

It has a much stronger GPU, but it's Jaguar cores at 14nm clocked something like 31% higher than they were on the 28nm original, plus, I believe, some DX12 functions in hardware, and minus the ESRAM buffer the OG Xbox One had.

59 minutes ago, DocSwag said:

With software being better optimized for multiple cores, I do foresee a high-core-count arms race in the future.

Yeah, I didn't comment on that one, but I honestly see my 6950X lasting longer than any mainstream CPU, probably through 9th-gen Intel and until Zen hits 10 cores. Cores are already seeing higher utilization, and if the mainstream consumer has access to cheap HCC CPUs, the market will follow.

59 minutes ago, DocSwag said:
2 hours ago, Damascus said:

My best bet is consumer quantum computers; Intel is already boosting R&D massively, and that's what will eventually make every current processor irrelevant.

For regular consumers, though, I don't see quantum computers becoming a thing for at least a few decades. Maybe for HPC, but quantum computers in their current state are too expensive, bulky, and power hungry (the cooling systems require tons of power) to be practical for most.

I'm aware, I'm going on the assumption that 

  1. We don't move away from silicon
  2. This so-called complete stop will last for decade(s)
  3. A ludicrous amount of money will be thrown at making it work

I honestly think it's more likely that we'll see the kind of progress outlined in this article:

 

https://www.cnet.com/news/life-after-silicon-how-the-chip-industry-will-find-a-new-future/

Want to custom loop?  Ask me more if you are curious

 

59 minutes ago, Damascus said:

Yeah, I didn't comment on that one, but I honestly see my 6950X lasting longer than any mainstream CPU, probably through 9th-gen Intel and until Zen hits 10 cores. Cores are already seeing higher utilization, and if the mainstream consumer has access to cheap HCC CPUs, the market will follow.

I'm aware, I'm going on the assumption that 

  1. We don't move away from silicon
  2. This so-called complete stop will last for decade(s)
  3. A ludicrous amount of money will be thrown at making it work

I honestly think it's more likely that we'll see the kind of progress outlined in this article:

 

https://www.cnet.com/news/life-after-silicon-how-the-chip-industry-will-find-a-new-future/

I was also going on the assumption that we either don't move away from silicon or it takes quite some time to find an effective alternative.

 

Again, not really trying to imply a complete stop for technology, just for us as consumers.

 

Assuming that these new technologies are extremely expensive in their first few years (which is likely, as every new technology is expensive when it is first introduced), we can basically expect most PC consumers, including enthusiasts, to have no reason to purchase them until they improve, mature, and come down in cost sometime in the future.

 

Edit: Perhaps this can be explained in another way. Take a look at every single "big leap forward" in mankind's history. When the microchip was first invented, it took many years of development and research before everyday people like us were able to get an affordable computer in our homes. When the automobile was first invented, it took many years before everyday people could afford one for themselves. It took us about 20 years of testing space rockets before we actually put an astronaut on the Moon.

 

And that is exactly the point. When a REALLY big game-changing technology is invented, it usually takes at least a decade, if not longer, of research, testing, maturation, and manufacturing improvements before the vast majority of the consumer market ever gets a chance to actually own that technology for themselves.

 

History has proven itself to repeat; we all know this. Intel JUST invested $5 billion USD in 10nm manufacturing. This suggests that more than likely they EITHER aren't even researching what to do after transistors reach their physical limit, or their R&D on the subject is very limited at this time, which means that come the 2020s, it probably won't be ready, and we will be forced to wait it out.

 

So what we are talking about, then, is a strong possibility of another one of mankind's "big leaps" as far as computing is concerned, probably in the early 2020s, which is only a few years from now.

Top-Tier Air-Cooled Gaming PC

Current Build Thread:

 

OP updated with this ^ explanation. Really, I should have included this from the beginning; maybe people would not have been so confused lol

Top-Tier Air-Cooled Gaming PC

Current Build Thread:

 

I think the most amusing part of this kind of thread is how nostalgic it makes me feel. I have lived through almost exactly the same discussion three times in my life: once in the late 1970s/early 1980s, once in the early-to-mid 1990s, and again starting about 4-5 years ago. The argument is the same: "computers are as fast as they need to be" / "we are reaching the end of computer advancement because of the physical limits of the materials." Then the original poster goes into a long, involved discussion about why we can't possibly make CPUs (or memory, or disks) any faster or smaller due to the actual physical limitations of the materials currently in use, and adds on how the existing computers are as fast as we could possibly need for the foreseeable future, so we could just stop advancing chip manufacturing and live with the current generation forever, since we don't need better than (68000, 80286, 80386, 68010... fill in your own favorite).

 

Sorry guys, I can't get excited about the physical limits of the material. I have heard it all before in exactly the same arguments about the death of Moore's law: yes, we will eventually run up against the limits; no, I don't believe we are there yet. We are still on basic materials. We haven't looked at anything more interesting than silicon, which is one of the easiest materials to work with, and there are other options that I am sure are out there in the labs...

3 hours ago, AncientNerd said:

I think the most amusing part of this kind of thread is how nostalgic it makes me feel. I have lived through almost exactly the same discussion three times in my life: once in the late 1970s/early 1980s, once in the early-to-mid 1990s, and again starting about 4-5 years ago. The argument is the same: "computers are as fast as they need to be" / "we are reaching the end of computer advancement because of the physical limits of the materials." Then the original poster goes into a long, involved discussion about why we can't possibly make CPUs (or memory, or disks) any faster or smaller due to the actual physical limitations of the materials currently in use, and adds on how the existing computers are as fast as we could possibly need for the foreseeable future, so we could just stop advancing chip manufacturing and live with the current generation forever, since we don't need better than (68000, 80286, 80386, 68010... fill in your own favorite).

 

Sorry guys, I can't get excited about the physical limits of the material. I have heard it all before in exactly the same arguments about the death of Moore's law: yes, we will eventually run up against the limits; no, I don't believe we are there yet. We are still on basic materials. We haven't looked at anything more interesting than silicon, which is one of the easiest materials to work with, and there are other options that I am sure are out there in the labs...

I still expect a break or major slowdown in CPU advancement, just based on the fact that a major change in history always takes time. Perhaps I'm overestimating how long it will take or how drastic the effects will be, but perhaps I am not. Only time will tell.

Top-Tier Air-Cooled Gaming PC

Current Build Thread:

 

10 minutes ago, WallacEngineering said:

I still expect a break or major slowdown in CPU advancement, just based on the fact that a major change in history always takes time. Perhaps I'm overestimating how long it will take or how drastic the effects will be, but perhaps I am not. Only time will tell.

It is possible that this time is different, but I can look back and see exactly the same argument just in my lifetime. Then go back to the late '60s/early '70s and you see the same thing with minicomputers, or the late '50s/early '60s with mainframes; the computer industry is basically (still) an infant. There is a lot of potential for new growth and innovation. It doesn't have to be the same companies or countries; in fact, it wouldn't surprise me to see the next breakthrough come out of China, Korea or Brazil. It also wouldn't surprise me if the next breakthrough is something completely off in a direction the mainline chip companies are not looking: see IBM in the '50s/'60s, DEC in the '60s/'70s, Intel/Motorola/TI in the '70s/'80s. We are past due for some small unknown to have a breakthrough that disrupts the computer industry.

On 3/10/2018 at 7:20 PM, mr moose said:

I started a thread about this a while back in the CPU section, and people expressed the same concerns, but the reality is: are we going to hold onto all legacy forever? I think a new OS and a new CPU uArch (maybe purely 64-bit with only essential instructions) could be a start.

 

EDIT: I mean people are going to have to accept that 32-bit x86 won't be around forever and everyone should start planning. Who knows how many issues are lurking in the older stuff just waiting to be exploited? And we need CPU real estate if we can't find an alternative to silicon.

You know, stripping the legacy x86 set out isn't a bad idea; you could actually allow for this in software. Imagine this: Intel increases the performance of its CPUs dramatically by removing the legacy 32-bit x86 instruction set, freeing up space to crank up performance for 64-bit only. MS etc. could then perhaps emulate the 32-bit x86 portion for the few apps people still use that require it. Hopefully any negative effect on performance from the emulation would be negated by the increased performance overall; then x86 apps would run like they always have, but x64 apps could take advantage of the newer, higher speeds.

38 minutes ago, Phantasm Exterminator said:

You know, stripping the legacy x86 set out isn't a bad idea; you could actually allow for this in software. Imagine this: Intel increases the performance of its CPUs dramatically by removing the legacy 32-bit x86 instruction set, freeing up space to crank up performance for 64-bit only. MS etc. could then perhaps emulate the 32-bit x86 portion for the few apps people still use that require it. Hopefully any negative effect on performance from the emulation would be negated by the increased performance overall; then x86 apps would run like they always have, but x64 apps could take advantage of the newer, higher speeds.

We did successfully move away from DOS.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

I don't think the physical limit has anything to do with heat, power, or tech limitations.

 

It's because of how transistor switches work: they rely on bridging gaps with electrons to make connections (turning a 0 into a 1). Below around 5nm, quantum tunneling becomes an issue and electrons can cross gaps they're not supposed to cross.

 

I think we'll start seeing bigger sockets pretty soon.

For ~7nm you have to use electron beams (technically a type of particle accelerator). It's slow because you are literally drawing/writing the pattern in your resist material, as opposed to photolithography, which prints an entire image at once. E-beam lithography is kind of like a super-high-tech Etch A Sketch.

 

As others have pointed out, there isn't a lot of need for 10+ core CPUs for what most people are doing. We might see CPUs becoming more like GPUs (truly massively parallel computing). The best consumer-grade CPUs can run at about 1 TFLOPS, but high-end GPUs are at ~12 TFLOPS.
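
Roughly where that ~1 TFLOPS CPU figure comes from, assuming an AVX2+FMA core (8 single-precision lanes x 2 ops per FMA x 2 FMA units = 32 FLOPs per cycle; the exact per-cycle number varies by microarchitecture):

```cpp
// Rough peak-FLOPS arithmetic behind the "~1 TFLOPS CPU vs ~12 TFLOPS GPU" comparison.
// The per-cycle figure assumes AVX2 with FMA: 8 single-precision lanes * 2 ops (mul+add)
// * 2 FMA units = 32 FLOPs per core per cycle. Core count and clock are illustrative.
#include <cstdio>

int main() {
    double cores = 8, clock_ghz = 4.0, flops_per_cycle = 32;
    double cpu_peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0;
    std::printf("CPU peak ~= %.2f TFLOPS (single precision)\n", cpu_peak_tflops);
    // 8 * 4.0 * 32 = 1024 GFLOPS, i.e. about 1 TFLOPS -- the same ballpark as the figure
    // above, while a big GPU gets its ~12 TFLOPS from thousands of simpler ALUs running
    // at lower clocks.
}
```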

 

Compound semiconductors might start making an appearance. They were predicted a long time ago, and Cray Research planned a GaAs-based supercomputer back in the late 1980s/early 1990s. The silicon industry had a few more tricks up its sleeve and stayed well ahead. Compound semiconductors have much higher charge-carrier mobilities, which means that clock speeds can go much higher if you're using GaAs or InP. Your cell phones all have GaAs circuits in them for communication. Imagine a 100+ GHz CPU!!!

 

I'm not sure consumers are going to care so much about feature size if something operates 10-20x faster (just my $0.02)

9 hours ago, LED_Guy said:

For ~7nm you have to use electron beams (technically a type of particle accelerator). It's slow because you are literally drawing/writing the pattern in your resist material, as opposed to photolithography, which prints an entire image at once. E-beam lithography is kind of like a super-high-tech Etch A Sketch.

 

As others have pointed out, there isn't a lot of need for 10+ core CPUs for what most people are doing. We might see CPUs becoming more like GPUs (truly massively parallel computing). The best consumer-grade CPUs can run at about 1 TFLOPS, but high-end GPUs are at ~12 TFLOPS.

 

Compound semiconductors might start making an appearance. They were predicted a long time ago, and Cray Research planned a GaAs-based supercomputer back in the late 1980s/early 1990s. The silicon industry had a few more tricks up its sleeve and stayed well ahead. Compound semiconductors have much higher charge-carrier mobilities, which means that clock speeds can go much higher if you're using GaAs or InP. Your cell phones all have GaAs circuits in them for communication. Imagine a 100+ GHz CPU!!!

 

I'm not sure consumers are going to care so much about feature size if something operates 10-20x faster (just my $0.02)

This... basically, before the last breakthrough that dropped the scale of individual components, there was a lot of interest in switching completely to GaAs as a substrate. As you state, there are advantages and disadvantages to both; however, if we do hit the limits of Si then there may be a switch to GaAs in general, rather than just in the high-frequency parts it is used in now. That will raise costs, though: GaAs is significantly more expensive to process, has traditionally had much lower yields than Si, and is a more expensive material to actually make dies out of... all of which adds up to increased chip cost if there is a general switch to GaAs, unless one or more of those factors change.

Something that hasn't been mentioned yet is that there are a lot of architectural changes that could be made to increase performance. Things like managing multicore processors are still very hard problems being worked on by thousands of people. There are OoOE algorithms that haven't really been implemented in hardware yet and that don't require compiler support for out-of-order execution.

There are even possibilities to speed up the simplest components. Perhaps there are some ISAs that haven't been explored yet that could be better than x86 (almost anything could be better). And how about the security problems in our processors?

I've been thinking lately about how it might be possible to build a microprocessor out of a network of smaller, purpose-built microprocessors (nano-processors?). This might make it much easier to implement certain things, or easier to change things with updates, because, for example, the control unit wouldn't be tied to knowing how the ALU works; all it would need to know is that the ALU supports whatever arithmetic and logic instructions exist, and forward those to the correct place. Something like this would mean that engineers could essentially just "swap out" components when a better one comes along, as long as the components share the same communication protocols. It also means that experimentation could happen much more easily, since changing something about how the ALU or FPU works wouldn't propagate changes all throughout the processor.
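
In software terms, the idea might look something like the sketch below: the "control unit" only speaks a small message protocol, so any execution unit implementing the interface can be swapped out. Every name and the message format here are invented purely for illustration.

```cpp
// Loose software analogy for the "network of purpose-built nano-processors" idea:
// the control unit only forwards messages over a shared protocol, so any execution
// unit implementing the interface can be swapped in. All names here are invented.
#include <cstdint>
#include <cstdio>

struct Message {                       // the shared "protocol" between units
    std::uint8_t  opcode;              // e.g. 0 = add, 1 = and
    std::uint64_t a, b;
};

class ExecutionUnit {                  // anything the control unit can talk to
public:
    virtual ~ExecutionUnit() = default;
    virtual bool accepts(std::uint8_t opcode) const = 0;
    virtual std::uint64_t execute(const Message& m) = 0;
};

class SimpleALU : public ExecutionUnit {
public:
    bool accepts(std::uint8_t op) const override { return op == 0 || op == 1; }
    std::uint64_t execute(const Message& m) override {
        return m.opcode == 0 ? m.a + m.b : (m.a & m.b);
    }
};

// The "control unit": forwards messages without knowing how the ALU works inside.
std::uint64_t dispatch(ExecutionUnit& unit, const Message& m) {
    return unit.accepts(m.opcode) ? unit.execute(m) : 0;
}

int main() {
    SimpleALU alu;                     // could be replaced by a better unit later
    std::printf("%llu\n", static_cast<unsigned long long>(dispatch(alu, {0, 2, 3})));
}
```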

ENCRYPTION IS NOT A CRIME

Hasn't it been known for some years now that silicon, at least by itself, is getting close to the point where gallium or some gallium compound will be needed to get smaller than 7nm?

Samsung has a 10nm process. Interesting that Intel has been struggling to get below 14nm.

AMD has plans for 12nm and eventually a 10nm process. Guessing Samsung will likely be the fab for at least the future 10nm chips. Idk if TSMC or GloFo can go below 14nm yet, since I'm guessing those are still AMD's two major fabs.

 

Edit: Not a lot of info on the foundries that fab AMD chips lol

But it looks like GloFo and TSMC will be able to get small enough.

a Moo Floof connoisseur and curator.

:x@handymanshandle x @pinksnowbirdie || Jake x Brendan :x
Youtube Audio Normalization
 

 

 
