Has PC tech become stagnant? I think so.

Uttamattamakin
Solved by Mira Yurizaki:

I would argue a lot of the apparent stagnation is simply because what we as consumers do with computers doesn't require a lot of power to reach acceptable levels of performance and quality. Especially since a lot of things moved to the web and we have phones to access the content, developers have to design things with even lower performance requirements in mind. Since we have apps and software that are meant to perform well on lower-power devices, it makes sense that putting them on higher-end hardware won't improve the apparent performance.

 

I'm sure a lot of people here haven't experienced the joys of actually worrying about whether their PC could play an MP3 or not. Or of a 1-minute uncompressed WAV file (that wasn't even CD quality!) eating all your RAM if you tried to work with it.

 

Also, I would argue the vast majority of the performance boost in the 90s came simply from adding more speed. For example, the OG Pentium launched with a 60 MHz model; two years later it had a 120 MHz model. Within seven years of that launch we went from 60 MHz to 1 GHz, roughly a 16-fold increase. And sure, there were architecture improvements that helped, but I'm not sure those played a more significant role in boosting general performance than the clock speed bump did. The speed bump from an i7-2600K to an i7-8700K assuming max turbo boost? About a 1.26x increase.
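
For reference, a quick back-of-the-envelope check of those ratios (a sketch; the 3.8 GHz and 4.7 GHz figures are the commonly listed max turbo clocks for those two chips, which land closer to 1.24x than 1.26x):

```python
# Rough clock-speed ratios for the comparison above.
pentium_1993_mhz = 60          # original Pentium launch model (1993)
pentium_2000_mhz = 1000        # 1 GHz reached around 2000
i7_2600k_turbo_ghz = 3.8       # Sandy Bridge (2011), max turbo
i7_8700k_turbo_ghz = 4.7       # Coffee Lake (2017), max turbo

print(f"60 MHz -> 1 GHz: {pentium_2000_mhz / pentium_1993_mhz:.1f}x")      # ~16.7x
print(f"2600K -> 8700K:  {i7_8700k_turbo_ghz / i7_2600k_turbo_ghz:.2f}x")  # ~1.24x
```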

16 hours ago, Crunchy Dragon said:

Certainly not for the past decade; there have been massive improvements compared to PCs from 2008.

 

RTX as a whole is a big example. Also compare architectural improvements on both GPUs and CPUs. PCs have been steadily improving over the years, but we're well past the point where each generation was exponentially better than its predecessor.

I dunno. I STILL run multiple LGA 775 machines and they're perfectly capable of running literally any modern program, even games, and they do it well. (That said, I do have an Extreme processor installed in them.)

 

I don't think PC improvements have been that large in the last decade. 

 

RTX is brand new. Sure, the research for it probably has been going on for a while, but that's only one example in a decade. Can you think of any more? 


I mean hell, we STILL can barely run Crysis at max settings with a single-GPU system at 4K.

 

https://www.pcgamer.com/10-years-later-we-can-finally-run-crysis/

 

That's.... terrible. 10 freaking years? Look at the 10 years previous to that: Half-Life, Baldur's Gate (the original one). The difference in graphics between those two decades is staggering.


Just now, Nicnac said:

oh is this the weekly "have we reached the limit of Moore's law" post?

I mean.... it's a good question. It's happening. We're reaching the limit of size for transistors. Maybe we'll squeak out another decade, but still, SOMETHING needs to happen if computers are to continue to advance.


5 minutes ago, corrado33 said:

I dunno. I STILL run multiple LGA 775 machines and they're perfectly capable of running literally any modern program, even games, and they do it well. (That said, I do have an Extreme processor installed in them.)

 

I don't think PC improvements have been that large in the last decade. 

They weren't. The 90s were basically the only time exponential improvements happened.

 

Also funny thing, I don't think stagnation is exactly a new thing either. The early home computers basically used the same 6502 or Z80 for almost 10 years. That's stagnation.

 

2 minutes ago, corrado33 said:

I mean hell, we STILL can barely run Crysis at max settings with a single-GPU system at 4K.

 

https://www.pcgamer.com/10-years-later-we-can-finally-run-crysis/

 

That's.... terrible. 10 freaking years? Look at the 10 years previous to that: Half-Life, Baldur's Gate (the original one). The difference in graphics between those two decades is staggering.

Because the developers were betting on a future where Intel's Tejas was a thing and 10 GHz CPUs would be possible. The game, compared to modern software, is horribly designed. It was also using early implementations of graphical features which were more in the "prove it works" phase than the "make it perform well" phase.


On 12/4/2018 at 12:34 PM, wkdpaul said:

That's mostly because back then it was all single-core CPUs; though there were dual-CPU solutions, they weren't very well supported. Now, because everything is multicore-capable, the "advances" have shifted to other technologies: it all comes from what I would call "consumables": RAM, storage, GPUs.

 

Not so long ago (2 years, maybe less) you could still game on a Core 2 Quad, as long as you had 8GB of RAM and a nice recent GPU (GTX 7 or 9 series, or an AMD R7 or R9). But if you had one of those Core 2 Quads in its original form, that would've been painful to say the least!

 

Imagine having a Q6600 with 4GB of slow DDR3, a 320GB HDD and a 512MB GPU, all on Windows XP ... YET, the same Q6600 with an SSD, 8GB of RAM, and an R7 270 2GB on Windows 7 wasn't that bad (that's what I gave my brother a few years ago and he loved that budget gaming PC!).

 

Now it's all about 2nd-gen Core chips, and I'm guessing that in a few years those won't look so good anymore and we're going to look at the 3rd- and 4th-gen Core chips as the used budget solutions, given enough time. ;)

I still use a Core 2 Q6600 OC'd to 3.0 GHz with an SSD, 8 gigs of RAM, and a Radeon HD 6750, and I play Destiny 2 with almost zero lag in large-scale fights. ;) When I build a computer, it lasts. I'm saving up for a new Ryzen build, but after the leak of the new Ryzen CPUs I have to wait and see if it's true before I buy anything now.


15 minutes ago, DavidKalinowski said:

I still use a Core 2 Q6600 OC'd to 3.0 GHz with an SSD, 8 gigs of RAM, and a Radeon HD 6750, and I play Destiny 2 with almost zero lag in large-scale fights. ;) When I build a computer, it lasts. I'm saving up for a new Ryzen build, but after the leak of the new Ryzen CPUs I have to wait and see if it's true before I buy anything now.

Being nitpicky here, but it may not be so much that it lasts long as that you don't have a demanding set of requirements. I'm pretty sure that rig isn't playing the game at 1440p, 100 FPS, and maximum details.


2 minutes ago, M.Yurizaki said:

Being nitpicky here, but it may not be so much that it lasts long as that you don't have a demanding set of requirements. I'm pretty sure that rig isn't playing the game at 1440p, 100 FPS, and maximum details.

Of course not lol, it does play at 1920x1200 though. I have never turned on the FPS counter, so I can't comment on the actual FPS, but it's got to be at least 30 or so. Everything else is whatever the game auto-suggested. But it has lasted a long time; I built this computer before my son was born, and he's 10 now lol. I've replaced the HDD with an SSD and swapped out the graphics card once. (I overclocked my old GTX 280 and fried it.) I'm half tempted to go ahead and buy the graphics card I was going to get for my new build and see if it can push the game to at least all-high settings.


1 hour ago, DavidKalinowski said:

 I've replaced the HDD with an SSD and swapped out the graphics card once. 

That may be the one truly revolutionary change in personal computers in the last decade. In the '80s I had a PC that had DOS and a GUI called DeskMate in ROM. Between that computer and my first Surface Pro with an SSD, most of my computing life was spent waiting on the disk.
 

The next big change would be when we have multi-gigabit broadband everywhere. When accessing the internet compares more favorably with accessing your local storage, that will make a huge difference. (Hold this thought.)

 

On 12/5/2018 at 1:03 PM, Nicnac said:

oh is this the weekly "have we reached the limit of Moore's law" post?

Sort of... not really. Moore's Law is about being able to pack more transistors onto the silicon.

What we have observed is that the circuits laid out in the silicon now are fundamentally the same as they were a decade ago. Smaller, YES; more power efficient, YES; a bit more powerful year on year, YES. But we are merely refining what we have a little bit rather than innovating something daring and new. (Leaving aside things like RISC-V and GPUs, where there is still some risk-taking to introduce new circuits that simply didn't exist before the latest generation.)

The last thing I will say about this (at least for a while) is that there seem to be good arguments on both sides. To a tech enthusiast, every new increment of performance feels revolutionary. To most of the rest of us, a laptop from 2018 and one from 2008 (especially if you were to put an SSD in it) are both about as useful.


The problem with desktop computers, and likely extending to servers and other such computers, is that we've had decades to figure out what just works. One could say that, going into this decade, practically all of the "obvious" innovations that won't break existing systems have been pretty much hammered out (and now we're finding out some of them aren't exactly free lunches). There are still plenty of untried methods for general-purpose computing, but it's been argued that much of the design of modern processors was built around how C works. With the way we've built our computer empire, the world will have to go kicking and screaming to use anything different.


Needed to add some things for clarification.

 

Back in the old days (when 4 MB of RAM cost $200), we were indeed riding Moore's Law with little respite. Parts were expensive as hell but fast for their time, only to be obsolete and too slow a few years later. And RAM was also a big problem (not just hard drive space).

 

But then something happened.

Around the time MMX was announced and Intel was releasing processor bins significantly faster than the lesser bins (yet much more expensive), there was apparently too much demand for the cheaper parts, so a legendary S-spec came out: the Pentium 166 MMX, but not just any 166 MMX, a certain S-spec. Someone found that these were simply downclocked 233 MMXs. I don't remember if this was posted on Usenet or on whatever forums existed at the time (overclockers.com?), but 100% of these processors would hit 233 MHz without failures (similar to 2600Ks hitting 4.5 GHz), although I do not remember if there was a Vcore jumper involved as well. You just had to use the 38 MHz (was this half of a 75 MHz FSB? Or did that exist yet?) bus jumper. IDE was connected to this bus and most hard drives had no issues at 38 MHz (or 75, if that was the FSB). In fact most of these processors would also run at *262.5 MHz* with the 41.5 MHz (83 MHz??) bus jumper, giving you better-than-233 MMX speeds for much, much cheaper. So it did seem that Intel was releasing downclocked (lower-multiplier) 233 MMX chips to fulfill market demand.
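
For reference, a minimal sketch of the FSB-times-multiplier arithmetic behind those numbers (the jumper details are as half-remembered above; the comments only show which FSB/multiplier combinations produce the quoted clocks, and assume the ~38/41.5 MHz settings are half of 75/83 MHz front-side buses):

```python
# Socket 7 core clock = FSB x multiplier; the PCI/IDE bus typically ran at FSB / 2.
def core_clock(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

print(core_clock(66.6, 2.5))  # ~166 MHz  -> stock Pentium 166 MMX
print(core_clock(66.6, 3.5))  # ~233 MHz  -> stock Pentium 233 MMX (the overclock target mentioned)
print(core_clock(75.0, 3.5))  # 262.5 MHz -> the higher overclock mentioned (PCI at ~37.5 MHz)
print(core_clock(83.3, 3.5))  # ~292 MHz  -> what an 83 MHz bus would give at the same multiplier
```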

 

This was the beginning of the legendary 50% overclocks, and the first processor capable of them.

 

The problem was that even though these speed overclocks were VERY noticeable (especially for MS-DOS games and arcade and console (SNES) emulators), your fun was short-lived, since Intel was quickly making processors that would be faster than your old one's highest overclock (for example, the Pentium II 300 MHz). And then a killer game would come along to bring your CPU to its knees and force you to upgrade (aka the "hardware genie"), if not the video card (never mind RAM and HDD prices). Ignoring dogs like Ultima 7 (from 486 days) and Strike Commander (which required a Pentium and still ran like a dog), that 166 MMX killer was the great game Unreal.

 

But you still had the 50% overclocks. The next "166 MMX" was the legendary Celeron 300A @ 450 MHz. This was more legendary because all of them could reach this speed AFAIK, while the 166 MMX didn't get this status because it relied on finding an EXACT S-spec (which meant buying in person or hoping a retailer would read it to you over the phone). More 50% overclocks and more insane speed boosts. And more short-lived fun as new products came out that were even faster and new games brought the last gen to its knees.

Now you had Quake and Unreal Tournament and games I can't remember (Drakan: Order of the Flame?) doing the job (ignoring the video card wars and 3dfx again). But sure enough another legendary 50% overclock came out: the Coppermine 600E @ 900 MHz. And that got eclipsed soon by the aborted Pentium III 1 GHz (later re-released), and then the Socket 370 change and the Pentium III-S 1.4 Tualatin, and then AMD jumped into the game too. Nasty times.


Then came NetBurst. But what about those old Pentium IIIs? Well, more killer games destroyed them, forcing people to upgrade. Those were Medal of Honor: Allied Assault and the great one, Battlefield 1942, which had people scrambling for Pentium 4s. And ATI reached its glory days with the Nvidia-killing 9700 Pro and 9800 Pro.

 

Then a change happened which signaled the beginning of the end of Moore's Law.

The roughly doubled IPC of Conroe (Core) and the drop in clock speed.

A Core 2 X6800 was about twice as fast as a Pentium 4 3.4C at the same clock speed. But now the 50% overclocks were gone.

 

Until Sandy Bridge. Since the only massive IPC boost was going from NetBurst to Core, you no longer had the huge gains anymore, just more cores and hyper-threading. But the 2600K brought back the large overclocks of years past (even if many chips couldn't hit 50%, and even if a large number of users saw degradation after years of 1.4V+ @ 5 GHz and had to back down a bit). Still, you no longer had the MHz (tracking transistor count loosely) doubling every 2 years and making the old chips obsolete as new games were coded for them, nor did you have the massive IPC boosts either.

So the end result was those 2600Ks lasting longer than any computers in history, except the old 8-bit systems which saw long life (like the Apple IIs and Commodore 64s), and just needing video card, storage and RAM upgrades to remain viable. That's what you have now: just more cores and small IPC bumps which eventually add up.

But note that we were stuck on 4 cores and 8 threads on consumer chips for basically 7 years (if you want to include the X platform with hyper-threading but without the 6-core parts). Then in the space of two years, Intel suddenly jumped to 6 cores in 2017 for consumer chips because AMD blindsided them with Zen, then 8 cores in 2018, something they'd never done before, while moving their Xeon line to HEDT on the moar-corez front to fight Threadripper. And that's where we are today.

 

While I won't call this AMD's Thunderbird/Athlon 2.0 moment, they are definitely pulling a 9700 Pro on Intel here.

 

 


What is stagnating things:

 

Things being "fast enough" already.

No competition for Nvidia.  See what happened with Intel once AMD became competitive again?  We got more progress in 2 years than the last 10.

Insistence that things be smaller and more power efficient (i.e. companies cheaping out on components).

People actually preferring smaller, slower, underpowered machines (phones, tablets, etc.).



On 12/5/2018 at 2:04 PM, corrado33 said:

SOMETHING needs to happen if computers are to continue to advance.

I still think that after that, chips will start to get larger, like we see on server grade platforms.

 

Once we reach the end of Moore's Law, then we'd have to move to quantum tech or something like that, so unless companies like AMD and Intel are completely shelling out for R&D on that, it might be a while before quantum tech hits the consumer market.



1 minute ago, Crunchy Dragon said:

I still think that after that, chips will start to get larger, like we see on server grade platforms.

 

Once we reach the end of Moore's Law, then we'd have to move to quantum tech or something like that, so unless companies like AMD and Intel are completely shelling out for R&D on that, it might be a while before quantum tech hits the consumer market.

Question: Why don't we just make larger chips now? I know communication across a bigger chip may be slower, but it'd still be a crap ton faster than any other communication on the board. Would this mean slower chip frequencies?


On 12/5/2018 at 1:48 AM, Uttamattamakin said:

Well... the fact you can seriously say that is proof of the stagnation. 

Compare: a typical PC you could buy as a consumer in 1988 would likely have had an 8086 at 8 MHz, or a 286 at 10 MHz if you were a real baller. The graphics would be 640x480 with four colors, or, IF you bought a PCjr or a Tandy 1000 series, 640x480 with 16 colors. Fast forward just a couple of years and you'd have a 386 at 25 MHz, 2MB of RAM, and a 40MB HDD, able to run Windows 3.1, get on the early world wide web, and just barely use a CD-ROM. The graphics would be 800x600 and 256 colors. Two more years and it would be 1024x768 and 16 million colors. Etc.

What I am saying is: it used to be that an upgrade to a new computer meant being able to do things that were previously impossible.

 

Compare the situation now. My mom's MacBook from 2008 can do everything my 2017 HP Spectre x360 can do. It just does it SLOWLY (but still fast enough to use). Compared to how big a leap an upgrade used to be, it is not really worth it to buy a new PC until the old one breaks.

Have to agree that this is how I feel today, i.e. it's not worth it to upgrade for a small performance improvement. That's probably the reason I stuck with my i7-3770; for my use case of not-so-AAA games and web browsing, I can basically use my rig until it breaks.



9 minutes ago, corrado33 said:

Question: Why don't we just make larger chips now? I know communication across a bigger chip may be slower, but it'd still be a crap ton faster than any other communication on the board. Would this mean slower chip frequencies?

There's no real need for larger chips other than the insanely high core count chips that actually need the space.



The upcoming chiplets are tiny (smaller than a fingernail) and 8/16 cores are currently way more than most people have.



On 12/4/2018 at 2:47 PM, M.Yurizaki said:

I would argue a lot of the apparent stagnation is simply because what we as consumers do with computers doesn't require a lot of power to reach acceptable levels of performance and quality. Especially since a lot of things moved to the web and we have phones to access the content, developers have to design things with even lower performance requirements in mind. Since we have apps and software that are meant to perform well on lower-power devices, it makes sense that putting them on higher-end hardware won't improve the apparent performance.

Ding ding ding!

Obviously there are always going to be exceptions, but yeah, in terms of PCs a similar thing has happened with storage. Storage requirements have either flatlined or gone down.

Video downloading is not nearly as common, and streaming has taken over. Music is a similar story, and even then those files haven't been getting larger unless the user goes to lossless formats. Image files have long since been negligible, and even then are often just put on social media and then deleted.
The only exception is video games.

I'm still kinda surprised the 250GB SSD I gave my mom a while back hasn't been filled up. I was honestly expecting to have to hook up a second drive for storage by now.


13 hours ago, corrado33 said:

Question: Why don't we just make larger chips now? I know communication across a bigger chip may be slower, but it'd still be a crap ton faster than any other communication on the board. Would this mean slower chip frequencies?

Larger chips mean lower yields.
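
To illustrate why, here's a minimal sketch using a simple Poisson defect-density model (the defect density and die areas are made-up example numbers, not figures for any real process):

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2=0.001):
    """Expected fraction of dies that are defect-free under a Poisson defect model."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

for area in (100, 200, 400, 800):  # die area in mm^2
    print(f"{area:4d} mm^2 -> {poisson_yield(area):.1%} of dies good")
# Yield falls exponentially with die area, so a big die costs
# disproportionately more per working chip than a small one.
```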

 

13 hours ago, ewitte said:

No competition for Nvidia.

And yet NVIDIA has made lots of progress without AMD pressuring them (at least on the high-end). The only major gaffe I'm seeing from them is their current pricing model.

 

13 hours ago, ewitte said:

See what happened with Intel once AMD became competitive again?  We got more progress in 2 years than the last 10.

I'll give you that one.

13 hours ago, ewitte said:

Insistence that things be smaller and more power efficient (i.e. companies cheaping out on components).

I don't see how either of those necessarily means companies cheaping out on components. It's actually more expensive to do both.

13 hours ago, ewitte said:

People actually preferring smaller, slower, underpowered machines (phones, tablets, etc.).

Because they're more convenient, and really, most people don't need anything more powerful than an i5-level processor if all they're going to do is watch YouTube and goof around on SNS sites all day.


Fewer and smaller components are cheaper from a *resource* expenditure perspective; you just have more people with their hands in the pot, and people paying more for convenience.

 

Nvidia made progress not for us but to be successful selling to businesses. The entire GPU market is pretty much hand-me-downs for consumers. We get scraps.



Stagnant? Not really. What was done 10 years ago on a cluster full of Xeons, costing companies millions and days of computing, can now be done in hours on a few GPUs...

Where can we go from here? Be more efficient in terms of power. Today we can have a computer on an overgrown USB stick! Try that with a Pentium IV. The days of "MOOAR PERFORMANCE" are gone, unless you are doing some heavy lifting.


41 minutes ago, PeterBocan said:

Stagnant? Not really. What was done 10 years ago on a cluster full of Xeons, costing companies millions and days of computing, can now be done in hours on a few GPUs...

Where can we go from here? Be more efficient in terms of power. Today we can have a computer on an overgrown USB stick! Try that with a Pentium IV. The days of "MOOAR PERFORMANCE" are gone, unless you are doing some heavy lifting.

Are either of those PCs, though? In high-performance computing, GPUs have revolutionized parallel processing. With Nvidia CUDA installed and Linux running, my laptop using MATLAB or Mathematica is more powerful than a workstation from 10 years ago. That is with a very specific science workload, where a small percentage gain can mean huge time savings.
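
As a rough illustration of that kind of GPU-vs-CPU gap, here is a minimal sketch (assuming NumPy plus the CuPy library and an Nvidia GPU with CUDA installed; the matrix size is arbitrary and the speedup will vary wildly by hardware):

```python
import time
import numpy as np
import cupy as cp  # requires an Nvidia GPU with CUDA

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
np.matmul(a_cpu, a_cpu)            # matrix multiply on the CPU
cpu_s = time.perf_counter() - t0

a_gpu = cp.asarray(a_cpu)          # copy the data to the GPU
cp.matmul(a_gpu, a_gpu)            # warm-up run to load kernels
cp.cuda.Device().synchronize()

t0 = time.perf_counter()
cp.matmul(a_gpu, a_gpu)            # the same multiply on the GPU
cp.cuda.Device().synchronize()     # wait for the GPU to actually finish
gpu_s = time.perf_counter() - t0

print(f"CPU: {cpu_s:.2f} s, GPU: {gpu_s:.3f} s, speedup ~{cpu_s / gpu_s:.0f}x")
```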

 

For the average person, the workload is GAMING, running Office (even on a moderate-size spreadsheet), and browsing with multiple tabs while streaming a video.

I'd wager that even a first-gen AMD Athlon 64 could do all of the above, just VERY SLOWLY. I mean, people did those things with that chip. Memba this? In 2001 those computers were doing most of what we do now... and the main reason they are obsolete is that most were 32-bit and all OSes are now 64-bit.

 

I mean heck... one guy even put Windows 7 and 8 (10 wouldn't work, but Ubuntu does) on a TC1100. It looks like it works just fine. No AAA GAMING!!! (fist pump) ...but it works. It seems usable. There are similar videos about old Macs still working and being usable, especially with an SSD.

 This brings me to the way in which computers HAVE improved by more than a few %.

They have gotten lighter, thinner and more energy efficient.  

 

That seems to be where all the innovation was in the personal computer space. We all wanted a PADD like the one Jean-Luc Picard would read from on the bridge while sipping a cup of Earl Grey. We wanted something that was thin, light, and networked to some beefier computer somewhere out there.

Question for the room: How much lighter and thinner can we make PCs before it is not practical? Do we really want to carry around a $2,000 piece of semi-transparent glass that will shatter when dropped or tear if bent too much?

 


3 hours ago, M.Yurizaki said:

What scraps?

Cut-down defective enterprise cards, cheaper older designs, etc.



Well, how many normal consumers would recognize the difference between an NVMe and a SATA SSD, versus between ANY SSD and an HDD? Even budget enthusiasts mix and match all three types for different storage roles.


