
Is there a finite limit to PC speed?

threadysparrow

Is there a point when SSDs and CPUs become so fast that people will not be able to notice the difference between generations? If so, what will they do then?

 

So far, one of the goals of many companies in the tech world has been to make things faster. Well, what happens when fast becomes so fast that it isn't noticeable by our human perceptions?

I get 60 frames at 1080p on a dual core APU. Ask me how.

AMD FX 8350 CPU / R9 280X GPU / Asus M5A97 LE R 2.0 motherboard / 8GB Kingston HyperX Blue 1600 RAM / 128G OCZ Vertex 4 SSD / 256G Crucial SSD / 2T WD Black HDD / 1T Seagate Barracuda HDD / Antec Earthwatts 650W PSU / Coolermaster HAF 922 Case


We'll soon get to the point where it won't be very cost-efficient to make things faster, because the average user can't notice a difference. However, there will always be people who think something isn't fast enough, so I doubt it will stop anytime soon.


If you beLIEEEEEEEEEEEEEEVE.

I'll just leave this here just to reassure you.

 

nuntuh.gif

Don't copy M-ursu...

 

IM TELLING

 

@M-ursu


Probably never. There are always improvements to be made to everything (graphics, AI pathing, other resource-intensive things), so as hardware improves, software demands increase to match it. It's like running Windows XP on today's computers: Windows 7 is an awesome OS, but XP takes the lead in terms of speed, because Windows 7 uses more resources to take advantage of the increase in hardware performance.

“The value of a college education is not the learning of many facts but the training of the mind to think”

 


According to Moore's law, no.

(That is, if the PCB is expanded to compensate for more transistors.)

 

This is my Lightsaber.          {[=]////]"[¬'/\Y/#####################################
This is my other Lightsaber. (T!!!!!!!T=:"|[\#####################################  #killedmywife 

 

 


You mean when we all have quantum home PCs and can render 4 hours of 12K material in 1 second? Then I guess there isn't really a point in getting faster :D


Should have said "Is PC speed finite?", since finite means that there is a limit, just as infinite means that there is no limit.

 

To answer your question, I'd say yes, eventually, but we are quite some way off that yet.

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


No, and I will tell you why. There will be a point when PCs get so fast that there is no noticeable difference on a given display method (e.g. a monitor), but there will always be new ways to consume media (e.g. the Oculus Rift) which will demand more power and speed from the system.
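The point above can be put in rough numbers: each new display method multiplies the pixels a GPU must push per second. The sketch below uses illustrative figures of my own (a 1080p and a 4K monitor at 60 Hz, and an early Rift-DK2-style headset with one 960x1080 panel per eye at 75 Hz), not anything benchmarked in this thread:

```python
# Rough pixel-throughput comparison across display methods.
# All figures below are illustrative assumptions, not benchmarks.

def pixels_per_second(width, height, refresh_hz, views=1):
    """Pixels the GPU must render every second for one display."""
    return width * height * refresh_hz * views

monitor_1080p60 = pixels_per_second(1920, 1080, 60)          # classic monitor
monitor_4k60    = pixels_per_second(3840, 2160, 60)          # 4K monitor
rift_dk2_75     = pixels_per_second(960, 1080, 75, views=2)  # one panel per eye

print(f"1080p60 : {monitor_1080p60 / 1e6:.0f} Mpx/s")
print(f"4K60    : {monitor_4k60 / 1e6:.0f} Mpx/s")
print(f"Rift DK2: {rift_dk2_75 / 1e6:.0f} Mpx/s")
```

Even before 4K, a headset's higher refresh rate (plus the low-latency requirement) raises the bar again, which is why new display methods keep the demand for speed alive.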

CPU: i5 3570K @4.5GHz    GPU: R9 290   MOBO: ASUS P8Z77-V  RAM: 8GB Corsair Vengeance   CASE: Arc Midi  PSU: XFX Pro 550W  HDD: 2TB Seagate Barracuda


At some point it will be more a race of how many tasks a PC can perform than of raw speed. Programs will grow in complexity with the hardware! 
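The "more tasks, not more speed" idea has a classic formalization the thread doesn't name: Amdahl's law, which says that once part of a program must run serially, adding parallel hardware hits a hard ceiling. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of a program parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A program that is 90% parallel can never run more than 10x faster,
# no matter how many cores you throw at it.
for cores in (2, 4, 8, 1_000_000):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

This is why the race shifts toward running more independent tasks at once rather than making one task infinitely fast.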

While they're shooting pistols, I'm dropping nuclear bombs


Once the manufacturing process gets small enough (10 nanometers, I think?), that's it for silicon, I believe. Correct me if I'm wrong, physicists of LTT. 


According to Moore's law, no.

(That is, if the PCB is expanded to compensate for more transistors.)

 

Moore's law can't last forever with silicon.


Is there a point when SSDs and CPUs become so fast that people will not be able to notice the difference between generations? If so, what will they do then?

 

So far, one of the goals of many companies in the tech world has been to make things faster. Well, what happens when fast becomes so fast that it isn't noticeable by our human perceptions?

Well, yeah. 

I mean, to be perfectly honest, we are already kinda there. Slick uses an i7-2600K with Hyper-Threading and has no interest in upgrading because there is no performance difference for his use case. If he were Diezel (Edzel, whatever), he would notice a difference in performance because his use case is different (video work). 

Just the same, the difference between SSDs does not mean jack for most people. How often are they gonna move 1GB+ files around to notice the speeds? Everything will pretty much be "instant".

I believe it will become as it always has been: binary. Either there will be noticeable performance differences, or there will not, and the line will be stark. 

What that means is that there will slowly no longer be a "grey area" of performance where you are not sure whether an upgrade would make a visible difference. Look at RAM speeds: they do not matter so long as you have 1600 (or 1333) or higher. Same thing with capacity. You either have enough, or you do not.

That is the future of performance in regards to GPUs. Once you get to the point where you "have enough", the difference between one product and another that both have "enough" performance is basically negligible. That is not true for GPUs right now, but it probably will be one day, just like it is for CPUs, RAM, SSDs, and the like.

That is how I see it anyway. 
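The RAM-speed point can be checked with quick arithmetic: peak DDR3 bandwidth is transfer rate x bus width x channels, so the gap between common speed grades is only a few GB/s. A rough sketch, assuming a standard 64-bit (8-byte) bus and a dual-channel setup:

```python
def ddr3_bandwidth_gb_s(transfer_rate_mt_s, channels=2, bus_width_bytes=8):
    """Theoretical peak bandwidth of a DDR3 configuration in GB/s."""
    return transfer_rate_mt_s * 1e6 * bus_width_bytes * channels / 1e9

# Common DDR3 speed grades, dual channel:
for rate in (1333, 1600, 1866):
    print(f"DDR3-{rate}: {ddr3_bandwidth_gb_s(rate):.1f} GB/s peak")
```

The theoretical ceilings sit close enough together that, past DDR3-1600 or so, few desktop workloads ever notice the difference.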

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


Remember, Bill Gates supposedly once said that he couldn't imagine why an average user would ever need more than 640 kB of RAM. 

PC performance increases won't stop. There is a physical limit to transistor size, but by the time we hit that limit, other technologies will be available. 

Tasks for PCs will also become more complex, so performance will have to keep increasing. Imagine that everyone wants an AI on their PC; for that you'd need far more computing power. 

 


CPU:Intel Xeon X5660 @ 4.2 GHz RAM:6x2 GB 1600MHz DDR3 MB:Asus P6T Deluxe GPU:Asus GTX 660 TI OC Cooler:Akasa Nero 3


SSD:OCZ Vertex 3 120 GB HDD:2x640 GB WD Black Fans:2xCorsair AF 120 PSU:Seasonic 450 W 80+ Case:Thermaltake Xaser VI MX OS:Windows 10
Speakers:Altec Lansing MX5021 Keyboard:Razer Blackwidow 2013 Mouse:Logitech MX Master Monitor:Dell U2412M Headphones: Logitech G430

Big thanks to Damikiller37 for making me an awesome Intel 4004 out of trixels!


The day will come when pixels are an atom wide, we have clocked the fastest materials, and transistors are only a couple of atoms across and produce little heat.

DEM PPIs


I had a talk with some electronics guy about this subject. It was a while ago, and I was a bit drunk, so I can only go on what I seem to remember: computers as we know them today will eventually be limited by the speed of light. However, for larger and more complex computations, longer pipelines (and whatnot) can be introduced, making them do more per clock. That will probably have an adverse effect only in normal everyday usage. As for complex tasks, quantum computing seems to be a thing too.

 

Edit: As I said. Fuzzy memory.
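The speed-of-light limit is easy to make concrete: in one cycle of a multi-gigahertz clock, even a signal moving at light speed covers only a few centimetres, which bounds how physically large a synchronously clocked machine can be. A back-of-the-envelope sketch:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def light_travel_per_cycle_cm(clock_hz):
    """Maximum distance any signal can cover in one clock cycle, in cm."""
    return C / clock_hz * 100  # metres -> centimetres

for ghz in (1, 4, 10):
    print(f"{ghz} GHz: {light_travel_per_cycle_cm(ghz * 1e9):.1f} cm per cycle")
```

At 4 GHz a signal can cross roughly 7.5 cm per cycle in the best case (real signals in copper are slower), which is one reason chips stay small and clocks stopped climbing.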

 

According to Moore's law, no.

(That is, if the PCB is expanded to compensate for more transistors.)

Bigger dies come with their own problems too: longer travel distances introduce latency.


According to Moore's law, no.

(That is, if the PCB is expanded to compensate for more transistors.)

 

For size, Moore's law won't go on forever, because we're limited by the size of an atom.

 

Yes, that is very small, but it means that Moore's law does end. We have a size limit.
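The size-limit argument can be turned into a rough timeline. Assuming a ~22 nm process as the 2013-era starting point, a density doubling (linear features shrinking by a factor of sqrt(2)) every two years, and a silicon atom roughly 0.2 nm across (all ballpark assumptions, not figures from this thread), atomic scale arrives within a few decades:

```python
import math

SILICON_ATOM_NM = 0.2   # rough diameter of a silicon atom (assumption)
START_NM = 22           # state-of-the-art process node circa 2013 (assumption)
YEARS_PER_SHRINK = 2    # one density doubling per two years, per Moore's law

feature = START_NM
years = 0
while feature > SILICON_ATOM_NM:
    # Doubling transistor density halves the area per transistor,
    # i.e. shrinks each linear dimension by sqrt(2).
    feature /= math.sqrt(2)
    years += YEARS_PER_SHRINK

print(f"Atomic scale reached after roughly {years} years")
```

Under these assumptions the limit lands within roughly three decades, which matches the thread's consensus that silicon scaling ends well before "forever".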


Our PCs are actually really slow now. We will see a big jump once we get away from silicon.


Remember, Bill Gates supposedly once said that he couldn't imagine why an average user would ever need more than 640 kB of RAM. 

PC performance increases won't stop. There is a physical limit to transistor size, but by the time we hit that limit, other technologies will be available. 

Tasks for PCs will also become more complex, so performance will have to keep increasing. Imagine that everyone wants an AI on their PC; for that you'd need far more computing power. 

 

This pretty much. I fully expect us to have Petabyte RAM capacities (or at the VERY least, storage drives) by the time I die (assuming 70ish). Just like I can see us having 1TB of RAM within the next 10-20 years. 

They have found something that could replace silicon.

Graphene is awesome. 

 

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


This pretty much. I fully expect us to have Petabyte RAM capacities by the time I die (assuming 70ish). Just like I can see us having 1TB of RAM within the next 10-20 years. 

Graphene is awesome. 

 

RAMdisks all day, every day


This pretty much. I fully expect us to have Petabyte RAM capacities by the time I die (assuming 70ish). Just like I can see us having 1TB of RAM within the next 10-20 years. 

Graphene is awesome. 

 

That's the name of it, I couldn't remember.


Sure, you would reach a point where going faster is no longer noticeable if the rest of the system stayed the same. However, everything is constantly getting more demanding, so in my opinion you will always be able to notice the difference, assuming you keep using the PC for more and more demanding things.

CPU: AMD FX-8350 | CPU Cooler: H80i | Motherboard: Asus M5A99FX PRO R2.0 | RAM: 8GB Kingston Beast 1866MHz


Case: Define R4 | GPU: Gigabyte GTX 780ti | PSU: Corsair CX600M | SSD: 250GB Samsung 840 EVO


.... and a Partridge in a pear tree! 

