
Nvidia CEO - Moore's Law is dead; GPUs to replace CPUs

kagarium

Nvidia CEO Jensen Huang said at the GTC conference in Beijing that GPUs will replace CPUs.

http://www.pcgamer.com/nvidia-ceo-says-moores-law-is-dead-and-gpus-will-replace-cpus/

Quote

Huang says Moore's Law is dead because it can't keep pace with advancements in GPU design. He talked about how GPUs have grown in computational capacity over the years, and how they're better suited for advancements in artificial intelligence.

Intel denies that Moore's Law is dying and continues to stand behind it.

Quote

"In my 34 years in the semiconductor industry, I have witnessed the advertised death of Moore’s Law no less than four times. As we progress from 14 nanometer technology to 10 nanometer and plan for 7 nanometer and 5 nanometer and even beyond, our plans are proof that Moore’s Law is alive and well," Krzanich stated in a blog post outlining Intel's plans. "Intel’s industry leadership of Moore’s Law remains intact, and you will see continued investment in capacity and R&D to ensure so."

It's an interesting thing to consider, GPUs replacing CPUs. We see it happening a lot already in modern gaming consoles.

 

What's everyone else think? Are CPUs failing?

 

-Ken



2 minutes ago, kagarium said:

What's everyone else think? Are CPUs failing?

Until someone gives a shit about task parallelization, yes. But in reality, you can't stack up as many cores on a CPU as a GPU has without needing a nuclear power plant, so we're probably hitting the limits CPU-wise.


Even if the technology reaches consumers, I'm not sure what people would do with a couple thousand cores processing their daily tasks. Even then, there's the issue that all of the software would have to be redesigned so it can spread its work across parallel workloads instead of relying on a few faster cores. PC video games as a whole are barely breaking the 4-core limit. What are they going to do with thousands?
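To make that redesign concrete, here is a minimal, hypothetical Python sketch (the function names and chunk size are made up for illustration): the same workload written once serially and once as independent chunks that a pool of workers can pick up. Only the second form has any use for a large number of cores.

```python
# Hypothetical sketch: serial vs. chunked-parallel versions of the same work.
# Only the chunked version can spread across many cores.
from concurrent.futures import ProcessPoolExecutor

def score_chunk(chunk):
    # Stand-in for real per-item work; each chunk is independent of the others.
    return sum(x * x for x in chunk)

def serial(data):
    return sum(x * x for x in data)

def parallel(data, workers=8, chunk_size=100_000):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(score_chunk, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(serial(data) == parallel(data))  # True; same result, different scaling
```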


Modern GPUs cannot effectively replace even 10-year-old CPUs for sequential tasks, where IPC and clock speed matter more than core count.

Otherwise we'd all be running 16-core Atom chips instead of quad- and dual-cores.



Did we come full circle? This sounds very similar to what ATI was shouting 10 years ago: CPU and GPU would work together at such a close level that you wouldn't notice what runs on what.

 

Nvidia, run Linux on an Nvidia GPU ONLY first, then we'll talk again.



Yes, GPUs are taking over several areas of computation that they are better suited for than CPUs.

 

No, CPUs aren't dying, because there are many types of software that lots of people use which either can't be parallelized or would see little benefit if they were. That means the large number of compute cores on a GPU would be sitting idle most of the time.

 

The answer is that both are required, and for some workloads GPUs are now more of a factor than they were before.

 

 

(I wonder if anything actually has died when some stupid article came out saying "XXXXXXXX IS DEAD")



GPUs cannot replace CPUs as they stand today. Just about every application is coded to run efficiently in series and on x86, as well as on Windows, Mac and Linux.

Now, there is the argument to be made that the computational capacity of GPUs is many times that of even the most powerful CPUs, and growing at a far faster pace.

In order for there to be a world in which GPUs push out CPUs, it wouldn't just require an entirely new OS, programming language and hardware platform... It would require a complete transformation of computer code theory, from machine language all the way up. The very way we think about programming would have to be erased and rewritten.


6 minutes ago, Spenser1337 said:

Tesla CEO says engines are dying

 

almost as if they believe in the product they produce 

?


Then it would literally not be called a GPU anymore; it would be a CPU.

The names actually mean something; they aren't just random letters put together...



Nobody can decide if Moore's Law is dead. It is. End of story. I explained it on Facebook but I don't want to type it out again.

[Attached screenshot: Capture.PNG, showing the Facebook conversation]

The conversation goes:

Me

Someone else

Me

Someone Else

Me responding to a name I forgot to block.



8 minutes ago, NowThatsDamp said:

?

The CEO of a company is supposed to push their own product, so claiming that your product will supplant another and effectively take over the market isn't surprising. A CEO like Elon Musk is supposed to come out and say electric cars are the future and that they have the product to achieve it. Nvidia is now claiming they can replace CPUs entirely, which is a bold claim that seems unlikely to pan out.


6 minutes ago, Trixanity said:

The CEO of a company is supposed to push their own product, so claiming that your product will supplant another and effectively take over the market isn't surprising. A CEO like Elon Musk is supposed to come out and say electric cars are the future and that they have the product to achieve it. Nvidia is now claiming they can replace CPUs entirely, which is a bold claim that seems unlikely to pan out.

Thank you 


I doubt this greatly. In applications that simply require number crunching, such as mining or machine learning, GPUs have outperformed CPUs by orders of magnitude. However, most applications a user runs are single-threaded; not only is it harder to program concurrently, but some algorithms cannot be parallelized effectively.

 

There was some research on running the A* pathfinding algorithm on GPUs, with the result that it is either more memory-hungry than Chrome or slower than on a CPU.

http://publications.lib.chalmers.se/records/fulltext/129175.pdf

 

Only in a data center will you commonly find algorithms that scale to thousands of cores, and those mostly do so because they are running the same algorithm thousands of times separately. All algorithms still have to obey Amdahl's Law of diminishing returns for parallel workloads.
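For anyone who wants to see those diminishing returns in numbers, here is a small Python sketch of Amdahl's Law; the parallel fractions below are illustrative guesses, not measurements:

```python
# Amdahl's Law: if a fraction p of the work can be parallelized across n cores,
# the best-case speedup is 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"p={p:.2f}: 8 cores -> {amdahl_speedup(p, 8):.1f}x, "
          f"4096 cores -> {amdahl_speedup(p, 4096):.1f}x")
# Even with 99% of the work parallel, thousands of cores cap out below ~100x.
```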


I agree. I think x86 is moving fast towards its end. It's simply not a very efficient architecture anymore. Even Intel is starting to remove obsolete parts of the x86 standard (could be x87 or just legacy stuff, I don't remember). It's simply more efficient to just emulate that code.

 

The problem is really that the only ways forward now are more cores and faster clock speeds. The former is only useful with a lot of multitasking and heavily multithreaded software, which is difficult. The latter is very limited by current chip production methods. After all, there's a reason GPUs are taking over a lot of compute work from CPUs these days.



4 minutes ago, Notional said:

I agree. I think x86 is moving fast towards its end. It's simply not a very efficient architecture anymore. Even Intel is starting to remove obsolete parts of the x86 standard (could be x87 or just legacy stuff, I don't remember). It's simply more efficient to just emulate that code.

 

The problem is really that the only ways forward now are more cores and faster clock speeds. The former is only useful with a lot of multitasking and heavily multithreaded software, which is difficult. The latter is very limited by current chip production methods. After all, there's a reason GPUs are taking over a lot of compute work from CPUs these days.

I agree that we probably need a radical redesign of modern CPUs and that we need to dump a lot, if not all, of the legacy baggage. Only that way will we achieve better performance (short of changing materials and brute-forcing the problem through massive clock speed gains), but it would take an even bigger effort to dump CPUs entirely. Everything would have to be changed top to bottom to make GPUs the new main processor. If we redesigned our CPUs to be leaner and faster, we'd only need developers to get their heads out of their asses, utilize the power presented to them, and update their applications to support modern instructions and techniques.

 

It seems to me that Intel (despite what critics would say) is touching the ceiling of what can be achieved without doing something differently. They may not have hit the ceiling yet, but they will soon enough. I do believe (to reiterate) that processors need to be built differently than they are now to avoid that ceiling. You can only tweak and improve a design so much.


What this comes down to is how much a task can be divided up for parallel processing... GPUs will not replace CPUs, since some tasks do not scale to splitting up the workload, while other tasks do perform better when you can divide them up. Even if there comes a time when a system offloads the majority of its tasks to peripherals, you will still need something that acts as the 'Central' unit, coordinating all of those tasks and acting as a conductor of sorts to determine which GPU gets which task in which order.


Intel will soon be what Nokia is to the mobile industry now.



13 minutes ago, WMGroomAK said:

What this comes down to is how much a task can be divided up for parallel processing... GPUs will not replace CPUs, since some tasks do not scale to splitting up the workload, while other tasks do perform better when you can divide them up. Even if there comes a time when a system offloads the majority of its tasks to peripherals, you will still need something that acts as the 'Central' unit, coordinating all of those tasks and acting as a conductor of sorts to determine which GPU gets which task in which order.

To elaborate further on what you said: you can't force a serial logical computation to be parallelized. The only way to complete such a workload more quickly is to process the computation faster and more efficiently, which is why Intel has been pushing more efficient instructions.

 

Until they can find something other than binary computation, it will not happen outside of very specific cases. There is a reason we moved from parallel I/O to serial. There is a balance to be struck.
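As a concrete (made-up) example of that kind of serial logic, here is a loop-carried dependency in Python: every step needs the previous result, so no number of extra cores shortens the chain; only a faster core or better instructions do.

```python
# Each iteration depends on the one before it, so the chain is inherently serial.
# A thousand GPU cores would not finish this any sooner than one CPU core.
def iterate(x0: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        x = 0.5 * x + 1.0  # step i cannot start until step i-1 has finished
    return x

print(iterate(0.0, 1_000_000))  # converges toward 2.0
```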


GPUs work exceptionally well with massively-parallelized tasks, but I doubt they'll be completely replacing CPUs anytime soon. They work best in conjunction with CPUs.

 

Then again, why is anybody surprised that a GPU company's CEO says GPUs will kill CPUs?



Context, people, context. As people point out, it's in the context of mass processing that work can be parallelized. GPUs SUCK at doing logic. An 'if' condition (if this, do that, else do this) is taxing for a GPU. Each SM or CUDA core doesn't have its own set of logic circuitry. That is currently impossible to add, or else the GPU would easily be as big as an entire wafer of chips. Imagine the cooling needed, let alone the production cost.
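To illustrate why branches hurt on that kind of hardware, here is a rough NumPy sketch of the data-parallel style a GPU uses (NumPy is only standing in for the SIMT execution model; this is an illustration, not GPU code): both sides of the 'if' get evaluated for every element and a mask picks the result, so a divergent branch pays for both paths.

```python
# Data-parallel handling of a branch: compute both paths, then select by mask.
# On a CPU, a branch predictor would skip the untaken path entirely.
import numpy as np

x = np.linspace(-4.0, 4.0, 1_000_000)
mask = x > 0.0                        # the "if" condition, per element
then_path = np.sqrt(x.clip(min=0.0))  # "then" branch, computed for all elements
else_path = -x                        # "else" branch, also computed for all elements
result = np.where(mask, then_path, else_path)
```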


4 hours ago, Trixanity said:

The CEO of a company is supposed to push their own product, so claiming that your product will supplant another and effectively take over the market isn't surprising. A CEO like Elon Musk is supposed to come out and say electric cars are the future and that they have the product to achieve it. Nvidia is now claiming they can replace CPUs entirely, which is a bold claim that seems unlikely to pan out.

It wouldn't, on the sole fact that it relies on being subsidized by taxpayers. The day Elon can make his business model work without the government paying for close to half to 80% of his products, then I'll believe gas engines are dying.


