AMD's Hawaii Is Officially The Most Efficient GPGPU In The World To Date, Tops Green500 List

Holy balls. I stand corrected, good sir.

 

@maazster it seems you are correct!

 

Well fuck... Imagine if they can pull it off? That's only 5 years away.

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 

I'm pretty sure it was by 2020, so 5 years, and the power consumption is meant to be slashed by 25x, not 75%.

The article did mention "70%" in there as well. Their new architecture will supposedly let them "outpace" their historical energy-efficiency trend by at least 70%.

 

Now, this is all PR bluster so far, but that doesn't mean they can't pull it off.

Holy balls. I stand corrected, good sir.

 

@maazster it seems you are correct!

 

Well fuck... Imagine if they can pull it off? That's only 5 years away.

AMD used to be all about efficiency and performance; it's only in the past few years that they have stagnated. But yeah, the claims are big, and 5 years comes and goes quickly.

My PC specs: Processor: Intel i5 2500K @ 4.6 GHz; Graphics card: Sapphire AMD R9 Nano 4GB, overclocked @ 1050 MHz core and 550 MHz memory; Hard drives: 500GB Seagate Barracuda 7200 RPM, 2TB Western Digital Green; Motherboard: Asus P8Z77-V; Power supply: OCZ ZS series 750W 80+ Bronze certified; Case: NZXT S340; Memory: Corsair Vengeance dual-channel kit @ 1866 MHz, 10-11-10-30 timings, 4x4 GB DIMMs; Cooler: CoolerMaster Seidon 240V

The article did mention "70%" in there as well. Their new architecture will supposedly let them "outpace" their historical energy-efficiency trend by at least 70%.

 

Now, this is all PR bluster so far, but that doesn't mean they can't pull it off.

Hmmmm... What I gathered is that they want to do better than what Moore's Law predicts for their chip efficiency, so I think they mean 70% Moore efficient than what Moore's Law predicts, which works out to 25x.

Dunno, I could be wrong. Also, did you get what I did there?

Well fuck... Imagine if they can pull it off? That's only 5 years away.

 

Yeah, seems a little far-fetched. I remembered reading the same thing a while back, so when the question popped up, I was happy to look around for a minute to refresh my memory.

 

I guess since total compute is how they like measuring APU performance, these gains could be realistic considering it is the GCN portion that accounts for the majority of the total compute.

 

Obviously the x86 part of the APU is not going to see a 25x efficiency increase, but I could see a super-low-voltage/power APU in 5 years being capable of 25x the theoretical compute/watt ratio of, say, a 7850K. Since the performance/power ratio of any architecture does not scale linearly, I bet the only way this goal will be "achieved" is by comparing very low-power parts in 5 years to the highest-power APUs of today.

 

For example on the blue side, look at Broadwell Y or whatever it is called. I bet those 4.5w chips are a 10x efficiency increase (compute/watt) over say a 4770K when including the iGPU compute. I don't feel like crunching the numbers to give a real ratio but hopefully this gets the point across.

 

25x seems doable, but only by comparing chips bound for entirely different markets IMO.
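To put made-up numbers on that: everything below is hypothetical, chosen only to show how comparing across market segments skews the compute/watt ratio, not measured from any real chip.

```python
def compute_per_watt(gflops, watts):
    """Theoretical compute efficiency: GFLOPS per watt."""
    return gflops / watts

# Hypothetical figures only, picked to illustrate the skew
desktop = compute_per_watt(gflops=500, watts=100)   # big desktop APU: 5 GFLOPS/W
ultralow = compute_per_watt(gflops=250, watts=4.5)  # 4.5 W part: ~55.6 GFLOPS/W

print(ultralow / desktop)  # about 11x "better", purely from comparing market segments
```

Half the raw compute at 1/22 the power already yields an ~11x ratio, with zero architectural magic involved.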

Are you sure you mean 25X, and not 25%? 25X would mean a reduction of 2500%, which doesn't make sense to me (Unless someone can explain the math around that? I thought about it, and now my head hurts :P lol)

It just means that their power consumption in 2020 will be the (power consumption now)/25

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]

It just means that their power consumption in 2020 will be the (power consumption now)/25

Yeah I did the math using a figure based on 2500%, and got a 4% figure as the answer.

 

So what they are saying, is they are aiming to make their products as low as 4% power consumption compared to current (or, well, mid-this year anyway) products.

 

I did some random calcs, and let's say they can apply this to their GPU lineup. Let's say they have a video card that draws 250 watts (this is arbitrary, I can't be arsed to look up the actual figures for, say, a 290X :P lol). That would bring power consumption down to 10 watts!

 

Now, this is probably more geared towards APUs, where power draw is much lower already. So let's say a current APU uses 100 watts under load. That means they hope to get an equivalent APU down to 4 watts under load! Crazy.

 

I think it's very ambitious, and maybe they're aiming a little high for what is practical... but then again, maybe not? AMD has pulled magic farts out of their asses before. They could do it again! I will eagerly wait and see how this develops.
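If anyone wants to sanity-check the arithmetic, it really is just straight division; here's the same math as a quick Python sketch (the 250 W and 100 W figures are the same arbitrary examples as above, not real measurements):

```python
def scaled_power(current_watts, factor=25):
    """Power draw if consumption is cut by `factor` at equal performance."""
    return current_watts / factor

# A 25x reduction is the same as 1/25, i.e. 4% of the original draw
assert 1 / 25 == 0.04

print(scaled_power(250))  # hypothetical 250 W GPU -> 10.0 W
print(scaled_power(100))  # hypothetical 100 W APU -> 4.0 W
```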

Yeah I did the math using a figure based on 2500%, and got a 4% figure as the answer.

 

So what they are saying, is they are aiming to make their products as low as 4% power consumption compared to current (or, well, mid-this year anyway) products.

 

I did some random calcs, and let's say they can apply this to their GPU lineup. Let's say they have a video card that draws 250 watts (this is arbitrary, I can't be arsed to look up the actual figures for, say, a 290X :P lol). That would bring power consumption down to 10 watts!

 

Now, this is probably more geared towards APUs, where power draw is much lower already. So let's say a current APU uses 100 watts under load. That means they hope to get an equivalent APU down to 4 watts under load! Crazy.

 

I think it's very ambitious, and maybe they're aiming a little high for what is practical... but then again, maybe not? AMD has pulled magic farts out of their asses before. They could do it again! I will eagerly wait and see how this develops.

I'm pretty sure they mean power consumption from the perf/watt viewpoint. Say your performance increases 12x in the next 5 years (no idea how much it actually will); that means you could have a 790x that uses 2x less power than a 290x while being 12x more powerful, or a middle ground of 5x more powerful using 5x less power :)

I'm pretty sure they mean power consumption from the perf/watt viewpoint. Say your performance increases 12x in the next 5 years (no idea how much it actually will); that means you could have a 790x that uses 2x less power than a 290x while being 12x more powerful, or a middle ground of 5x more powerful using 5x less power :)

Of course, in my calculations, because I'm a lazy ass, I was assuming compute performance stayed the same and that power was simply going down. But it scales upwards more neatly, as you describe: increased performance without increased power draw.
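Same math, perf/watt style: the 25x can be factored into any mix of "more performance" and "less power". The multipliers below are hypothetical, as in the post above.

```python
def perf_per_watt_gain(perf_multiplier, power_divisor):
    """Combined efficiency gain when performance goes up and power goes down."""
    return perf_multiplier * power_divisor

print(perf_per_watt_gain(12, 2))  # 12x the performance at half the power -> 24x (roughly the claimed 25x)
print(perf_per_watt_gain(5, 5))   # or the middle ground: 5x faster at 1/5 the power -> exactly 25x
```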

Kinda on the same boat. Any kind of discussion quickly becomes a battle of egos or just devolves into general shitflinging.

My favs are:

"Should I buy an AMD FX-6300 or a 5960X?"

Description: "I already have a 5960X, I just want to start a flamewar."

and there is

incredible

But the article says that the supercomputer is the most efficient, not that the GPU itself is the most efficient available. I'm going to take a wild guess that there's more than GPUs in supercomputers that add to total energy usage.

Obviously, but GPUs do account for a large share. In terms of actual computer components, the GPU draws the most power of any single component by a huge amount: 2 or 3 times as much as a CPU (or more, depending on the CPU). And modern supercomputers have hundreds or thousands of GPUs.

Not to mention that the most demanding overheads, like cooling and environmental control, are going to be fairly similar for any modern supercomputer, since they all have similar requirements.

But the fact remains that, even if power were saved in other areas, it would not diminish the accomplishments AMD has made in GPGPU compute and supercomputer efficiency.

Perhaps many of you are familiar with Hawaii, the venerable GPU powering the fastest graphics card in the world, the R9 295X2, along with AMD's enthusiast R9 290X and R9 290 offerings.

So it may come as a shock to hear that this GPU is, in fact, the most power-efficient general-purpose GPU (GPGPU) in the world to date.

 

GSI Helmholtz Centre's latest supercomputer, the L-CSC, is the first ever to break the 5 gigaflops/watt barrier. Powered by AMD's FirePro S9150 accelerators, based on the Hawaii GPU, the L-CSC has earned the #1 spot on the latest Green500 list of the most power-efficient supercomputers in the world.

 

The previous #1 spot holder, the TSUBAME-KFC, powered by Intel Ivy Bridge CPUs and Nvidia K20 GPU accelerators, fell to third position: it is capable of 4.4 gigaflops/watt, compared to the 5.27 gigaflops/watt of the AMD-powered L-CSC.
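For reference, the Green500 metric is simply sustained Linpack output divided by average power draw; a quick sketch using the efficiency figures quoted above (the 5,270 GFLOPS / 1,000 W machine is hypothetical, chosen only to reproduce the 5.27 figure):

```python
def gflops_per_watt(sustained_gflops, avg_watts):
    """Green500 efficiency metric: sustained Linpack GFLOPS per watt."""
    return sustained_gflops / avg_watts

# Hypothetical machine sized to match the L-CSC's quoted efficiency
print(gflops_per_watt(5270, 1000))            # 5.27 GFLOPS/W
print(f"{5.27 / 4.4 - 1:.0%} lead over TSUBAME-KFC")  # 20% lead
```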

 

Source

 

said David Cummings, senior director and general manager, professional graphics, AMD.

 

said Professor Dr. Volker Lindenstruth, professor at Goethe University of Frankfurt, head of IT department of GSI, and chairman of Frankfurt Institute for Advanced Studies.

Source #2

title is too long for mobile users. please keep it to about 6 words. 

CM Storm Switch Tester MOD (In-Progress) - http://linustechtips.com/main/topic/409147-cm-storm-switch-tester-macro-mod/
Ammo Can Speaker 02 (Completed) - http://linustechtips.com/main/topic/283826-ammo-can-speakers-02/
A/B Switch V 0.5 (Completed) - http://linustechtips.com/main/topic/362417-ab-switch-v0
Build 01 - The Life of a Prodigy - http://linustechtips.com/main/topic/13103-build-01-the-life-of-a-prodigy/
Build 02 - Silent Server 3000 - http://linustechtips.com/main/topic/116670-build-02-silent-server-3000/

title is too long for mobile users. please keep it to about 6 words. 

I'm not sure you could reduce this title to six words while keeping any semblance of accuracy about what it covers.

 

"Hawaii GPU Most Efficient GPGPU" maybe? That would work, but it looks like shit, and I would rather have good looking and easily readable titles, compared to Mobile compatibility.

 

Personal preference on that one. However, desktop users are by far the majority.

what the fuck happened in this thread.

The short answer?

 

A few too many people latched onto the "lolz derp AMD cardz R 2 HOT 4 U!" bandwagon.

 

And then there was a sub-argument about OpenCL vs. CUDA and how, apparently, these results are bullshit.

 

So yeah. Our community killed it. Yaaaayyyy.

The short answer?

 

A few too many people latched onto the "lolz derp AMD cardz R 2 HOT 4 U!" bandwagon.

 

And then there was a sub-argument about OpenCL vs. CUDA and how, apparently, these results are bullshit.

 

So yeah. Our community killed it. Yaaaayyyy.

Didn't even bother reading ANYTHING else, I'm just going to assume that this sums everything up lol.

So yeah. Our community killed it. Yaaaayyyy.

 

It's a damn shame too. I remember when this forum was mostly mature and helpful, I would go on and on about how much I loved it. Now... Not at all.

waffle waffle waffle on and on and on

Is this the reason why everyone is using only AMD for folding and that kind of stuff?

INTEL Core i7-4790K  ASUS Maximus VII Ranger  CORSAIR Vengeance Pro 8GB 2133MHz  EVGA GeForce GTX 980 Ti  SAMSUNG 850 EVO 250GB  CORSAIR AX860i  CORSAIR Obsidian 750D

More than half of the people in this thread are complaining about heat. I mean, come on, seriously: how educated are people on here if they think TDP equates to power consumption?

 

TDP is often quite close to maximum, or near-maximum, power consumption. Is that really a surprise?

Mini-Desktop: NCASE M1 Build Log
Mini-Server: M350 Build Log

The short answer?

 

A few too many people latched onto the "lolz derp AMD cardz R 2 HOT 4 U!" bandwagon.

 

And then there was a sub-argument about OpenCL vs. CUDA and how, apparently, these results are bullshit.

 

So yeah. Our community killed it. Yaaaayyyy.

That is not all it came down to, and we all know the rated and actual power draws of AMD's chips are well above Nvidia's. We know AMD's rated compute is higher than Nvidia's, but actual workload compute is in Nvidia's favor. When you build a supercomputer, the three things that matter most economically go something like this: the long-term cost of electricity, the up-front datacenter construction costs, and the revenue to be made from client usage of compute resources. Over the long term, the cost of electricity far exceeds the construction costs. For every watt spent on computing, you will spend at least 1.25 watts in total once cooling is included, and that is with Google's state-of-the-art construction; most centers don't get much below 1.4-1.5. Now, this wouldn't be a big problem if you could keep your centers fully loaded with client usage, but you can't.

AMD runs warm, with or without aftermarket cooling solutions. The reason it's impossible for AMD to have won this contest in a fair fight is exactly that the Nvidia-based competition was forced to run code which is inferior on Nvidia platforms. All the tests were run in OpenCL which, while platform-agnostic in the accelerator world, gets little attention from Nvidia, which invests in its own proprietary language, CUDA. No CUDA-equivalent code was used, and the performance difference is about 12-17% on modern platforms depending on the workload. As it stands, it's a farce for AMD to win an efficiency award in supercomputing, especially given that Google doesn't use a single AMD chip anywhere in its datacenters or supercomputers, and it's the king of efficient datacenter construction.
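The cooling overhead described above is normally expressed as PUE (total facility power divided by IT power); here's a rough sketch of why it dominates long-term cost. The $0.10/kWh electricity rate and the 1 MW load are placeholder assumptions, not figures from the post.

```python
def facility_watts(it_watts, pue):
    """Total facility draw for a given IT load at a given PUE."""
    return it_watts * pue

def annual_energy_cost(it_watts, pue, usd_per_kwh=0.10):
    """Yearly electricity cost; $0.10/kWh is a placeholder rate."""
    kwh = facility_watts(it_watts, pue) * 24 * 365 / 1000
    return kwh * usd_per_kwh

# 1 MW of IT load at Google-class PUE 1.25 vs a more typical 1.5
print(annual_energy_cost(1_000_000, 1.25))  # roughly $1.1M/year
print(annual_energy_cost(1_000_000, 1.5))   # roughly $1.3M/year
```

Even a 0.25 difference in PUE is worth a couple hundred thousand dollars a year per megawatt, which is why efficiency per watt matters so much more than sticker price at this scale.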

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Is this the reason why everyone is using only AMD for folding and that kind of stuff?

Excuse me? I fold on my MBPr with its GT 750M, TYVM.

 

The main reason is that folding has no CUDA implementation and, for some reason, has an issue with Intel's iGPU.

I'm not sure why anyone's remotely surprised by this; there's a reason these cards were being used for coin mining. If it said it was the most power-efficient gaming GPU, we'd all know they were lying, but for general compute tasks I'm not shocked at all.

| CPU: i7-4770K @4.6 GHz, | CPU cooler: NZXT Kraken x61 + 2x Noctua NF-A14 Industrial PPC PWM 2000RPM  | Motherboard: MSI Z87-GD65 Gaming | RAM: Corsair Vengeance Pro 16GB(2x8GB) 2133MHz, 11-11-11-27(Red) | GPU: 2x MSI R9 290 Gaming Edition  | SSD: Samsung 840 Evo 250gb | HDD: Seagate ST1000DX001 SSHD 1TB + 4x Seagate ST4000DX001 SSHD 4TB | PSU: Corsair RM1000 | Case: NZXT Phantom 530 Black | Fans: 1x NZXT FZ 200mm Red LED 3x Aerocool Dead Silence 140mm Red Edition 2x Aerocool Dead Silence 120mm Red Edition  | LED lighting: NZXT Hue RGB |

I'm not sure why anyone's remotely surprised by this; there's a reason these cards were being used for coin mining. If it said it was the most power-efficient gaming GPU, we'd all know they were lying, but for general compute tasks I'm not shocked at all.

The reason they were used for Bitcoin mining is that AMD's architecture lent itself well to the hash algorithms. Some of its instructions, which are near useless for other workloads, run in a single cycle vs. Nvidia's two. Don't spout false equivalencies when you have no knowledge of the context.
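For the curious, the hash-friendly operation in question is reportedly the bit rotate (covered by BIT_ALIGN_INT on those AMD GPUs), which SHA-256 uses constantly. Here's the 32-bit rotate in Python, showing the shift-shift-OR sequence that hardware without a native rotate has to execute instead of one instruction:

```python
def rotr32(x: int, n: int) -> int:
    """32-bit rotate right, as used in SHA-256's sigma functions.
    On Hawaii-era AMD GPUs this is reportedly a single native instruction;
    without hardware support it costs two shifts and an OR per rotate."""
    n &= 31
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

print(hex(rotr32(0x12345678, 8)))  # 0x78123456
```

Multiply that per-rotate saving by the dozens of rotates per SHA-256 round and the millions of hashes per second a miner attempts, and the architectural edge adds up fast.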

The reason they were used for bitcoin mining is AMD's architecture lent itself well to the hash algorithms. Some instructions for them, which are near useless for other workloads, run in a single cycle vs. Nvidia's 2. Don't spout about false equivalencies when you have no knowledge of the context.

I don't believe I spouted anything about false equivalencies, or even mentioned another card whatsoever, so your argument is invalid.
