
AMD's Hawaii Is Officially The Most Efficient GPGPU In The World To Date, Tops Green500 List

It should only take 30 minutes tops to write something up real quick. You spend more time on here than that every day. ;)


Kudos for having the patience :) May I ask you to report him when he spews BS? It's the only way to make him stop. Community reporting, like we did with cokeman.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Because obviously all credit is due to the GPUs, right?

 

Spoiler

CPU:Intel Xeon X5660 @ 4.2 GHz RAM:6x2 GB 1600MHz DDR3 MB:Asus P6T Deluxe GPU:Asus GTX 660 TI OC Cooler:Akasa Nero 3


SSD:OCZ Vertex 3 120 GB HDD:2x640 GB WD Black Fans:2xCorsair AF 120 PSU:Seasonic 450 W 80+ Case:Thermaltake Xaser VI MX OS:Windows 10
Speakers:Altec Lansing MX5021 Keyboard:Razer Blackwidow 2013 Mouse:Logitech MX Master Monitor:Dell U2412M Headphones: Logitech G430

Big thanks to Damikiller37 for making me an awesome Intel 4004 out of trixels!


Because obviously all credit is due to the GPUs, right?

Yes. A 5960X is about 500 GFLOPS of pure compute, IIRC. The K80 is 2.x TFLOPS ;) in a similar power envelope.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Because obviously all credit is due to the GPUs, right?

I'm sure they measured full-system draw and quite cleverly isolated the GPUs through a number of tests. If they didn't, that would render this entire study pointless. Then it would come down to the fans and liquid cooling, since the most state-of-the-art ratio of cooling joules to compute joules is 1.24 to 1, and that's for Google of all companies.
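(That cooling-to-compute ratio is essentially what datacenter people call PUE. A minimal sketch of the arithmetic, assuming the 1.24-to-1 figure means total facility energy per unit of compute energy; the numbers are the ones from this post, not measurements:)

```python
# Power Usage Effectiveness (PUE): total facility energy / IT (compute) energy.
# A 1.24-to-1 ratio means 0.24 J of cooling/overhead for every 1 J of compute.

def pue(it_energy_j: float, overhead_energy_j: float) -> float:
    return (it_energy_j + overhead_energy_j) / it_energy_j

print(pue(1.0, 0.24))  # 1.24
```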

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Yes. A 5960X is about 500 GFLOPS of pure compute, IIRC. The K80 is 2.x TFLOPS ;) in a similar power envelope.

For double or single? Single is 5.6 TFLOPS to 8.72. Also, didn't the K80 just recently launch? I don't think any living supercomputers use it yet.

 

http://www.nvidia.com/object/tesla-servers.html
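(For what it's worth, the single/double gap is just the single-precision peak scaled by the architecture's DP:SP rate. A rough sketch, assuming the commonly cited ratios of 1:3 for the K80's GK210 and 1:2 for the Hawaii-based FirePros; treat the peaks as spec-sheet ballparks:)

```python
# Peak double-precision throughput = single-precision peak * DP:SP rate ratio.

def dp_tflops(sp_tflops: float, dp_sp_ratio: float) -> float:
    return sp_tflops * dp_sp_ratio

# Spec-sheet ballparks, assumed for illustration:
print(dp_tflops(8.73, 1 / 3))  # Tesla K80 (GK210, 1:3 DP): ~2.91 TFLOPS
print(dp_tflops(5.24, 1 / 2))  # FirePro W9100 (Hawaii, 1:2 DP): ~2.62 TFLOPS
```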

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


The only people who say they need OpenCL at this point use a Xeon Phi, which is the king of mass-parallel accelerators for scientific computing. AMD has better theoretical FLOP numbers than Nvidia, but the gap between theoretical and actual performance for AMD is huge compared to the theoretical-actual gap for Nvidia.

Just a heads up:

No, they do not. I mean, they use OpenCL and AMD cards, not Xeon Phis... so you made a point against yourself?

 

I just do not understand why you believe that you are better than the people who created this supercomputer for a specific task and saw that OpenCL and AMD were the way to go for them.


Just a heads up:

No, they do not. I mean, they use OpenCL and AMD cards, not Xeon Phis... so you made a point against yourself?

 

I just do not understand why you believe that you are better than the people who created this supercomputer for a specific task and saw that OpenCL and AMD were the way to go for them.

Not everyone who builds a supercomputer is a genius...

 

The only reason to buy AMD's accelerators is cheaper compute on paper for the CFO keeping the books, until the extra heat and electricity end up costing more later down the line (since supercomputers stand for more than a decade before being retired).

 

I spent an entire semester studying distributed computing, system construction, and all the odds and ends that go into designing a facility and a supercomputer. When you understand the shortcomings of AMD, you then understand why Nvidia and Intel have them completely outclassed in market share even in the supercomputing realm.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Not everyone who builds a supercomputer is a genius...

 

I spent an entire semester studying distributed computing, system construction, and all the odds and ends that go into designing a facility and a supercomputer. When you understand the shortcomings of AMD, you then understand why Nvidia and Intel have them completely outclassed in market share even in the supercomputing realm.

Okay, so you are saying that you are a better supercomputer expert after one semester of study than someone who has done it for a living for years?

 

You have some self-esteem, mate.


I spent an entire semester studying distributed computing, system construction, and all the odds and ends that go into designing a facility and a supercomputer. When you understand the shortcomings of AMD, you then understand why Nvidia and Intel have them completely outclassed in market share even in the supercomputing realm.


 

I spent 2 years studying general relativity and quantum mechanics. Doesn't make me an expert in the field. What makes you an expert is experience, which you lack. Now go back to your books.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Not that AMD doesn't deserve some credit, but isn't the title a bit sensationalist?

Unless I am misunderstanding something here. Feel free to clarify if you think so.

 

It isn't officially ranked as the most power-efficient GPU; it just makes up the most efficient supercomputer?

Meaning the two aren't entirely unrelated. Still, one statement doesn't make the other true.

 

 

Edit: hm... in hindsight, I probably shouldn't throw out silly comments in the middle of 4-page-long angry discussions :P


Hey guys, my house burnt down due to how efficient this card is. Considering how much heat Hawaii shits out, this is surprising.

 

 

Of course it's efficient! My 290 has two purposes. It plays my games and heats my room in the winter :P


OpenCL is just awesome :)

Especially in a professional supercomputing environment.

If you think about it, even a consumer 290 has decent OpenCL performance.

Whereas if you want to use CUDA in a professional environment, you need to buy overly expensive Quadro cards.


Not that AMD doesn't deserve some credit, but isn't the title a bit sensationalist?

Unless I am misunderstanding something here. Feel free to clarify if you think so.

 

It isn't officially ranked as the most power-efficient GPU; it just makes up the most efficient supercomputer?

Meaning the two aren't entirely unrelated. Still, one statement doesn't make the other true.

 

It's explained in the second paragraph of the OP: it's specifically referring to "gigaflops per watt" compute efficiency, not power efficiency. Two completely different things. ;)

 

"Efficiency" doesn't always automatically refer to power consumption, like so many people naturally assume.

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


I type one-handed while grading. Furthermore, anything that can be written in 30 minutes is too trivial. Both companies have circuitry to handle vectors, FFTs, and a number of trivial operations in near-constant time for a given size. It's how all of these can be used in concert under OpenCL and CUDA that is the big differentiator, as I've said from the beginning.

You type one-handed while you grade? Sounds logical from someone who's only 21 years old. You mean you type with one hand while jotting down Google answers on your take-home quiz.

 

We aren't talking about architecture, but simply you stepping up with your expertise, in which case you've only been eating your words from that point on.

 

Yes. A 5960X is about 500 GFLOPS of pure compute, IIRC. The K80 is 2.x TFLOPS ;) in a similar power envelope.

The i7-5960X has 384 GFLOPS of peak performance. So yeah, GPUs are number-crunching monsters. That's why AMD pursues the APU as much as it does.
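That 384 GFLOPS figure lines up with the usual peak-FLOPS formula (cores × clock × FLOPs per cycle). A quick sanity check, assuming the 5960X's 3.0 GHz base clock and Haswell's two AVX2 FMA units per core (16 double-precision FLOPs per core per cycle):

```python
# Theoretical peak in GFLOPS = cores * clock (GHz) * FLOPs per cycle.
# Haswell retires 2 AVX2 FMAs per cycle; a 256-bit FMA on doubles is
# 4 lanes * 2 ops = 8 FLOPs, so 16 DP FLOPs per core per cycle.

def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(8, 3.0, 16))  # i7-5960X: 384.0 DP GFLOPS at base clock
```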

 

For double or single? Single is 5.6 TFLOPS to 8.72. Also, didn't the K80 just recently launch? I don't think any living supercomputers use it yet.

 

http://www.nvidia.com/object/tesla-servers.html

The K80 isn't impressive at all. AMD's single-GPU solution knocks on its back door in terms of compute performance, while both are on the same node.


Of course it's efficient! My 290 has two purposes. It plays my games and heats my room in the winter :P

lol, I do that with my 7870 XT. I can only imagine how fast your GPUs must heat your room.

cpu: intel i5 4670k @ 4.5ghz Ram: G skill ares 2x4gb 2166mhz cl10 Gpu: GTX 680 liquid cooled cpu cooler: Raijintek ereboss Mobo: gigabyte z87x ud5h psu: cm gx650 bronze Case: Zalman Z9 plus


Listen if you care.

Cpu: intel i7 4770k @ 4.2ghz Ram: G skill  ripjaws 2x4gb Gpu: nvidia gtx 970 cpu cooler: akasa venom voodoo Mobo: G1.Sniper Z6 Psu: XFX proseries 650w Case: Zalman H1


Hey guys, my house burnt down due to how efficient this card is. Considering how much heat Hawaii shits out, this is surprising.

 


Heat does not always mean lots of power. I could put a single-core Sempron in my PC without a heatsink. It would melt and burn. Does that use lots of power?

Hello This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614


Heat does not always mean lots of power. I could put a single-core Sempron in my PC without a heatsink. It would melt and burn. Does that use lots of power?

Trust me, Hawaii uses a decent amount of power. This just seems surprising; I'm unsure how this works out. I guess the GPU side of things isn't the whole story.

cpu: intel i5 4670k @ 4.5ghz Ram: G skill ares 2x4gb 2166mhz cl10 Gpu: GTX 680 liquid cooled cpu cooler: Raijintek ereboss Mobo: gigabyte z87x ud5h psu: cm gx650 bronze Case: Zalman Z9 plus


Listen if you care.

Cpu: intel i7 4770k @ 4.2ghz Ram: G skill  ripjaws 2x4gb Gpu: nvidia gtx 970 cpu cooler: akasa venom voodoo Mobo: G1.Sniper Z6 Psu: XFX proseries 650w Case: Zalman H1



I spent 2 years studying general relativity and quantum mechanics. Doesn't make me an expert in the field. What makes you an expert is experience, which you lack. Now go back to your books.

The difference being that quantum physics is ever more theoretical and unproven. At least with supercomputer design and construction there are proven constants/variables that don't require much actual experience to evaluate and figure out how to put together cost-effectively for a given workload while maximizing performance.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


You type one-handed while you grade? Sounds logical from someone who's only 21 years old. You mean you type with one hand while jotting down Google answers on your take-home quiz.

We aren't talking about architecture, but simply you stepping up with your expertise, in which case you've only been eating your words from that point on.

The i7-5960X has 384 GFLOPS of peak performance. So yeah, GPUs are number-crunching monsters. That's why AMD pursues the APU as much as it does.

The K80 isn't impressive at all. AMD's single-GPU solution knocks on its back door in terms of compute performance, while both are on the same node.

Again, that is only theoretical, not actual.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


In the long run, how is that going to work though? Will we need LN2 at some point?

*thinks of the following scenario*

 

dad: "SON! get your gloves and goggles, i need to use the pc"

May the light have your back and your ISO low.


10/10

 

Would bang SEVERAL TIMES!

i7 5930k . 16GB Corsair Vengeance LPX 2666 DDR4 . Gigabyte GA-X99-Gaming G1-WIFI . Zotac GeForce GTX 980 AMP! 4GB SLi . Crucial M550 1TB SSD . LG BD . Fractal Design Define R2 Black Pearl . SuperFlower Leadex Gold 750w . BenQ GW2765HT 2560x1440 . CM Storm QF TK MX Blue . SteelSeries Rival 
i5 2500k/ EVGA Z68SLi/ FX 8320/ Phenom II B55 x4/ MSI 790FX-GD70/ G.skill Ripjaws X 1600 8GB kit/ Geil Black Dragon 1600 4GB kit/ Sapphire Ref R9 290/ XFX DD GHOST 7770 

The difference being that quantum physics is ever more theoretical and unproven. At least with supercomputer design and construction there are proven constants/variables that don't require much actual experience to evaluate and figure out how to put together cost-effectively for a given workload while maximizing performance.

Gravity is theoretical and unproven. I don't see you jumping out of a 10-story window to test it (sadly).

 

I genuinely hate you retards who say something is "just a theory". No. It's A THEORY. And a theory is the best explanation we have so far for an effect, one that has been challenged and stood up to the challenge.

See, a theory is an explanation for an effect. This means quantum mechanics is our best explanation for effects that we OBSERVE and use in real life. Everything you FUCKING TALK ABOUT is based on quantum mechanics, mostly tunneling...

 

In quantum mechanics there are "proven" constants (nothing is ever proven, not even your constants; one contrary case, and they're dead), and there is stuff you can evaluate with one semester. But it damn sure doesn't mean you're an expert in the field. Further on, I'd say even when you do your PhD you're not even close to an expert.

 

And all of this applies to building supercomputers. Yes, you can know some technical aspects of it, but after one semester of some subject you are not even close to knowing enough to challenge people who have worked in the field for years. So don't do it; it pisses everyone off. If you came to me when I was making a model and started pissing on my technique, I'd not tell you to get lost. I would personally kick your ass out of the place. And I can tell you that's exactly what those people would do to you if you were to go spit your BS in their face. But maybe they would do it more politely, since they don't want a PR disaster.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Trust me, Hawaii uses a decent amount of power. This just seems surprising; I'm unsure how this works out. I guess the GPU side of things isn't the whole story.

Fermi is a much stronger compute architecture than Kepler is.

 

Downside is more compute = higher power draw, as it's a more complicated pipeline, etc.

 

At least that's what I seem to be reading online.


Makes sense; the Hawaii GPU adjusts power draw 100,000 times per second based on load, drawing a sporadic current from the capacitors while the actual power draw through the PSU appears stable. It's a CPU technology that's more than a few years old, and Nvidia, through their work on Tegra, finally figured out how to do the same with Maxwell as AMD does with PowerTune... even better, perhaps, because of the secrets Nvidia gleaned from working with ARM technology, not the actual hardware.

 

*edit* I should add that my PC at idle pulls 130-140 watts from the wall, and with two R9 290's clocked at 1100 MHz while folding@home, my PC is pulling ~475 watts from the wall. Calculate in the efficiency of my PSU and that's roughly 296 actual watts of power being used by the two R9 290's, for about 350-450 PPD (depending on the project). Not shabby.
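A sketch of that wall-power arithmetic, assuming roughly 88% PSU efficiency at that load (the exact figure depends on the unit and the load point):

```python
# Estimate GPU DC power: (loaded wall draw - idle wall draw) * PSU efficiency.
# This converts AC watts measured at the wall into DC watts fed to the cards.

def gpu_dc_watts(load_wall_w: float, idle_wall_w: float, psu_eff: float) -> float:
    return (load_wall_w - idle_wall_w) * psu_eff

# Numbers from the post above, with ~88% PSU efficiency assumed:
print(gpu_dc_watts(475, 140, 0.88))  # ~295 W across the two R9 290s
```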

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


Fermi is a much stronger compute architecture than Kepler is.

 

Downside is more compute = higher power draw, as it's a more complicated pipeline, etc.

 

At least that's what I seem to be reading online.

That is one part of it, but there are many more sides to the story than just the pipeline: you have the hardware engines, the design of the cores (shaders, CUDA cores, stream processors), and the manufacturing process (32nm on the same arch would be different from 14nm FinFET+).

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]

