
Nvidia announces new Tesla Accelerators

AlTech

What, no blatant anti-Nvidia/pro-AMD comments in this thread?

My my.


A 250W passive cooler?

I smell throttling...

No. Server cases have very, very high airflow, and they will be rated to run at 100 °C at boost.

Hello. This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614



Sad fact though... even the K40 is like 0.7 TFLOPS slower in FP64 than the FirePro W9100...

 


Oh, and for those who think the Fury X is a beast in FP64, it is not. Fiji was not designed like Hawaii, so it does not run FP64 at 1/2 the FP32 rate; it is down at 1/16 of FP32...

 


I hope Pascal will give us proper compute on Nvidia's side... because Maxwell isn't that good :|
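For anyone who wants to sanity-check those ratios: peak throughput is just shader count × 2 FLOPs per clock (FMA) × clock × the FP64 rate. Below is a back-of-envelope sketch; the shader counts, clocks, and rate fractions are commonly quoted spec-sheet figures and should be treated as assumptions, not vendor-verified numbers.

```cuda
// Back-of-envelope peak-throughput check (host-only code; builds as C++ or CUDA).
// The shader counts, clocks, and FP64 ratios are commonly quoted spec-sheet
// values and are assumptions for illustration, not vendor-verified data.
#include <cstdio>

// Peak TFLOPS = shaders * 2 FLOPs/clock (FMA) * clock (GHz) * rate / 1000
static double peak_tflops(int shaders, double clock_ghz, double rate) {
    return shaders * 2.0 * clock_ghz * rate / 1000.0;
}

int main() {
    struct Card { const char* name; int shaders; double ghz; double fp64_rate; };
    const Card cards[] = {
        { "Tesla K40 (GK110, base clock)", 2880, 0.745, 1.0 / 3.0  },
        { "FirePro W9100 (Hawaii)",        2816, 0.930, 1.0 / 2.0  },
        { "Radeon Fury X (Fiji)",          4096, 1.050, 1.0 / 16.0 },
    };
    for (const Card& c : cards) {
        printf("%-30s  FP32 %.2f TFLOPS   FP64 %.2f TFLOPS\n",
               c.name,
               peak_tflops(c.shaders, c.ghz, 1.0),          // FP32 peak
               peak_tflops(c.shaders, c.ghz, c.fp64_rate)); // FP64 peak
    }
    return 0;
}
```

That works out to roughly 1.4 TFLOPS FP64 for the K40 at base clock versus about 2.6 TFLOPS for the W9100, and only around 0.5 TFLOPS for the Fury X, which is why Fiji is not the FP64 monster its FP32 numbers might suggest.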


Except it isn't in real-world performance. Theoretically, sure, but AMD can't get its compute drivers up to par.

 

Pascal is literally Maxwell with the bells and whistles that were taken out to deal with 20 nm not happening thrown back in.


They look great, but that double-precision performance is terrible compared to last gen. Nvidia pls fix.

There's nothing to fix for the purposes Nvidia intends.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Look at dat FP64 number!

 

All Hail Maxwell!!! (not)

Look at that single-thread performance too, if we are on the topic of metrics that make no sense for the use case ;)

 

And it's being marketed for non-FP64 workloads, so it's no problem. AI doesn't need FP64.

Exactly. IIRC it actually favours the packed, double-rate FP16 mode CUDA can do, for twice the throughput.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]
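On the packed-FP16 point above: CUDA exposes a half2 type in cuda_fp16.h that packs two FP16 values into one 32-bit register, and intrinsics like __hfma2 operate on both lanes at once, which is where the "twice the throughput" figure comes from on parts whose FP16 units actually run at double rate. A minimal sketch of the mechanism is below; it assumes a GPU of compute capability 5.3 or newer (the half2 arithmetic intrinsics are not available on older targets) and skips error checking.

```cuda
// Minimal packed-FP16 AXPY sketch: y = a*x + y, two half values per 32-bit lane.
// Assumes compute capability >= 5.3 for the half2 arithmetic intrinsics
// (build with e.g. nvcc -arch=sm_53 axpy_fp16.cu); error checks omitted.
#include <cuda_fp16.h>
#include <cstdio>

__global__ void init(int n2, __half2* x, __half2* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) {
        x[i] = __float2half2_rn(1.0f);  // both FP16 lanes = 1.0
        y[i] = __float2half2_rn(2.0f);  // both FP16 lanes = 2.0
    }
}

__global__ void haxpy(int n2, float a, const __half2* x, __half2* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    __half2 a2 = __float2half2_rn(a);   // broadcast a into both lanes
    if (i < n2) {
        // One fused multiply-add instruction processes a pair of FP16 values.
        y[i] = __hfma2(a2, x[i], y[i]);
    }
}

__global__ void check(const __half2* y) {
    float2 r = __half22float2(y[0]);
    printf("y[0] = (%.2f, %.2f)\n", r.x, r.y);  // expect (2.50, 2.50)
}

int main() {
    const int n2 = 1 << 19;  // 2^20 FP16 elements stored as half2 pairs
    __half2 *x = nullptr, *y = nullptr;
    cudaMalloc(&x, n2 * sizeof(__half2));
    cudaMalloc(&y, n2 * sizeof(__half2));
    const int block = 256, grid = (n2 + block - 1) / block;
    init<<<grid, block>>>(n2, x, y);
    haxpy<<<grid, block>>>(n2, 0.5f, x, y);  // y = 0.5 * 1.0 + 2.0 = 2.5
    check<<<1, 1>>>(y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```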


Sad fact though... even the K40 is like 0.7 TFLOPS slower in FP64 than the FirePro W9100...

 

 

Oh, and for those who think the Fury X is a beast in FP64, it is not. Fiji was not designed like Hawaii, so it does not run FP64 at 1/2 the FP32 rate; it is down at 1/16 of FP32...

 

 

I hope Pascal will give us proper compute on Nvidia's side... because Maxwell isn't that good :|

 

 

They look great, but that double-precision performance is terrible compared to last gen. Nvidia pls fix.

Why do you all crap on about FP64 when it's literally a hindrance for the applications these cards will be used for?

 

Oh, I know: stupid fanboys on the internet not researching shit.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Skynet

pls, this is 2015, it's Genisys

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


 

Why do you all crap on about FP64 when it's literally a hindrance for the applications these cards will be used for?

 

Oh, I know: stupid fanboys on the internet not researching shit.


Tell me which professional-grade applications are hurt or made inefficient, to the point of being unproductive, when running in FP64 mode.

 

 

Except it isn't in real-world performance. Theoretically, sure, but AMD can't get its compute drivers up to par.

 

Pascal is literally Maxwell with the bells and whistles that were taken out to deal with 20 nm not happening thrown back in.


Drivers can be improved; the hardware itself is not so easy to improve once it is made.

Tell me which professional-grade applications are hurt or made inefficient, to the point of being unproductive, when running in FP64 mode.

Machine learning. You don't need precision, you need speed. FP16 is twice as fast as even FP32 on Maxwell and Kepler.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Machine learning. You don't need precision, you need speed. FP16 is twice as fast as even FP32 on Maxwell and Kepler.

Oh, oh hey, let me highlight some shit for you:

 

 

Sad fact though... even the K40 is like 0.7 TFLOPS slower in FP64 than the FirePro W9100...

 

 

Oh, and for those who think the Fury X is a beast in FP64, it is not. Fiji was not designed like Hawaii, so it does not run FP64 at 1/2 the FP32 rate; it is down at 1/16 of FP32...

 

 

I hope Pascal will give us proper compute on Nvidia's side... because Maxwell isn't that good :|

[attached screenshot: post-89336-0-73243800-1447336028.png]

 

What does it say below the K40?

 

Does it say? Does it say compute?

Mein Gott, it says COMPUTE... a use case where they actually need FP64 for many tasks...

 

I will accept your apology.


Wait, a dedicated high-end compute card with a massive 0.21 TFLOPS? Maxwell really is a joke when it comes to compute. I thought they were going to skip Maxwell entirely in the professional market?! Oh well.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro
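For what it's worth, that 0.21 TFLOPS figure is consistent with the hardware rather than a typo: GM200 executes FP64 at 1/32 of its FP32 rate, and the M40's advertised single-precision peak is about 7 TFLOPS. A quick check, with that 7 TFLOPS figure taken as the assumption:

```cuda
// Consistency check for the quoted 0.21 TFLOPS FP64 figure (host-only code).
// Assumptions: ~7 TFLOPS advertised FP32 peak for the Tesla M40 and GM200's
// 1/32 FP64:FP32 hardware ratio.
#include <cstdio>

int main() {
    const double m40_fp32_tflops = 7.0;        // advertised single-precision peak
    const double gm200_fp64_rate = 1.0 / 32.0; // FP64 runs at 1/32 the FP32 rate
    printf("Expected FP64 peak: %.2f TFLOPS\n",
           m40_fp32_tflops * gm200_fp64_rate); // ~0.22, in line with the quoted 0.21
    return 0;
}
```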


Oh, I know: stupid fanboys on the internet not researching shit.

 

If you are taking the time to make replies like these on the LTT forum, you must be exhausted!


Oh, oh hey, let me highlight some shit for you:

 

 

 

 

What does it say below the K40?

 

Does it say? Does it say compute?

Mein Gott, it says COMPUTE... a use case where they actually need FP64 for many tasks...

 

I will accept your apology.

I do not see any material advertising this as a compute coprocessor. All I see is machine learning and execution; neither of those cares about precision, but rather speed of execution. Neural nets are known for this.

 

Wait, a dedicated high-end compute card with a massive 0.21 TFLOPS? Maxwell really is a joke when it comes to compute. I thought they were going to skip Maxwell entirely in the professional market?! Oh well.

Just that it's not a compute card. It's meant for machine learning, meaning FP64 is useless.

 

If you are taking the time to make replies like these on the LTT forum, you must be exhausted!

Eh, too tired to care. I'll probably call him names next time around and get a warning point.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Isn't the K80 better than this new M40?

For heavy compute, yes. But this isn't made for compute tasks. See my other posts.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


I do not see any material advertising this as a compute coprocessor. All I see is machine learning and execution; neither of those cares about precision, but rather speed of execution. Neural nets are known for this.

 

Just that it's not a compute card. It's meant for machine learning, meaning FP64 is useless.

 

Eh, too tired to care. I'll probably call him names next time around and get a warning point.

You don't want to see.

 

The M40 and M4 are for machine learning. I was talking about the Nvidia Tesla K40.

 

Here, let me show you.

 

 

[attached screenshot: G7hOKGu.png]

 

 

Again, I accept and I expect your apology.


Why do you all crap on about FP64 when it's literally a hindrance for the applications these cards will be used for?

 

Oh, I know: stupid fanboys on the internet not researching shit.

 

omfghfgdsfsaf ur sutch a nvidiot omfg y dont u do ur own resertch m80 clerly u dont no wut ur talkn abawt


You don't want to see.

 

The M40 and M4 are for machine learning. I was talking about the Nvidia Tesla K40.

 

Here, let me show you.

 

 

[attached screenshot: G7hOKGu.png]

 

 

Again, I accept your apology.

That's a Kepler GK110-based card.

 

omfghfgdsfsaf ur sutch a nvidiot omfg y dont u do ur own resertch m80 clerly u dont no wut ur talkn abawt

Thanks for the insight, love :3


"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


For heavy compute, yes. But this isn't made for compute tasks. See my other posts.

The K80 is better in every metric on the table though, so I don't see how the M40 would be a better choice.


The K80 is better in every metric on the table though, so I don't see how the M40 would be a better choice.

That table compares the cards on compute stats, which aren't really relevant for the Maxwell ones, but oh well. If you were to look at density and speed in FP16 and integer, the Maxwell one wins.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


@Patricjp93

I expected the professional Maxwell to be 1/16th FP64... why did they nerf it this badly? I know the architecture itself wasn't focused on FP64, but this seems a bit harsh, even for professional-grade stuff.


I do not see any material advertising this as a compute coprocessor. All I see is machine learning and execution; neither of those cares about precision, but rather speed of execution. Neural nets are known for this.

 

Just that it's not a compute card. It's meant for machine learning, meaning FP64 is useless.

 

Eh, too tired to care. I'll probably call him names next time around and get a warning point.

 

A GPU can do two things: graphics processing and compute. What Nvidia calls machine learning IS, by definition, compute (technically graphics processing is compute too, but it isn't called that). But as always, Nvidia has their marketing down perfectly. They know Maxwell sucks at compute, so they call it something else. Even if this is for use cases that don't require FP64 double precision, it's still a bad compute card compared to AMD's offerings.

 

With that in mind, if it's specifically for "machine learning" it might come with dedicated drivers and software from Nvidia, but seeing as it's compute, I highly doubt it.



You don't want to see.

 

The M40 and M4 are for machine learning. I was talking about the Nvidia Tesla K40.

 

Here, let me show you.

 

 

 

 

 

Again, I accept and I expect your apology.

And no one cares because AMD can't be bothered to make drivers good enough for OpenCL to compete with CUDA!


