
What to pick for machine learning?

adarw

My sis is currently doing her PhD. She has the option of Google Cloud machine learning or getting a GPU (she would probably have to pay for the GPU). I found a Tesla M2075 6GB for 50 bucks (you can barely find anything about it on the internet), and I also found a 1060 6GB, but it runs at a clock of 139MHz (a sensor is broken). What do y'all suggest?


I'd go for the cloud GPU option. What's the disadvantage to that option?

That Tesla seems to be Fermi, and those are well out of support now; most programs won't work with them anymore.

The 1060 seems a bit too limited.

Do they suggest GPUs? If you don't care about speed, you can normally do it all on the CPU.
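For example, most frameworks let you pick the device at runtime, so the same script runs on a GPU when one is there and falls back to the CPU otherwise. A rough PyTorch sketch (assuming that's the framework she ends up on):

import torch
import torch.nn as nn

# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tiny placeholder model and batch, just to show where the device matters.
model = nn.Linear(128, 10).to(device)
batch = torch.randn(64, 128, device=device)

output = model(batch)
print(f"Ran on {device}, output shape: {tuple(output.shape)}")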


16 minutes ago, adarw said:

a sensor is broken

I doubt that. There's hardly a sensor on that card that would make it throttle to those speeds. That card has 100% been messed with (a botched shunt mod attempt is what I have in mind) or the GPU core itself is damaged.


3 minutes ago, Electronics Wizardy said:

I'd go for the cloud GPU option. What's the disadvantage to that option?

That Tesla seems to be Fermi, and those are well out of support now; most programs won't work with them anymore.

The 1060 seems a bit too limited.

Do they suggest GPUs? If you don't care about speed, you can normally do it all on the CPU.

I guess the cloud thing might be best. Would a 1060 with a normal clock be OK to use as well? I'm going to buy one for myself soon.

 

2 minutes ago, Levent said:

I doubt that. There's hardly a sensor on that card that would make it throttle to those speeds. That card has 100% been messed with (a botched shunt mod attempt is what I have in mind) or the GPU core itself is damaged.

Oh lol, I just saw the listing, and I guess he did try to mod it: https://www.kijiji.ca/v-desktop-computers/ottawa/defective-pny-gtx-1060-6gb-gpu-accepting-offers/1579469063


5 minutes ago, adarw said:

I guess the cloud thing might be best. Would a 1060 with a normal clock be OK to use as well? I'm going to buy one for myself soon.

 

Really depends on the use case, but most programs should at least work.


36 minutes ago, adarw said:

I guess the cloud thing might be best. Would a 1060 with a normal clock be OK to use as well? I'm going to buy one for myself soon.

A GTX 1060 should be "okay", but as said before it depends on the usage. The most important aspect is the VRAM: a lot of midrange GPUs, even from that era, are fast enough for machine learning, but if the model doesn't fit in VRAM then the card is useless, and there's no workaround for that unless you train on the CPU, which is much slower.
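If you want to see what you're actually working with before committing to a model size, the framework can tell you. A rough PyTorch sketch (assuming PyTorch; other frameworks have the same kind of query):

import torch

if torch.cuda.is_available():
    # Weights, gradients, optimizer state and activations all have to fit in this.
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB of VRAM")
else:
    print("No CUDA GPU visible; training would fall back to the much slower CPU.")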

 

Don't use any of the AMD cards, as their support for machine learning is severely lacking.

 

A GTX 1070 with its 8GB of VRAM is also a decent choice, if you can find one at a decent price.


5 minutes ago, .Apex. said:

A GTX 1060 should be "okay", but as said before it depends on the usage. The most important aspect is the VRAM: a lot of midrange GPUs, even from that era, are fast enough for machine learning, but if the model doesn't fit in VRAM then the card is useless, and there's no workaround for that unless you train on the CPU, which is much slower.

Don't use any of the AMD cards, as their support for machine learning is severely lacking.

A GTX 1070 with its 8GB of VRAM is also a decent choice, if you can find one at a decent price.

I've found a 1080 for 400 CAD. Is that reasonable?


1 hour ago, adarw said:

I've found a 1080 for 400 CAD. Is that reasonable?

Hmm, it would've been 2 years ago, but considering the GPU shortages I think that sounds alright?

 

To put it in perspective, the GTX 1080 is around the performance of a regular RTX 2060, which costs around the same when new.


9 hours ago, adarw said:

My sis is currently doing her PhD. She has the option of Google Cloud machine learning or getting a GPU (she would probably have to pay for the GPU). I found a Tesla M2075 6GB for 50 bucks (you can barely find anything about it on the internet), and I also found a 1060 6GB, but it runs at a clock of 139MHz (a sensor is broken). What do y'all suggest?

Why not both? Use the 1060 for small, local tests and then the cloud ML (is it an actual compute instance, or just one of their managed services?) for heavy processing.

 

That Tesla isn't that useful.

 

9 hours ago, adarw said:

I guess the cloud thing might be best. Would a 1060 with a normal clock be OK to use as well? I'm going to buy one for myself soon.

Even a 1050 would be okay for small-scale stuff, and already around 10x faster than using a CPU.

 

8 hours ago, .Apex. said:

but if the model doesn't fit in VRAM then the card is useless, and there's no workaround for that unless you train on the CPU, which is much slower

One can always work around that by reducing batch sizes, using fp16 weights and other tricks; since she's doing a PhD, one can assume she can find her way around those limitations.
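For example, a training step with PyTorch's automatic mixed precision and a small batch looks roughly like this (the model, sizes and batch size are just placeholders, and it needs an NVIDIA GPU to actually run):

import torch
from torch.cuda.amp import autocast, GradScaler

# Placeholder model, optimizer and data, just to show the pattern.
model = torch.nn.Linear(512, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = GradScaler()  # keeps the scaled fp16 gradients from underflowing

batch_size = 16  # shrink this further if you still run out of VRAM
inputs = torch.randn(batch_size, 512).cuda()
targets = torch.randint(0, 10, (batch_size,)).cuda()

optimizer.zero_grad()
with autocast():                   # forward pass runs in fp16 where it's safe
    loss = loss_fn(model(inputs), targets)
scaler.scale(loss).backward()      # backward pass on the scaled loss
scaler.step(optimizer)
scaler.update()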

8 hours ago, .Apex. said:

Don't use any of the AMD cards, as their support for machine learning is severely lacking.

By "severely lacking" you mean non-existent, right? 😆

 

6 hours ago, .Apex. said:

To put it in perspective, the GTX 1080 is around the performance of a regular RTX 2060, which costs around the same when new.

Given that the 2060 has tensor cores, if one can make use of half precision the 2060 ends up being almost twice as fast as the 1080.
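That's easy to sanity-check yourself: time a big matmul in fp32 and in fp16 on whatever card you end up with. A rough sketch, not a rigorous benchmark:

import time
import torch

def avg_matmul_time(dtype, n=4096, iters=20):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()      # make sure setup is done before timing
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()      # wait for the GPU to finish the work
    return (time.time() - start) / iters

print(f"fp32: {avg_matmul_time(torch.float32) * 1000:.1f} ms")
print(f"fp16: {avg_matmul_time(torch.float16) * 1000:.1f} ms")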


3 hours ago, igormp said:

Why not both? Use the 1060 for small, local tests and then the cloud ML (is it an actual compute instance, or just one of their managed services?) for heavy processing.

That Tesla isn't that useful.

Even a 1050 would be okay for small-scale stuff, and already around 10x faster than using a CPU.

One can always work around that by reducing batch sizes, using fp16 weights and other tricks; since she's doing a PhD, one can assume she can find her way around those limitations.

By "severely lacking" you mean non-existent, right? 😆

Given that the 2060 has tensor cores, if one can make use of half precision the 2060 ends up being almost twice as fast as the 1080.

That cleared up a lot, thanks 🙂 I guess I will do that. Why does AMD not support machine learning? Don't they have more VRAM?


34 minutes ago, adarw said:

That cleared up a lot, thanks 🙂 I guess I will do that. Why does AMD not support machine learning? Don't they have more VRAM?

They do have more VRAM and nice hardware (although their consumer cards lack an equivalent of tensor cores; only their Instinct cards have that), but their software support is non-existent.

 

All of the big ML frameworks and tools usually rely on CUDA. AMD has "compatibility software" (in reality a fully fledged compute stack) called ROCm, which PyTorch does support officially, but it has no support for Navi GPUs, getting it to run on older Vega and Polaris GPUs is not that straightforward, and performance is really disappointing (a Radeon VII performs about the same as my 2060 Super).
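The one nice part is that if she ever does try a ROCm build of PyTorch, the code itself doesn't change: as far as I know the ROCm wheels reuse the torch.cuda namespace, so the same check works on either vendor (rough sketch):

import torch

# Works on both CUDA and ROCm builds of PyTorch; the ROCm wheels map
# torch.cuda onto HIP, so the API calls are identical.
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # torch.version.hip is None on CUDA builds and set on ROCm builds.
    print("Backend:", "ROCm/HIP" if torch.version.hip else "CUDA")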

 

There's also DirectML on Windows, but doing ML on Windows is a pain and the performance is WORSE than using a CPU.


17 hours ago, igormp said:

One can always work around that by reducing batch sizes, using fp16 weights and other tricks; since she's doing a PhD, one can assume she can find her way around those limitations.

I just meant in the sense that if you run out of VRAM even with those optimizations, there's no way around it other than getting more VRAM. The GTX 10 series also suffers from terrible FP16 performance anyway, but otherwise it's a good point.

