ECC vs. non-ECC RAM for Threadripper 3970X

I'm building a high-end desktop for optical design and CAD work. Ray-trace optical design is very memory intensive, so I need 256GB.

 

My question is: what is the real-world difference between ECC and non-ECC RAM? I have been doing this type of work for a few years on an Intel X99 system and have only encountered a BSOD twice. Comments and suggestions appreciated.

 

Basic build:

MOBO:  ASUS TRX40-XE

CPU:   Threadripper 3970X

RAM:  256GB

GPU:  Quadro RTX 6000

Boot:  Samsung 980 Pro 512GB

Data:  Samsung 980 Pro 1TB

ECC lowers the chance of memory errors and makes undetected memory corruption very unlikely. The problem with non-ECC is that you have no idea whether a memory error has occurred.
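For anyone curious how the correction actually works, here's a toy Hamming(7,4) sketch in Python. Real ECC DIMMs apply the same single-error-correct idea in hardware across each 64-bit word (stored as 72 bits), so this is purely illustrative, not how any particular DIMM is wired:

```python
# Toy Hamming(7,4) single-error correction -- the same idea ECC memory applies
# in hardware across each 64-bit word (stored as 72 bits).

def encode(d):
    """Take 4 data bits, return a 7-bit codeword with 3 parity bits."""
    p1 = d[0] ^ d[1] ^ d[3]          # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def decode(c):
    """Return the 4 data bits, fixing a single flipped bit if present."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4  # 0 = clean, otherwise 1-based error position
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
stored = encode(data)
stored[4] ^= 1                       # simulate a random bit flip in "memory"
assert decode(stored) == data        # the flip was detected and corrected
```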

 

How much extra is ECC? I'd get it if it isn't too much more.

ECC RAM has extra memory chips that store check bits, so corrupted data can be detected and automatically corrected.

 

Also, have you thought about an RTX 3080 or 3090 over the Quadro RTX 6000? Or two?

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 aorus pro

psu: cooler master mwe 650w

case: masterbox mbx520

fans: Noctua industrial 3000rpm x6

 

 

I could be wrong, but ECC is ECC and non-ECC is non-ECC, and never the twain shall meet...
It is your mobo that decides what it will accept, at least it used to be that way. Check the qualified vendor list (QVL) for what types will work. ECC is usually for servers and mainframes doing millions of tiny computations rather than fewer large ones.

It must be true, I read it on the internet...

ECC corrects errors that occur in memory, bit flips and the like. In a normal desktop environment, though, the type of errors ECC protects against doesn't really help a typical application. It's more for servers and server-type workloads (file servers, web servers, database servers, VM servers, etc.).
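If you do run ECC, the platform also counts the corrections, so you can at least see whether flips are actually happening. On Linux that's exposed through the EDAC sysfs counters; a minimal check could look like the sketch below (assuming ECC is enabled and the EDAC driver is loaded; on Windows the equivalent shows up as WHEA events in Event Viewer):

```python
# Print corrected/uncorrected ECC error counts from the Linux EDAC interface.
# Illustrative sketch only; /sys/devices/system/edac/mc is empty or absent
# unless the platform has ECC enabled and an EDAC driver loaded.
from pathlib import Path

for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()  # corrected (single-bit) errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected (multi-bit) errors
    print(f"{mc.name}: corrected={ce}  uncorrected={ue}")
```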

ECC is used in mission-critical fields like servers, medical equipment, and financial transactions. It sounds like you are building a workstation. If you feel you need to guard against a 0.01% chance of memory corruption, then get ECC.

 

But there are lots of other things that will cause BSODs, so don't expect ECC to eliminate them.

 

ECC will be a few percent slower and will cost more.

7 minutes ago, Letgomyleghoe said:

ECC RAM has extra memory chips that store check bits, so corrupted data can be detected and automatically corrected.

 

Also, have you thought about an RTX 3080 or 3090 over the Quadro RTX 6000? Or two?

An RTX 3080 or 3090 is cheaper and more powerful, but some of my work involves SOLIDWORKS, which requires a Quadro for GPU acceleration. In an ideal world it wouldn't matter, so I'm stuck paying a premium for the Quadro. For the cost of the RTX 6000 (and its 4,608 CUDA cores), I could put in dual 3090s and have just under 21,000 CUDA cores!

14 minutes ago, shoutingsteve said:

I could be wrong, but ECC is ECC and non-ECC is non-ECC, and never the twain shall meet...
It is your mobo that decides what it will accept, at least it used to be that way. Check the qualified vendor list (QVL) for what types will work. ECC is usually for servers and mainframes doing millions of tiny computations rather than fewer large ones.

I checked the QVL and no ECC modules are listed (even though the MOBO supports ECC).

14 minutes ago, dilpickle said:

ECC is used in mission-critical fields like servers, medical equipment, and financial transactions. It sounds like you are building a workstation. If you feel you need to guard against a 0.01% chance of memory corruption, then get ECC.

 

But there are lots of other things that will cause BSODs, so don't expect ECC to eliminate them.

 

ECC will be a few percent slower and will cost more.

A work-from-home station, actually. The benefit of working at home is that I can use all the horsepower for gaming, and the system is a business deduction!

3 minutes ago, LED_Guy said:

An RTX 3080 or 3090 is cheaper and more powerful, but some of my work involves SOLIDWORKS, which requires a Quadro for GPU acceleration. In an ideal world it wouldn't matter, so I'm stuck paying a premium for the Quadro. For the cost of the RTX 6000 (and its 4,608 CUDA cores), I could put in dual 3090s and have just under 21,000 CUDA cores!

I'm going to save you tons of money: SOLIDWORKS doesn't require a Quadro for GPU acceleration; THEY RECOMMEND IT.

It must be true, I read it on the internet...

1 minute ago, shoutingsteve said:

I'm going to save you tons of money: SOLIDWORKS doesn't require a Quadro for GPU acceleration; THEY RECOMMEND IT.

SOLIDWORKS + SOLIDWORKS Flow (thermal/CFD simulation). If that doesn't benefit from a Quadro, then I'll gladly throw in a 3090 (or two). Trust me, I'd love to save the money and throw the extra CUDA cores at some games and F@H.

Just now, LED_Guy said:

SOLIDWORKS + SOLIDWORKS Flow (thermal/CFD simulation). If that doesn't benefit from a Quadro, then I'll gladly throw in a 3090 (or two). Trust me, I'd love to save the money and throw the extra CUDA cores at some games and F@H.

The main thing is OpenGL. But it does look like the stopping point is the program itself saying "nope, you have a non-commercial GPU... screw you" rather than actual performance. I use Maya and Vectorworks a lot, and they have both gone through the same "certification check" in their pasts. The CPU runs harder, but it makes it through even without the GPU acceleration.

It must be true, I read it on the internet...

2 minutes ago, shoutingsteve said:

The main thing is OpenGL. But it does look like the stopping point is the program itself saying "nope, you have a non-commercial GPU... screw you" rather than actual performance. I use Maya and Vectorworks a lot, and they have both gone through the same "certification check" in their pasts. The CPU runs harder, but it makes it through even without the GPU acceleration.

Well, I will have a few more cores/threads to throw at any workload.  My current system is based on an i7-5960X.  I fear I will be terribly spoiled and have trouble using my laptop which has a mere 6 cores . . .

1 minute ago, LED_Guy said:

Well, I will have a few more cores/threads to throw at any workload.  My current system is based on an i7-5960X.  I fear I will be terribly spoiled and have trouble using my laptop which has a mere 6 cores . . .

SOLIDWORKS is a single-core workhorse. It might run a few threads, but the majority of the rendering is done on a single core, since it runs in a linear fashion: A leads to B, which leads to C. Unless they have totally redesigned it since I last used it (which was 2014, so I could be outdated here).

It must be true, I read it on the internet...

32 minutes ago, shoutingsteve said:

SOLIDWORKS is a single-core workhorse. It might run a few threads, but the majority of the rendering is done on a single core, since it runs in a linear fashion: A leads to B, which leads to C. Unless they have totally redesigned it since I last used it (which was 2014, so I could be outdated here).

SW has been updated. There is a distinct difference in rendering times between my laptop and desktop that does not correlate with single-core clock speed.

Can you tolerate a memory error, even if the chance is only something like 0.00001%?

If not, get ECC, with its caveats: generally lower speed and higher latency (which come from the error-checking mechanism).

4 hours ago, LED_Guy said:

SOLIDWORKS + SOLIDWORKS Flow (thermal/CFD simulation). If that doesn't benefit from a Quadro, then I'll gladly throw in a 3090 (or two). Trust me, I'd love to save the money and throw the extra CUDA cores at some games and F@H.

If the work doesn't require a Quadro, then go with dual 3090s. ~21k CUDA cores is no joke.

Can you wait until next year, maybe mid-2021, and go with TR5000 instead?

Ryzen 5700g @ 4.4ghz all cores | Asrock B550M Steel Legend | 3060 | 2x 16gb Micron E 2666 @ 4200mhz cl16 | 500gb WD SN750 | 12 TB HDD | Deepcool Gammax 400 w/ 2 delta 4000rpm push pull | Antec Neo Eco Zen 500w

19 hours ago, LED_Guy said:

SW has been updated. There is a distinct difference in rendering times between my laptop and desktop that does not correlate with single-core clock speed.

So, the official update today: SW is mainly single-threaded. If you render a part and want to rotate/zoom/translate, then the rendering uses additional threads OR GPU acceleration. I'm not rendering videos (only static images), so that's not enough of a reason to spend the $ on a Quadro.

 

I also found out that SW Flow isn't coded for GPU acceleration.  Good thing I will have a few cores to spare . . .
