Mersenne Trial Factoring

Bronzeman

Does anyone here run GPU TF for PrimeNet?

 

Prime95 is popular with overclockers, but I was wondering whether anyone here pursues Mersenne primes themselves, in particular using a GPU for trial factoring.

 

I was hoping for feedback from people running different setups, in particular using a command-line-only OS as opposed to one with a GUI, and the effect of pairing different components with the same GPU.

 

I've been using Mfaktc; all of its operations are GPU-based, so the CPU and RAM configuration seem to have little effect on it.
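
In case anyone wonders what the trial-factoring step actually does, here's a naive sketch of the idea (my own illustration, not Mfaktc's code, and the function name is made up): any factor q of 2^p - 1 must have the form q = 2kp + 1 and be congruent to 1 or 7 mod 8, and q divides 2^p - 1 exactly when 2^p mod q = 1, so you just walk through those candidates.

# Naive sketch of Mersenne trial factoring (illustration only, not Mfaktc's code).
# Any factor q of M_p = 2^p - 1 must be of the form q = 2*k*p + 1
# and satisfy q % 8 in (1, 7), which is why so few candidates need testing.
# Mfaktc also sieves out candidates with small prime factors first; skipped here.

def trial_factor(p, bit_limit):
    """Search for a factor of 2^p - 1 below 2**bit_limit."""
    k = 1
    while True:
        q = 2 * k * p + 1
        if q >= 2 ** bit_limit:
            return None                 # no factor found up to this bit level
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q                    # q divides 2^p - 1
        k += 1

# Example: 2^29 - 1 has the small factor 233 (k = 4).
print(trial_factor(29, 32))  # -> 233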

 

Though an SSD in AHCI mode seemed to give a small increase of about 14 GHz-days/day. I could be wrong, as I "guessed" this after moving to a system with double the specs apart from the storage, which was an HDD; that change caused a drop of 14 GHz-days/day despite all the software (OS and installed programs) being the same.

 

Can anyone recommend a setup to maximize GPU efficiency?


I do in BOINC, but it handles all the configuring, and I haven't tried other setups. I have to say I know more about the primes than about what my PC is doing. I know there are faster ways of getting primes, but I'm not sure which they use. I would guess whatever puts less extra load on the GPU would result in faster calculations, but with the pace of technology we can just wait a few years and get even bigger primes faster using the same algorithm.

For BOINC I use PrimeGrid.


I'll have a peek at BOINC; I vaguely recognise the name. I believe Prime95 is one of, if not the, fastest programs for finding Mersenne primes on an Intel CPU.

 

Although with Mfaktc you can't find Mersenne primes, you can only rule out candidates that aren't prime, lol. It seems sound logic that reducing junk load to leave more for the crunching would be wise.

 

Mfaktc uses CUDA, and every time it's been updated to take advantage of CUDA improvements the performance has really jumped. I dare say the latest GPUs are waiting on the software to catch up.



That does seem like a fast way, because half the numbers are already factored by 2, leaving only odd numbers to check. There may be a way to check those with sums, but I'm not sure.

Yeah, with the size the numbers are getting, binary has a lot of overhead working with every single bit, though quantum is everyone's wet dream. As you say, using non-prime characteristics to eliminate candidates seems to be an effective way to trim the number of exponents that need a full Lucas-Lehmer test.
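
Out of interest, the Lucas-Lehmer test itself is tiny to write down; all the cost is in squaring numbers millions of digits long, which is why Prime95 leans on FFT multiplication and why memory speed matters. A naive sketch of the idea (my own illustration, nothing like Prime95's actual code):

# Naive Lucas-Lehmer test (illustration only; Prime95 does the squaring with FFTs).
# For odd prime p, M_p = 2^p - 1 is prime iff s_(p-2) == 0,
# where s_0 = 4 and s_i = s_(i-1)^2 - 2 (mod M_p).

def lucas_lehmer(p):
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19]  (M_11 = 2047 = 23 * 89 is composite)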

 

GPU to 72 seems to be a big group now, dedicating themselves to trial factoring all exponents up to the 72nd bit (funnily enough).
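
To put the bit levels in perspective, here's a quick back-of-the-envelope count (my own illustration; the exponent is a made-up value, purely for scale). Candidate factors of 2^p - 1 have the form 2kp + 1, so each extra bit level roughly doubles the number of candidates:

# Rough count of raw factor candidates q = 2*k*p + 1 below each bit level.
# The exponent here is a made-up value around the 60M range, for scale only.

p = 60_000_000
for bits in (70, 71, 72, 73):
    k_max = (2 ** bits) // (2 * p)
    print(f"below 2^{bits}: roughly {k_max:.2e} candidate k values")

# Each additional bit level doubles the candidate count, so factoring the
# range from 2^71 to 2^72 costs about as much as everything below 2^71.

Mfaktc prunes most of those with the mod-8 rule and a small-prime sieve before doing any powmod work, but the doubling per bit level still holds.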

 

Memory access is apparently a major factor for LL testing, which seems sound. I wonder if there's a way to view the latency of GPU memory? I believe GPU memory is geared towards moving large blocks of data and thus runs at a higher clock speed, but I'm not sure about its other characteristics and could be wrong.



GPU-Z will give you a readout of what the GPU is doing; you could probably do some math on that to get the numbers you're looking for, averages anyway. Besides encryption, I'm not sure why we're after such big primes in a real-world sense. I would think we have a decent idea of their distribution by now. I'm not even sure how to frame primes in a quantum setting or how they would go about it, though I do hear it's the wet dream for math too. I'm not sure it will help define the prime set. As for memory, I have no clue; I would have thought they could change base so a bigger number is represented with fewer digits, since a prime is prime in any number system. This is a field I clearly know little about.
