Need a benchmark from a rig with dual Nvidia 1080 or better

Hi, 

 

Apologies for asking for favours in a first post! 9_9

 

I am looking for someone with a dual-GPU setup who would be willing to run a benchmark for me. I am developing some software that may benefit from dual GPUs, as I have maxed out my single 1080 with it. Before I drop a large amount of money on a new system, I need to know whether a second card will actually bring any benefit. My research so far has turned up a resounding maybe...

 

Ideally I need a system that is as good as or better than:

  • Broadwell-E i7-6850K
  • 2x Nvidia GTX 1080
  • Each GPU running at PCIe x16

 

I would also be interested in any setup that has 2x Nvidia GTX 1080 or better, even if the cards are running in x8 mode.

 

The benchmark would involve me sending you a piece of software to run. It will display an FPS counter. All I need is that number and your system specs.

 

Please let me know if you would be able to do this.

 

Many thanks.

 

 

3 minutes ago, egomotion said:

I am looking for someone with a dual-GPU setup who would be willing to run a benchmark for me. [...]

You could just google benchmarks for whatever program you are using. I doubt anyone is just going to open some random file you send them on their extremely high-end PC.


@Unexas. As it is custom software I have developed, there is no benchmark to google, and I have not found benchmarks for anything similar. Hence my request.

 

I also appreciate that people won't want to run random code, so I would be happy to provide whatever assurances people need.


4 minutes ago, egomotion said:

@Unexas. As it is custom software I have developed, there is no benchmark to google, and I have not found benchmarks for anything similar. Hence my request.

 

Then it's also not gonna support SLI if you haven't programmed it in yourself ;)


4 minutes ago, manikyath said:

Then it's also not gonna support SLI if you haven't programmed it in yourself ;)

That's not remotely how multi-GPU computation works.


Just now, othertomperson said:

That's not remotely how multi-GPU computation works.

Well... compute for multi-GPU doesn't work that way, no. That's pretty much 100% down to the software itself talking to multiple GPUs, and I'm sure OP knows how well that's implemented in his own software ;)
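
For illustration, a rough CUDA sketch of what "the software talking to multiple GPUs" means (the kernel and buffer sizes are made-up placeholders, nothing from OP's program): the host enumerates the devices and explicitly hands each one its own slice of work; SLI never enters into it.

#include <cstdio>
#include <cuda_runtime.h>

// Placeholder workload; a real program would launch its actual kernels here.
__global__ void doWork(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Found %d CUDA device(s)\n", deviceCount);

    const int n = 1 << 20;
    float* d_buf[16] = {};                       // assume at most 16 GPUs for the sketch

    // Kernel launches are asynchronous, so both GPUs end up working concurrently.
    for (int dev = 0; dev < deviceCount && dev < 16; ++dev) {
        cudaSetDevice(dev);                      // subsequent calls target this GPU
        cudaMalloc(&d_buf[dev], n * sizeof(float));
        doWork<<<(n + 255) / 256, 256>>>(d_buf[dev], n);
    }

    // Wait for every device to finish, then clean up.
    for (int dev = 0; dev < deviceCount && dev < 16; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(d_buf[dev]);
    }
    return 0;
}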


@manikyath @othertomperson Like I said, my research has shown a maybe. Under SLI the GPUs get treated as a single large GPU, but there are gotchas that can destroy the benefit. From the Nvidia whitepapers, I should get a performance gain using AFR (alternate frame rendering) as long as no frame's output is reused by a later frame (e.g. in temporal effects or image-processing effects). I can only test the AFR gain on a dual-GPU machine. That is one bottleneck I have: too much to render each frame. A faster card helps, but a limit will be reached there too.
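
In compute terms, the AFR pattern I want to test looks roughly like this sketch (frameKernel is just a stand-in for my real per-frame work, not my actual renderer); the comment marks exactly where frame reuse would force the inter-GPU copy that eats the gain:

#include <cuda_runtime.h>

// Stand-in for the real per-frame rendering/compute work.
__global__ void frameKernel(float* out, int n, int frame) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = (float)frame;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float* d_out[2] = {};

    for (int dev = 0; dev < 2; ++dev) {          // assumes two CUDA devices present
        cudaSetDevice(dev);
        cudaMalloc(&d_out[dev], bytes);
    }

    for (int frame = 0; frame < 100; ++frame) {
        int dev = frame % 2;                     // AFR: even frames -> GPU 0, odd -> GPU 1
        cudaSetDevice(dev);
        // If frame N read frame N-1's output (a temporal effect), we would
        // first need something like:
        //   cudaMemcpyPeer(d_out[dev], dev, d_out[1 - dev], 1 - dev, bytes);
        // and that inter-GPU transfer is the gotcha that erodes the AFR gain.
        frameKernel<<<(n + 255) / 256, 256>>>(d_out[dev], n, frame);
    }

    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(d_out[dev]);
    }
    return 0;
}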

 

The main bottleneck I have is getting data back off the card. I think having two cards each running at x16 should double that readback bandwidth, but documentation and evidence are lacking here. Pulling data from the card may also utterly destroy the SLI benefits. Again, I need to test this, as there is little evidence for the scenario I have.
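
For reference, the readback number I care about can be measured in a few lines of CUDA. A rough sketch (the 256 MiB buffer size is arbitrary) that times a device-to-host copy with CUDA events and prints the effective PCIe bandwidth:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256u << 20;             // arbitrary 256 MiB test buffer
    float *d_buf = nullptr, *h_buf = nullptr;
    cudaMalloc(&d_buf, bytes);
    cudaMallocHost(&h_buf, bytes);               // pinned host memory for full PCIe speed

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Device-to-host readback: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}

Pinned (page-locked) host memory matters here; with ordinary pageable memory the measured rate is typically far lower.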
