About FurleyBustard

  1. A chart showing the average fps should show an average of *all* of the fps measurements taken throughout a given run. For example, if you are doing a 2-minute run and your measurement tool samples the fps every second, then you'll have a total of 120 measurements taken during the run. An "average" should be the average of *all* 120 of those measurements. For consistency, it's best to repeat the run about 3 times, then average those three averages together. I might be misunderstanding your question, though...
  2. I imagine they'll re-run at least some of the benchmarks using the new drivers, but given how quickly things are being updated right now (bugs, optimizations, etc.) I wouldn't blame them if they waited for things to stabilize a bit before redoing the entire benchmark. I'd guess we'll also see some Crossfire/SLI results eventually, but those *have* to wait for the drivers to get updated, since multi-GPU setups often don't work well at all until custom driver profiles have been created for a given game.
  3. That's exactly right. If a person trying to decide on a video card sees a benchmark that says a 280X performs fine and then goes out and buys one, there is no guarantee that their own 280X will actually be as fast as the one in the benchmark. Because of that discrepancy, it's just not a good idea to do benchmarks using custom overclocks *unless* you also include results at factory clock rates for completeness. As for overclocking in general, I agree: I like seeing it in a dedicated *overclocking* test to see just how high they can push a card, but not in benchmarks (unless stock clocks are also included).
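The averaging described in point 1 can be sketched in a few lines. This is just an illustration with made-up numbers, not any particular tool's output; the function names and sample values are my own.

```python
def run_average(samples):
    """Average of all fps samples taken during a single run."""
    return sum(samples) / len(samples)

def benchmark_average(runs):
    """Average the per-run averages across repeated runs for consistency."""
    return sum(run_average(run) for run in runs) / len(runs)

# Three hypothetical 2-minute runs, one fps sample per second (120 samples each).
# Values are invented purely for illustration.
runs = [
    [58.0] * 120,
    [60.0] * 120,
    [62.0] * 120,
]

print(benchmark_average(runs))  # prints 60.0
```

The key point is that each run's average uses *every* sample from that run, and the final figure averages the three run averages rather than cherry-picking any single run.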