AMD FX-8370 vs Intel i7-5960X [GTX 970 SLI 4K benchmarks]

Jesus, what's not to get? The writer of the article answers this on page 5: "Do me a favor, run a game of your choice and record the FPS in FRAPS, making sure 'min max avg' and 'frame time variance' are checked. Once done, open the frame time variance file in FRAFS (google it) and look at the results. You'll notice that the 1% and 0.1% results are measured in milliseconds and FPS (both are the exact same values, just displayed differently); this is not at all the minimum or maximum FPS recorded."

 

Lower is better for the 0.1% and 1% results... the 5960X is faster, what's the fuss about?


In milliseconds lower is better; in FPS higher is still better. It is calculating the number of frames per second that would be displayed at that frame timing. If the bottom 1% is 10 ms, that's 100 FPS; if it's 50 ms, that's 20 FPS.
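As a quick illustration of that conversion (hypothetical numbers; the only assumption is the definitional 1000 ms in a second), a minimal Python sketch:

# Convert a frame time in milliseconds to the equivalent frames per second.
# Hypothetical values matching the example above: 10 ms -> 100 FPS, 50 ms -> 20 FPS.
def frame_time_ms_to_fps(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

for ms in (10.0, 50.0):
    print(f"{ms:.0f} ms per frame = {frame_time_ms_to_fps(ms):.0f} FPS")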


Ensure a job for life: https://github.com/Droogans/unmaintainable-code

Actual comment I found in legacy code: // WARNING! SQL injection here!


In milliseconds lower is better; in FPS higher is still better. It is calculating the number of frames per second that would be displayed at that frame timing. If the bottom 1% is 10 ms, that's 100 FPS; if it's 50 ms, that's 20 FPS.

No. Less variance is better, no matter whether it's FPS or milliseconds. Less variance results in less stutter and a smoother experience. Exactly the opposite of what the first post and the linked article say. Why someone decided to try to write a review without even knowing what the numbers mean is beyond me.

 

EDIT: This is important. I think a lot of people don't understand that the % numbers are VARIANCE.


No. Less variance is better, no matter whether it's FPS or milliseconds. Less variance results in less stutter and a smoother experience. Exactly the opposite of what the first post and the linked article say. Why someone decided to try to write a review without even knowing what the numbers mean is beyond me.

 

EDIT: This is important. I think a lot of people don't understand that the % numbers are VARIANCE.

 

I don't think you understand. There are multiple ways to calculate frame times; the way we do it is by averaging the lowest 99 percent (1% low) and 99.9 percent (0.1% low) of frames. The results are displayed in FPS or milliseconds but are interchangeable, as they mean exactly the same thing. We use FPS as it fits better in the graphs. Lower FPS (higher frame time) is bad; anything below 30 FPS is definitely noticeable stutter (or micro-stutter), and the lower down you go the more noticeable it is. Some users may be more sensitive to this and even notice it at higher numbers. You also generally don't want to see too much of a gap between these numbers (44 and 40, for instance, is relatively good, depending on your average FPS, while 50 and 22 is not).
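For anyone who wants to see roughly what such a calculation looks like, here is a minimal Python sketch. It assumes the 1% / 0.1% low is the average of the slowest 1% / 0.1% of frame times converted to FPS, which is one common approach; the exact method used in the review isn't spelled out here, so treat this as illustrative only.

# Hedged sketch of a "1% low" / "0.1% low" calculation, assuming the metric is
# the average of the slowest N% of frame times, expressed as FPS.
def percent_low_fps(frame_times_ms, percent):
    worst = sorted(frame_times_ms, reverse=True)        # slowest frames first
    count = max(1, int(len(worst) * percent / 100.0))   # size of the slice to average
    avg_ms = sum(worst[:count]) / count                 # average frame time of that slice
    return 1000.0 / avg_ms                              # convert to FPS

# Hypothetical capture: mostly ~16.7 ms frames (~60 FPS) with a few slow spikes.
frames = [16.7] * 990 + [33.3] * 9 + [100.0]
print(f"1% low:   {percent_low_fps(frames, 1.0):.1f} FPS")   # ~25 FPS
print(f"0.1% low: {percent_low_fps(frames, 0.1):.1f} FPS")   # 10 FPS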


No. Less variance is better, no matter whether it's FPS or milliseconds. Less variance results in less stutter and a smoother experience. Exactly the opposite of what the first post and the linked article say. Why someone decided to try to write a review without even knowing what the numbers mean is beyond me.

 

EDIT: This is important. I think a lot of people don't understand that the % numbers are VARIANCE.

 

 

EDIT: What Donny said.


This thread is golden in a sadistic sort of way...

 

I'm genuinely surprised it hasn't been locked DAYS ago.

Case: Corsair 4000D Airflow; Motherboard: MSI Z490 Gaming Edge; CPU: i7 10700K @ 5.1GHz; Cooler: Noctua NHD15S Chromax; RAM: Corsair LPX DDR4 32GB 3200MHz; Graphics Card: Asus RTX 3080 TUF; Power: EVGA SuperNova 750G2; Storage: 2 x Seagate Barracuda 1TB; Crucial M500 240GB & MX100 512GB; Keyboard: Logitech G710+; Mouse: Logitech G502; Headphones / Amp: HiFiMan Sundara Mayflower Objective 2; Monitor: Asus VG27AQ


Those two CPUs in a head-to-head "free to play" game suite for testing, at normal resolutions.

 

Many thousands of people try these F2P games and end up sticking with one or another over time... I myself am guilty of playing 2-3 F2P titles at random nightly during my week.

So how does that go? Pretty badly for FX: still playable, but far from the optimal performance it should have. I've tested many games myself at 1080p and 1440p, and single-core performance is key with lighter titles.

Maximums - Asus Z97-K /w i5 4690 Bclk @106.9Mhz * x39 = 4.17Ghz, 8GB of 2600Mhz DDR3,.. Gigabyte GTX970 G1-Gaming @ 1550Mhz

 


Yeah, but the chip itself may not be what caused the lag in frame rendering. It could have been the memory controller, it could have been a call on system RAM, it could have been a hiccup in the driver on that architecture, it could have been the game itself. Patrick, you yourself have made mention of how horrendous game programmers are. Is there no coding situation that could lead to a more beneficial outcome on an AMD chip than an Intel one? YOU couldn't write a bit of code that would perform better specifically in an AMD CMT situation than an Intel SMT one?

This is one more interesting anecdotal blip to add to the myriad. And it gives us a SUPER interesting follow-up to study: WHY did the Intel chip have those frame time issues?

8 cores to 8 cores, SMT and CMT are equally powerful design principles up until the point you don't have the front-end support for CMT. AMD's core designs are weaker; Intel's are far stronger. Intel's multi-core scaling is also much higher. While I could rig up something given enough time, it would take a fair bit of research and trial and error.

This result stinks, and this experiment needs replication by neutral 3rd parties.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Lol @ everyone freaking out, claiming this is BS. It's not. It's simply an extremely GPU-bound situation.

 

Run that hardware combination at 1080p and you will see more of a difference (the 5960x pulling ahead). ;)

 

"Everyone freaking out, claiming this is BS" are only saying the exact same thing you just said. Put the ego down for a second and pay attention.


I don't think you understand. There are multiple ways to calculate frame times; the way we do it is by averaging the lowest 99 percent (1% low) and 99.9 percent (0.1% low) of frames. The results are displayed in FPS or milliseconds but are interchangeable, as they mean exactly the same thing. We use FPS as it fits better in the graphs. Lower FPS (higher frame time) is bad; anything below 30 FPS is definitely noticeable stutter (or micro-stutter), and the lower down you go the more noticeable it is. Some users may be more sensitive to this and even notice it at higher numbers. You also generally don't want to see too much of a gap between these numbers (44 and 40, for instance, is relatively good, depending on your average FPS, while 50 and 22 is not).

 

 

EDIT: What Donny said.

 

No. I know what a frame time is. You and most people in this thread are not understanding. Those numbers are not frame times. They are variance, as in the difference between a norm and an outlier. You can't switch between frame time variance and FPS variance the same way you switch between a frame time and FPS.

 

The maker of that graph made a mistake by putting FPS variance on a bar beside FPS as if they were comparable values. He likely thought they were average frame times.

 

 

http://www.tomshardware.com/reviews/gaming-processor-frame-rate-performance,3427-2.html

 

"In addition, raw frame times aren't the end-all in performance analysis because high frame rates have low corresponding frame times and low frame rates have high frame times. What we're trying to find is the variance, the amount of time that anomalous frames stray from the ideal norm."

 

They Don’t Think It Be Like It Is, But It Do


"Everyone freaking out, claiming this is BS" are only saying the exact same thing you just said. Put the ego down for a second and pay attention.

 

Lol, ego? Wat? 

 

And no, a lot of people were freaking out because of the "AMD could NEVER match Intel, regardless of the situation" mentality. 

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


-snip-

In every frame time variance graph I have ever seen, whether it be in ms or FPS, the lower the variance the better. As in, a 10 ms variance is better than 20, and 20 is better than 50. The ms variance converted to FPS, so far as I have seen, is taken by dividing 1000 by the ms figure to get an FPS value, with lower FPS indicating higher ms variance. What the graph shows, on an FPS scale, is that at those percentiles the variance went from the mid-teens measured in ms up to as high as 50 or 60 ms. NOT a good spread.

 

Although I could be wrong, I don't mess with FRAPS/FRAFS and have not dealt with its FTV readings.


Lol, ego? Wat? 

 

And no, a lot of people were freaking out because of the "AMD could NEVER match Intel, regardless of the situation" mentality. 

 

AMD can never match a 5960X with any CPU they have so far released. Underhand tactics are needed to make them appear equal, such as you yourself (and everyone else in this thread) have exhaustively explained.

 

And yes, to be patronising and condescending to a bunch of people in this thread so far before repeating the exact same thing you are being disdainful about, yeah that's pretty egotistical and self absorbed.


AMD can never match a 5960X with any CPU they have so far released. Underhand tactics are needed to make them appear equal, such as you yourself (and everyone else in this thread) have exhaustively explained.

 

And yes, to be patronising and condescending to a bunch of people in this thread so far before repeating the exact same thing you are being disdainful about, yeah that's pretty egotistical and self absorbed.

 

I see you missed the part where I said "regardless of the situation". ;) I'm sure you know what "GPU-bound" means. As you said, many others have already explained this. 

 

I'm simply trying to tell it like it is, that's all. I'm sorry if you saw that as being patronizing or condescending. It's just that I see this all the time: people all saying the same thing based on initial reaction rather than first analyzing the facts of the situation.



I see you missed the part where I said "regardless of the situation". ;) I'm sure you know what "GPU-bound" means. As you said, many others have already explained this. 

 

I'm simply trying to tell it like it is, that's all. I'm sorry if you saw that as being patronizing or condescending. It's just that I see this all the time: people all saying the same thing based on initial reaction rather than first analyzing the facts of the situation.

 

I saw where you said "regardless of the situation". I don't consider a situation in which the CPU is not actually being used due to GPU limitations any more relevant than the situation of the CPU not being used due to being locked in a drawer, or dropped in the ocean. A CPU is not magically better just because something else in the machine is worse.


4K is very GPU-bound.

 

You can make a Pentium dual-core get the same results as an i7 at that resolution.

 

 

That doesn't explain the low minimums on the 5960x

Stuff:  i7 7700k @ (dat nibba succ) | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


snip!

HAHAHA WOW, amazing!!! Now I need an FX-8370 to replace that POS of an i7-4770K in there... MY GOD!

| CPU: Core i7-8700K @ 4.89ghz - 1.21v  Motherboard: Asus ROG STRIX Z370-E GAMING  CPU Cooler: Corsair H100i V2 |
| GPU: MSI RTX 3080Ti Ventus 3X OC  RAM: 32GB T-Force Delta RGB 3066mhz |
| Displays: Acer Predator XB270HU 1440p Gsync 144hz IPS Gaming monitor | Oculus Quest 2 VR


snip

 

snip!

 

snip!

You guys don't really think any of these numbers make sense, do you?!



 

 

You guys don't really think any of these numbers make sense, do you?!

I don't....and I am an FX-8350 user FFS.

I love the chip but I also know its strengths and limitations. There is no way on Earth a 5960X would get beaten by an FX (8350, 8370, 9590) in anything... other than price.

MARS_PROJECT V2 --- RYZEN RIG

Spoiler

 CPU: R5 1600 @3.7GHz 1.27V | Cooler: Corsair H80i Stock Fans@900RPM | Motherboard: Gigabyte AB350 Gaming 3 | RAM: 8GB DDR4 2933MHz(Vengeance LPX) | GPU: MSI Radeon R9 380 Gaming 4G | Sound Card: Creative SB Z | HDD: 500GB WD Green + 1TB WD Blue | SSD: Samsung 860EVO 250GB  + AMD R3 120GB | PSU: Super Flower Leadex Gold 750W 80+Gold(fully modular) | Case: NZXT  H440 2015   | Display: Dell P2314H | Keyboard: Redragon Yama | Mouse: Logitech G Pro | Headphones: Sennheiser HD-569

 


That doesn't explain the low minimums on the 5960x

They aren't lower minimums, they are lower variances, which are better. The 5960X is actually winning in that graph; the OP and the author just said the wrong thing because they misunderstood the numbers.

 

In every frame time variance graph I have ever seen, whether it be in ms or FPS, the lower the variance the better. As in, a 10 ms variance is better than 20, and 20 is better than 50. The ms variance converted to FPS, so far as I have seen, is taken by dividing 1000 by the ms figure to get an FPS value, with lower FPS indicating higher ms variance. What the graph shows, on an FPS scale, is that at those percentiles the variance went from the mid-teens measured in ms up to as high as 50 or 60 ms. NOT a good spread.

 

Although I could be wrong, I don't mess with FRAPS/FRAFS and have not dealt with its FTV readings.

 

That isn't a thing, because ms of variance doesn't relate to a unit. You can't go from milliseconds per nothing to frames per second; you would end up with "nothings per second".

 

Variance is variance.


I saw where you said "regardless of the situation". I don't consider a situation in which the CPU is not actually being used due to GPU limitations any more relevant than the situation of the CPU not being used due to being locked in a drawer, or dropped in the ocean. A CPU is not magically better just because something else in the machine is worse.

See below.

 

I don't....and I am an FX-8350 user FFS.

I love the chip but I also know its strengths and limitations. There is no way on Earth a 5960X would get beaten by an FX(8350, 8370, 9590) in anything...other than price.

Here's the thing... This is a heavily GPU-bound situation, not a CPU-bound situation. There is some explanation for why the results show what they show, but it's not an impossibility simply because the 5960x is supposedly "the be-all and end-all of CPUs". It could be some system or software-related factor, leading to these results. We don't know. When the workload emphasis is not on the CPU, other factors can come into play. But it doesn't necessarily mean "this CPU beats that CPU." It's not so black and white. 

 

PCPer did a really interesting comparison using a wide array of CPUs and GPUs. If you have some time, it's a good read:

http://www.pcper.com/reviews/Systems/Quad-Core-Gaming-Roundup-How-Much-CPU-Do-You-Really-Need



They aren't lower minimums, they are lower variances, which are better. The 5960X is actually winning in that graph; the OP and the author just said the wrong thing because they misunderstood the numbers.

 

 

That isn't a thing, because ms of variance doesn't relate to a unit. You can't go from milliseconds per nothing to frames per second; you would end up with "nothings per second".

 

Variance is variance.

Variance is in milliseconds; there are, BY DEFINITION, 1000 milliseconds per second, so the variance of frame timings, in milliseconds, can be converted to FPS. As I said, I have not messed with FRAPS/FRAFS, so I do not know how they are actually deriving their end results, but lower ms variance equates to HIGHER FPS, and LOWER FPS equates to HIGHER variance.

 

EDIT: I know you CAN display frame time variance in FPS as a reflection of the difference in in-game FPS between the timings, but that seems less useful to me, as different average FPS will have massively different performance impacts from "10 FPS" of variance when calculated that way, whereas FPS variance calculated from the gross ms result will be consistent no matter the averages. A 100 FPS average with a 10 FPS variance is a similar performance hit to a 50 FPS average with a 5 FPS variance, but if you use the gross millisecond variance and convert that to FPS you get a fuller picture of the full variance than one weighted to the average.


Variance is in milliseconds; there are, BY DEFINITION, 1000 milliseconds per second, so the variance of frame timings, in milliseconds, can be converted to FPS. As I said, I have not messed with FRAPS/FRAFS, so I do not know how they are actually deriving their end results, but lower ms variance equates to HIGHER FPS, and LOWER FPS equates to HIGHER variance.

 

EDIT: I know you CAN display frame time variance in FPS as a reflection of the difference in in-game FPS between the timings, but that seems less useful to me, as different average FPS will have massively different performance impacts from "10 FPS" of variance when calculated that way, whereas FPS variance calculated from the gross ms result will be consistent no matter the averages. A 100 FPS average with a 10 FPS variance is a similar performance hit to a 50 FPS average with a 5 FPS variance, but if you use the gross millisecond variance and convert that to FPS you get a fuller picture of the full variance than one weighted to the average.

 

Honestly, google "units of variance". It would take you a minute. This is the first thing on the page:

 

"Because the differences are squared, the units of variance are not the same as the units of the data."

 

Variance is a % difference. You can't switch between them the same way you can with frame times. I never said that you can't have a variance of FPS; it's just that it's not obtained by 1000/(variance of frame time).

 

To help you understand I made some graphs.

 

[Graphs: Scenario 1 frame times and FPS; Scenario 2 frame times and FPS]

 
 
As you can see, the first scenario has greater variance in both the FPS and the frame times, because the variability, the amount that the points change, is larger no matter what units you use.
 
And in the second scenario the variance is lower in both graphs.
 
Can you see that if there is a larger difference between two frame times, those two frames will also have a larger difference in FPS? That is what variance measures. Less variance is better no matter what units you use.
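To make that concrete with numbers instead of graphs, here is a small Python sketch with hypothetical frame times standing in for the two scenarios; "variance" here is just the ordinary statistical variance, so the exact values are illustrative only.

# Two hypothetical runs: whichever one varies more in frame time also varies
# more in per-frame FPS, regardless of which unit you look at.
import statistics

scenario_1 = [10, 30, 12, 40, 15, 35]   # uneven pacing (ms per frame)
scenario_2 = [20, 22, 21, 23, 20, 22]   # even pacing (ms per frame)

for name, times in (("Scenario 1", scenario_1), ("Scenario 2", scenario_2)):
    fps = [1000 / t for t in times]
    print(f"{name}: frame time variance = {statistics.variance(times):.1f} ms^2, "
          f"FPS variance = {statistics.variance(fps):.1f} FPS^2")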

-snip-

 

EDIT: Looking back through the article, especially the 1080p testing, I have to assume the delta is being converted from ms to FPS, because at 1080p the AMD CPU has the lower FPS rating in variance. So either the AMD is trouncing the Intel in the CPU-bound instances, or the metric is being derived in a manner where the higher FPS reading is the lower variance.

 

I'm not sure what you think those are illustrating? Those are straight up what I used as an example in my earlier post.

 

As a ratio of the overall performance, a hard-number FPS or ms variance is going to be most noticeable the larger the portion of the original flow it represents. 10 frames or 10 ms of variance is more noticeable when the original flow it is interfering with is smaller relative to the variance. Whether it's FPS or ms, the same raw variance is a bigger disturbance in the lower flow rate. If it is only the delta in frames from the average flow to the lowest FPS, it's not nearly as big an issue as if it's a conversion of the variance from ms to FPS. If you have a 100 FPS flow with a direct variance of 16 FPS, that is literally a 16% variance in the flow; if the raw figure is 65 ms and you convert that to ~16 FPS, it's a bigger deal than a raw 16 FPS delta. A raw 16 FPS delta means the slowest frame time was still 84 FPS of raw flow. I do not know which method they are using or which conversions they are making; I have to rely on the people who put out the numbers to give me the calculations and methods they used. If it's a raw FPS variance, it's not a massive concern to me. If it's representative of a raw ms variance converted to FPS, it concerns me more.

 

Either way, if it's RAW FPS variance, it's still a matter of ratio more than the raw number, so lower is only "better" in relation to its own derivation, not to other flow rates with their own variances. Is a 16 FPS variance better than a 20 FPS variance? Not if the flow rate with the 16 FPS variance was an average of 32 FPS and the 20 FPS variance was from an averaged flow of 100 FPS. Not to my eye, anyway. But I've always hated massaged numbers, whether it's finance or FPS: normalized deltas, compensated flow rates, ratios vs raw numbers, etc. I'd prefer having the original raw data and the derived outputs together for comparison.
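Expressed as simple arithmetic (hypothetical numbers based on the comparison above), the same raw delta is a very different fraction of the flow:

# Hypothetical comparison: the same-ish raw FPS variance matters more against a
# lower average flow rate, which is the "ratio" point being made above.
cases = [(32.0, 16.0), (100.0, 20.0)]   # (average FPS, raw FPS variance)

for avg_fps, variance_fps in cases:
    share = variance_fps / avg_fps * 100
    print(f"{variance_fps:.0f} FPS variance on a {avg_fps:.0f} FPS average "
          f"= {share:.0f}% of the flow")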

 

One's a descriptive statement, one is normative. Lower variances are lower; better or worse depends on more than that single number. Reducing the variance of a workflow from 20 FPS to 10 FPS is better; for two different workflows, one with 20 and one with 10, there is no normative argument from just that info.

 

So the big question is what metric is being represented. As the number we are given gets smaller when going from the 1% to the 0.1% figure, where variance should be going up, it seems to indicate that they are frames-per-second derivations of increasing ms readings.

