Huge FPS gains on PCIe 3.0 vs 2.0 @ 1080p in rFactor 2

Source:

http://isiforums.net/f/showthread.php/22009-Massive-FPS-gains-in-rf2-using-PCI-e-3-0-x16-with-higher-end-cards!

 

Is this story an Intel plant hoping this will go viral in an effort to force users off Sandy Bridge? Will AMD be forced to 'rush' a chipset supporting PCIe 3.0 to market?!

 

The general consensus up until now has been that there are almost negligible performance gains running GPUs over PCIe 3.0 x16 versus PCIe 3.0 x8 / PCIe 2.0 x16, especially at lower resolutions.

 

TechAde on the rFactor 2 forum, however, has pointed the finger at the PCIe bus as the reason a performance disparity exists when benchmarking the game on identical GPUs.

 

And he appears to be correct?! I've posted MrPix's benchmarking results below:

 

PCIe 2 vs 3..... with desktop and game at 1080p and scaling on Display not GPU...

GPU: EVGA GTX 980 SC OC #1 (1550MHz, Mem 1900MHz, TDP < 88%)

PCIe 2.0 @ x16
Time: 67875ms - Avg: 145.655 - Min: 113 - Max: 176

PCIe 3.0 @ x16
Time: 67627ms - Avg: 187.975 - Min: 153 - Max: 216

so a 29% average increase, 22% max, and the most important ... 35% min increase on a single card!

 

Pasted from:

http://isiforums.net/f/showthread.php/21983-Live-Performance-Benchmarking-Comparison-for-rFactor-2?p=311380&viewfull=1#post311380
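As a sanity check, the quoted percentages can be recomputed straight from MrPix's numbers above (they hold up, though the "max" gain works out to nearer 23%):

```python
# Sanity-check of MrPix's quoted gains; figures copied from the post above.
def pct_gain(before: float, after: float) -> float:
    """Percentage increase going from `before` to `after`."""
    return (after - before) / before * 100

results = {
    "avg": (145.655, 187.975),  # PCIe 2.0 x16 -> PCIe 3.0 x16
    "min": (113, 153),
    "max": (176, 216),
}

for name, (gen2, gen3) in results.items():
    print(f"{name}: {pct_gain(gen2, gen3):.1f}% increase")
```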

 

DrR1pper reports his GTX 970 suffers a 12% performance drop on PCIe 2.0 x16, thanks to his Sandy Bridge processor.

 

But don't take their word for it, run the benchmark for yourselves! Download the free demo; all the settings you need are here:

http://isiforums.net/f/showthread.php/21983-Live-Performance-Benchmarking-Comparison-for-rFactor-2

 

Credit to TechAde, DrR1pper and MrPix on the ISI Forum!

 

Welcome to the forum! Please remember to follow your topics so you're notified when someone answers ;) In my sig you can find links to the CoC and the forum's FAQ; check them out!

 

To be honest, when we're talking about values over 140 fps it doesn't really matter. The PCIe 2.0 bus probably does limit the number of output frames per second, but that limit must be above 120. Basically, the number of frames the card can push out is reduced because it has to wait for the data to come through before it can calculate each frame, but below 120 fps the time it has to wait is very likely lower than the time it takes to calculate a new frame. So don't worry about it, it doesn't really matter. Even PCIe 2.0 x4 is still enough to drive pretty much any card; the performance loss really isn't that high.
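That waiting argument can be sketched as a back-of-envelope frame-rate ceiling. The per-frame transfer size below is a purely hypothetical figure, and the bandwidth values are only approximate usable rates for a x16 link:

```python
# Rough ceiling on frame rate imposed by bus transfers alone,
# assuming (hypothetically) a fixed amount of data is uploaded per frame.
PCIE_BW_GBPS = {"2.0 x16": 8.0, "3.0 x16": 15.75}  # approx. usable bandwidth

def fps_ceiling(bus_gb_per_s: float, mb_per_frame: float) -> float:
    # GB/s * 1000 MB/GB / (MB per frame) = frames per second
    return bus_gb_per_s * 1000 / mb_per_frame

for bus, bw in PCIE_BW_GBPS.items():
    # 50 MB/frame is an illustrative guess, not a measured figure
    print(f"PCIe {bus}: ~{fps_ceiling(bw, 50):.0f} fps ceiling")
```

With a 50 MB/frame assumption, PCIe 2.0 x16 caps out around 160 fps while 3.0 x16 sits above 300, which is consistent with the idea that only very high frame rates would ever notice the bus.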

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


I noticed a complete and total difference when I moved from my FM2 PCIe 2.0 AMD board and APU to an LGA1150 PCIe 3.0 board. Everyone tells me I'm crazy and that it doesn't make a difference, but I did notice one. Literally the only things that changed were the board and CPU; same graphics card, same monitor, same everything else.

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


I'm pretty sure it's his testing methodology or board that's flawed, because NO game @ 1080p/1440p (as of now; Star Citizen might change that) needs more bandwidth than 2.0 @ x8, and there are plenty of benchmarks to back this up. So this is either bad testing or fake.

Mein Führer... I CAN WALK !!


I noticed a complete and total difference when I moved from my FM2 PCIe 2.0 AMD board and APU to an LGA1150 PCIe 3.0 board. Everyone tells me I'm crazy and that it doesn't make a difference, but I did notice one.

That's probably the IPC increase you're noticing, not the bandwidth.



That's probably the IPC increase you're noticing, not the bandwidth.

Hate to sound noobish, but what's IPC? I'm just saying I did notice an extreme difference; I can now play things on high/ultra without severe choppiness, even with the same GPU.

 

I'm pretty sure it's his testing methodology or board that's flawed, because NO game @ 1080p/1440p (as of now; Star Citizen might change that) needs more bandwidth than 2.0 @ x8, and there are plenty of benchmarks to back this up. So this is either bad testing or fake.

Source?


Hate to sound noobish, but what's IPC? I'm just saying I did notice an extreme difference; I can now play things on high/ultra without severe choppiness, even with the same GPU.

 

IPC = Instructions per clock. In general, Intel's CPUs are capable of approximately twice the IPC per core vs. AMD CPUs. 

 

This is partly why higher-end GPUs are held back (bottlenecked) by lower-end CPUs. The CPU's IPC just can't keep up with how fast the GPU can run.
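To make that concrete, single-thread throughput scales roughly with IPC multiplied by clock speed. A toy comparison (the IPC and clock figures below are illustrative, not measurements of any real CPU):

```python
# Single-thread throughput scales roughly with IPC x clock speed.
# The numbers below are illustrative, not measured values for any real CPU.
def relative_perf(ipc: float, ghz: float) -> float:
    return ipc * ghz

cpu_a = relative_perf(ipc=2.0, ghz=3.5)  # hypothetical higher-IPC chip
cpu_b = relative_perf(ipc=1.0, ghz=4.0)  # hypothetical lower-IPC chip at higher clock
print(cpu_a / cpu_b)  # -> 1.75: higher IPC wins despite the lower clock
```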

My Systems:

Main - Work + Gaming:

Spoiler

Woodland Raven: Ryzen 2700X // AMD Wraith RGB // Asus Prime X570-P // G.Skill 2x 8GB 3600MHz DDR4 // Radeon RX Vega 56 // Crucial P1 NVMe 1TB M.2 SSD // Deepcool DQ650-M // chassis build in progress // Windows 10 // Thrustmaster TMX + G27 pedals & shifter

F@H Rig:

Spoiler

FX-8350 // Deepcool Neptwin // MSI 970 Gaming // AData 2x 4GB 1600 DDR3 // 2x Gigabyte RX-570 4G's // Samsung 840 120GB SSD // Cooler Master V650 // Windows 10

 

HTPC:

Spoiler

SNES PC (HTPC): i3-4150 @3.5 // Gigabyte GA-H87N-Wifi // G.Skill 2x 4GB DDR3 1600 // Asus Dual GTX 1050Ti 4GB OC // AData SP600 128GB SSD // Pico 160XT PSU // Custom SNES Enclosure // 55" LG LED 1080p TV  // Logitech wireless touchpad-keyboard // Windows 10 // Build Log

Laptops:

Spoiler

MY DAILY: Lenovo ThinkPad T410 // 14" 1440x900 // i5-540M 2.5GHz Dual-Core HT // Intel HD iGPU + Quadro NVS 3100M 512MB dGPU // 2x4GB DDR3L 1066 // Mushkin Triactor 480GB SSD // Windows 10

 

WIFE'S: Dell Latitude E5450 // 14" 1366x768 // i5-5300U 2.3GHz Dual-Core HT // Intel HD5500 // 2x4GB RAM DDR3L 1600 // 500GB 7200 HDD // Linux Mint 19.3 Cinnamon

 

EXPERIMENTAL: Pinebook // 11.6" 1080p // Manjaro KDE (ARM)

NAS:

Spoiler

Home NAS: Pentium G4400 @3.3 // Gigabyte GA-Z170-HD3 // 2x 4GB DDR4 2400 // Intel HD Graphics // Kingston A400 120GB SSD // 3x Seagate Barracuda 2TB 7200 HDDs in RAID-Z // Cooler Master Silent Pro M 1000w PSU // Antec Performance Plus 1080AMG // FreeNAS OS

 


I'd like to believe that, but only if there's more than one benchmark, and with something more demanding that won't get hundreds of FPS.

I can imagine the PCIe bus throttling over 200 fps, but what about when it's 60 fps or less? Then there should not be any difference at all.

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project

Spoiler

Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


IPC = Instructions per clock. In general, Intel's CPUs are capable of approximately twice the IPC per core vs. AMD CPUs. 

 

This is partly why higher-end GPUs are held back (bottlenecked) by lower-end CPUs. The CPU's IPC just can't keep up with how fast the GPU can run.

Agreed, but it's not just the GPU's ability to run fast... it's also the game window and, for those that didn't think of it, Windows' random background data, driver linkages, and the game client's behind-the-scenes work you don't see while it's rendering and playing your game.

The more IPC, the better.

Maximums - Asus Z97-K /w i5 4690 Bclk @106.9Mhz * x39 = 4.17Ghz, 8GB of 2600Mhz DDR3,.. Gigabyte GTX970 G1-Gaming @ 1550Mhz

 


IPC = Instructions per clock. In general, Intel's CPUs are capable of approximately twice the IPC per core vs. AMD CPUs. 

 

This is partly why higher-end GPUs are held back (bottlenecked) by lower-end CPUs. The CPU IPC just can't keep up with how fast the GPU can run at. 

oooooooooh, would that cause choppy framerates?


Seems fishy; needs more benches by different people. This may be good for a Linus video, considering he has almost all the 980s and 970s out there.

this is one of the greatest thing that has happened to me recently, and it happened on this forum, those involved have my eternal gratitude http://linustechtips.com/main/topic/198850-update-alex-got-his-moto-g2-lets-get-a-moto-g-for-alexgoeshigh-unofficial/ :')

i use to have the second best link in the world here, but it died ;_; its a 404 now but it will always be here

 


Seems fishy; needs more benches by different people. This may be good for a Linus video, considering he has almost all the 980s and 970s out there.

@LinusTech @Slick

 

I for one would like to see this, not to mention see the difference with the same system except for an FM2 APU vs. a 4690K. Note I was running a GTX 660 on both boards. (Yes, I know an APU is pointless with an Nvidia GPU, but the only dual-graphics GPU for my APU wasn't as good as a 660.)


I'd expect the consistent FPS to be very similar, but the PCIe 2.0 scenario to hit a lower minimum FPS during heavy loading/purging of textures.

I.e., after loading/asset streaming, once the frame rate stabilizes, I don't think there'll be much difference.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.


oooooooooh, would that cause choppy framerates?

I'm 85% sure it would, and probably cause stuttering too, but maybe I'm wrong.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8GB Kingston hyperx fury HDD: WD caviar black 1TB GPU: MSI 750TI twin frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650


Welcome to the forum! ...To be honest, when we're talking about values over 140 fps it doesn't really matter...

 

Many thanks, and agreed, those are some pretty high numbers. The minimum FPS DrR1pper reported for his affected GTX 970 is a little more relevant, I guess? Although 12% isn't as much of a drop in performance?

 

GPU: MSI GTX 970 Gaming 4G (manual overclock) - 1489 core/1847 mem (+311/+95)

Time: 67564ms - Min: 108 - Max: 165 - Avg: 138.121

 

Note, I don't know whose other GTX 970 he's comparing his results with?

 

If you could hammer the card further (like enabling ShadowPlay for instance?) plus utilise one or two extra PCIe cards, could it spell more danger I wonder?


I haven't looked at the threads or anything, but is he using the exact same test platform (same CPU, mobo, etc.)? If so, how did he switch from PCIe 3.0 to PCIe 2.0?

 

It sounds to me like the test was done on a different CPU, which would explain the difference in performance.

Yeah, we're all just a bunch of idiots experiencing nothing more than the placebo effect.

I'm doubting this.

Main rig on profile

VAULT - File Server

Spoiler

Intel Core i5 11400 w/ Shadow Rock LP, 2x16GB SP GAMING 3200MHz CL16, ASUS PRIME Z590-A, 2x LSI 9211-8i, Fractal Define 7, 256GB Team MP33, 3x 6TB WD Red Pro (general storage), 3x 1TB Seagate Barracuda (dumping ground), 3x 8TB WD White-Label (Plex) (all 3 arrays in their respective Windows Parity storage spaces), Corsair RM750x, Windows 11 Education

Sleeper HP Pavilion A6137C

Spoiler

Intel Core i7 6700K @ 4.4GHz, 4x8GB G.SKILL Ares 1800MHz CL10, ASUS Z170M-E D3, 128GB Team MP33, 1TB Seagate Barracuda, 320GB Samsung Spinpoint (for video capture), MSI GTX 970 100ME, EVGA 650G1, Windows 10 Pro

Mac Mini (Late 2020)

Spoiler

Apple M1, 8GB RAM, 256GB, macOS Sonoma

Consoles: Softmodded 1.4 Xbox w/ 500GB HDD, Xbox 360 Elite 120GB Falcon, XB1X w/2TB MX500, Xbox Series X, PS1 1001, PS2 Slim 70000 w/ FreeMcBoot, PS4 Pro 7015B 1TB (retired), PS5 Digital, Nintendo Switch OLED, Nintendo Wii RVL-001 (black)


I haven't looked at the threads or anything, but is he using the exact same test platform (same CPU, mobo, etc.)? If so, how did he switch from PCIe 3.0 to PCIe 2.0?

 

It sounds to me like the test was done on a different CPU, which would explain the difference in performance.

You can switch PCIe speeds within some BIOSes. I have mine set manually to PCIe 3.0; otherwise it defaults to AUTO. I have to choose PCIe 1.0/2.0/3.0 from a drop-down list.


I don't believe this for a moment. When I went from PCIe 2.0 x16 to PCIe 3.0 x16 with my SLI Titans I saw no difference. Furthermore, switching from PCIe 3.0 x16 to PCIe 3.0 x8 showed no difference.

Main Rig: http://linustechtips.com/main/topic/58641-the-i7-950s-gots-to-go-updated-104/ | CPU: Intel i7-4930K | GPU: 2x EVGA Geforce GTX Titan SC SLI| MB: EVGA X79 Dark | RAM: 16GB HyperX Beast 2400mhz | SSD: Samsung 840 Pro 256gb | HDD: 2x Western Digital Raptors 74gb | EX-H34B Hot Swap Rack | Case: Lian Li PC-D600 | Cooling: H100i | Power Supply: Corsair HX1050 |

 

Pfsense Build (Repurposed for plex) https://linustechtips.com/main/topic/715459-pfsense-build/

 

 

 

 


I haven't looked at the threads or anything, but is he using the exact same test platform (same CPU, mobo, etc.)? If so, how did he switch from PCIe 3.0 to PCIe 2.0?

 

It sounds to me like the test was done on a different CPU, which would explain the difference in performance.

 

Same system/components for MrPix:

 

I'm on an i7-3820... I have to force PCIe 3.0

 

100% sure on 3.0 @ x16 (after applying the force gen3 executable as admin, rebooting and checking in AIDA64 Engineer edition)!

 

I believe he's referring to this patch?

http://nvidia.custhelp.com/app/answers/detail/a_id/3135/~/geforce-gen3-support-on-x79-platform


Agreed, but it's not just the GPU's ability to run fast... it's also the game window and, for those that didn't think of it, Windows' random background data, driver linkages, and the game client's behind-the-scenes work you don't see while it's rendering and playing your game.

The more IPC, the better.

True, but when I look at my CPU usage at idle (CPU at ~800-1000MHz) it's only maybe 3-5%, which isn't much. Mind you, I'm running an i5-4570, so perhaps idle CPU usage on a CPU with half the IPC would be more like 6-10%? That could make a bit of a difference while gaming.

 

oooooooooh, would that cause choppy framerates?

It could, but only if your CPU is at or near 100% usage. I see you're running a GTX 660, which should not have been bottlenecked by the APU you had before. The i5's IPC is just that much stronger, and that is probably the difference you're experiencing. A GTX 660 is nowhere near saturating a PCIe 2.0 link.

 

 

As for the topic; I take it with a very tiny grain of salt until more information and testing can show otherwise. 

 

Linus and Slick... This would make for an interesting myth-buster type video. :)


This isn't a new topic. Almost every tech site has benchmarks for this very issue.

http://www.pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/

