[QUADRO] Linux vs Windows | OpenGL vs DirectX | Unigine Valley benchmarked!

Troyanac

Oldschool resolutions incoming, beware! :P

 

So, I benchmarked my Lenovo T61 in Unigine Valley, running both Windows 7 and Fedora 22, each with the latest NVIDIA drivers.

Some detailed specs:

Intel Core2Duo T7500 2.2GHz

3GB SODIMM DDR2

NVIDIA Quadro NVS 140M {

16 CUDA cores
DirectX 10
400MHz core clock
800MHz shader clock
1200MHz memory clock, 128MB GDDR3

}

 

Windows 7: 341.33 Performance driver

Linux 4.2.3: standard 344.31 driver

All settings at the lowest possible, no AA, VGA (640x480) resolution.

 

All testing was done on the proprietary NVIDIA drivers, because Nouveau is hopeless.
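If anyone wants to replicate this, Valley can also be launched from the command line. This is only a sketch based on the usual Unigine launcher flags; the binary name, config path, and exact options may differ on your install:

```shell
# Hypothetical invocation -- adjust paths and config names to your install.
# OpenGL run, windowed 640x480, no AA, sound off:
./valley_x64 -video_app opengl \
    -video_mode -1 -video_width 640 -video_height 480 \
    -video_fullscreen 0 -video_multisample 0 \
    -sound_app null \
    -engine_config ../data/valley_1.0.cfg

# On Windows, swap "-video_app opengl" for "-video_app direct3d11"
# (or direct3d9) to compare APIs with everything else held equal.
```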

 

Linux 4.2.3 OpenGL:

 

[screenshot: ov8tjUs.png]

Windows 7 OpenGL:

 

[screenshot: 81d03w3.png]

 

Windows 7 DirectX 11:

 

[screenshot: 2Cfqjol.png]

 

Windows 7 DirectX 9:

 

[screenshot: XRqyx1r.png]

 

 

All testing was done in one pass, as my eyes were bleeding from the framerate.

Conclusion:

As expected, there's the usual performance gap of around 20%, both between Linux and Windows and between OpenGL and DirectX.
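For reference, that gap figure is just the relative FPS difference. The numbers below are hypothetical, purely to show the calculation (the real scores are in the screenshots above):

```python
def gap_percent(faster_fps, slower_fps):
    """Relative gap of the slower result vs the faster one, in percent."""
    return (faster_fps - slower_fps) / faster_fps * 100

# Hypothetical example: 25 fps on Windows DX9 vs 20 fps on Linux OpenGL
print(gap_percent(25, 20))  # 20.0
```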

 

Also, games I played {

DiRT 3 - Windows 7 - DirectX 9 - ~30fps - 1024x768 - Lowest / No AA
CSGO - Windows 7 - DirectX 9 - 35-40fps - 640x480 - Lowest / No AA
Dishonored - Windows 7 - DirectX 10 - 20-25fps - 640x480 - Lowest / No AA
CS 1.6 - Windows 7 - OpenGL - ~80fps+ - 800x600 - Lowest / No AA
CS 1.6 - Windows 7 - D3D - 100fps+ - 800x600 - Lowest / No AA
CS 1.6 - Linux 4.2.3, Wine 1.7.55 - OpenGL - ~80fps - 800x600 - Lowest / No AA
AC4: Black Flag - Windows 7 - DirectX 10 - ~10fps - 640x480 - Lowest / No AA
Left4Dead 2 - Windows 7 - DirectX 9 - ~40fps - 1680x1050 - Lowest / No AA < Surprises me a lot, looks like Source loves Quadros
Insurgency [Beta] - Linux 4.2.3 - OpenGL - Slideshow - 640x480 - Lowest / No AA

}


I don't know about Windows vs. Linux for frame rates, but DirectX vs. OpenGL frame-rate differences are really entirely dependent on the rendering engine's implementation. You can't conclusively determine which API is better purely from FPS numbers: there are many little things in how each API is used that can kill performance, and it can even come down to a shoddy driver implementation by the graphics card manufacturer (for OpenGL).

 

OpenGL is a giant piece of shit in my opinion, from my experience working with it, but FPS really isn't a fair determining factor, since so many variables depend purely on the implementation.
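To illustrate the point about implementation mattering more than the API: here's a toy cost model (numbers and overhead entirely made up, not real API measurements) showing how the same workload can perform very differently depending on how the engine submits it:

```python
# Toy cost model (pure assumption): every draw call pays a fixed
# submission overhead plus a per-vertex cost, regardless of API.
CALL_OVERHEAD = 10  # arbitrary units

def total_cost(num_calls, verts_per_call):
    return num_calls * (CALL_OVERHEAD + verts_per_call)

# Same 30,000 vertices, two submission strategies:
naive = total_cost(1000, 30)      # 1,000 tiny draw calls
batched = total_cost(1, 30_000)   # one batched call

print(naive, batched)  # 40000 30010
```

Under this model the naive engine is ~33% slower on identical hardware and an identical API, which is exactly why raw FPS says more about the engine than about OpenGL or DirectX.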

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


ALL DEM CUDA CORES!!

4690K // 212 EVO // Z97-PRO // Vengeance 16GB // GTX 770 GTX 970 // MX100 128GB // Toshiba 1TB // Air 540 // HX650

Logitech G502 RGB // Corsair K65 RGB (MX Red)


snippity

I know, OpenGL is pretty bad, but I'm just showcasing end-user framerates :P

