
First DirectX 12 game benchmarked *Update 2: More benchmarks

Aaaaaaa I'm selling my 970 for a 390x.

 

Enjoy playing DX12 games... oh wait.

i7 9700K @ 5 GHz, ASUS DUAL RTX 3070 (OC), Gigabyte Z390 Gaming SLI, 2x8 HyperX Predator 3200 MHz


It's still too early to get hyped over DX12, but things do look promising for AMD. Their lack of highly optimized drivers is becoming a non-issue... for now. It remains unclear if Nvidia will pull back ahead in the future as they get more time with the new API, or if DX12 works so closely with the hardware that it might really be a somewhat level playing field again driver-wise.


Nice to see that a 290X can stay relevant with DX12; I hope this stays true for other DX12 games!

 

I own a 290x and it is serving me well so far.

Looking around,

bla bla bla this, meh that!


Such a good explanation of Nvidia and AMD DX12 performance. It pretty much confirms why AMD has been pushing low-level APIs for years, and why Nvidia is complaining about the results.

 

http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/400#post_24321843

 

Well, I figured I'd create an account in order to explain what you're all seeing in the Ashes of the Singularity DX12 benchmarks. I won't divulge too much of my background information, but suffice it to say that I'm an old veteran who used to go by the handle ElMoIsEviL.
 
First off, nVidia is posting their true DirectX 12 performance figures in these tests. Ashes of the Singularity is all about parallelism, and although Maxwell 2 does better in that area than previous nVIDIA architectures, it is still inferior in this department compared to AMD's GCN 1.1/1.2 architectures. Here's why...
 
Maxwell's Asynchronous Thread Warp can queue up 31 compute tasks and 1 graphics task. Now compare this with AMD GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 compute tasks, for a total of 64, coupled with 1 graphics task handled by the Graphics Command Processor. See below:
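A rough sketch of that queue arithmetic (an illustration based only on the figures quoted above, not vendor documentation or API code):

```python
# Sketch of the queue counts quoted above (illustration only).

maxwell2_compute_queues = 31            # Maxwell 2: 31 compute queues...
maxwell2_graphics_queues = 1            # ...plus 1 graphics queue

gcn_aces = 8                            # GCN 1.1/1.2: 8 Asynchronous Compute Engines
gcn_queues_per_ace = 8                  # each ACE can queue 8 compute tasks
gcn_compute_queues = gcn_aces * gcn_queues_per_ace   # 8 * 8 = 64
gcn_graphics_queues = 1                 # 1 graphics queue via the Graphics Command Processor

print(f"Maxwell 2  : {maxwell2_compute_queues} compute + {maxwell2_graphics_queues} graphics")
print(f"GCN 1.1/1.2: {gcn_compute_queues} compute + {gcn_graphics_queues} graphics")
```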
 
 
Each ACE can also apply certain post-processing effects without incurring much of a performance penalty. This feature is heavily used for lighting in Ashes of the Singularity. Think of all the simultaneous light sources firing off as each unit in the game fires a shot, or the various explosions which ensue, as examples.
 
 
This means that AMD's GCN 1.1/1.2 is better adapted to handling the increase in draw calls now being made by the multi-core CPU under DirectX 12.
 
Therefore, in game titles which rely heavily on parallelism, likely most DirectX 12 titles, AMD GCN 1.1/1.2 should do very well, provided they do not hit a geometry or rasterizer operator bottleneck before nVIDIA hits their draw call/parallelism bottleneck. The picture below highlights the draw call/parallelism superiority of GCN 1.1/1.2 over Maxwell 2:
 
 
A more efficient queueing of workloads, through better thread parallelism, also enables the R9 290X to come closer to its theoretical compute figures, which just happen to be ever so slightly shy of those of the GTX 980 Ti (5.8 TFLOPS vs 6.1 TFLOPS respectively), as seen below:
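For reference, those theoretical figures follow from the standard peak-FP32 formula (shader count x 2 FLOPs per clock x clock speed). A quick sketch using commonly cited reference specs, assumed here; the exact result shifts slightly depending on the clock you assume:

```python
# Peak FP32 throughput sketch. The formula is standard; the shader counts and
# clocks are commonly cited reference-card figures, assumed here for illustration.

def peak_tflops(shaders: int, clock_ghz: float) -> float:
    # Each shader performs one fused multiply-add (2 FLOPs) per clock.
    return shaders * 2 * clock_ghz / 1000.0

print(f"R9 290X   : {peak_tflops(2816, 1.000):.2f} TFLOPS")  # ~5.6 at the 1000 MHz reference clock
print(f"GTX 980 Ti: {peak_tflops(2816, 1.075):.2f} TFLOPS")  # ~6.1 at the rated boost clock
```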
 
 
What you will notice is that Ashes of the Singularity is also quite hard on the rasterizer operators, highlighting a rather peculiar behavior: an R9 290X, with its 64 ROPs, ends up performing nearly the same as a Fury X, also with 64 ROPs. A great way of picturing this in action is the graph below (courtesy of Beyond3D):
 
 
As for the folks claiming a conspiracy theory: not in the least. The reason AMD's DX11 performance is so poor under Ashes of the Singularity is that AMD did literally zero optimizations for that path. AMD is clearly looking to sell asynchronous shading as a feature to developers because their architecture is well suited for the task. It doesn't hurt that it also costs less in terms of research and development of drivers. Asynchronous shading allows GCN to hit near full efficiency without requiring any driver work whatsoever.
 
nVIDIA, on the other hand, does much better at serial scheduling of workloads (consider that anything prior to Maxwell 2 is limited to serial scheduling rather than parallel scheduling). DirectX 11 is suited to serial scheduling, so nVIDIA naturally has an advantage under DirectX 11. In this graph, provided by AnandTech, you have the correct figures for nVIDIA's architectures (from Kepler to Maxwell 2), though the figures for GCN are incorrect (they did not multiply the number of Asynchronous Compute Engines by 8):
 
 
People are wondering why Nvidia is doing a bit better in DX11 than DX12. That's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents as well as their own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore, what we're seeing with the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12. Nvidia works around this in DX11 with its own optimizations, by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which re-compiles and replaces shaders that are not fine-tuned for their architecture, on a per-game basis. nVIDIA's driver is also multi-threaded, making use of idling CPU cores in order to recompile/replace shaders. The work nVIDIA does in software under DX11 is the work AMD does in hardware under DX12, with their Asynchronous Compute Engines.
 
But what about the poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is suited towards parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck on AMD hardware under DX11 at low resolutions (say 1080p, and even 1600p for the Fury X), as DX11 is limited to 1-2 cores for the graphics pipeline (which also needs to take care of AI, physics etc). Replacing or re-compiling shaders is not a solution for GCN 1.1/1.2, because AMD's Asynchronous Compute Engines are built to break down complex workloads into smaller, easier-to-process workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel... in come Mantle, Vulkan and DirectX 12.
 
People wondering why the Fury X did so poorly at 1080p under DirectX 11 titles? That's your answer.
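A toy back-of-the-envelope model of that submission bottleneck (illustrative numbers only, not real graphics API code): if only one or two cores can prepare draw calls, the per-frame draw-call budget caps out far earlier than when every core can record work, which is what the parallel-submission APIs above allow.

```python
# Toy model only: the per-call CPU cost is a made-up illustrative figure.
# It just shows why single-threaded submission (the DX11 model described above)
# caps draw-call throughput sooner than recording work on many cores
# (the DX12/Vulkan/Mantle model).

def max_draw_calls(frame_budget_ms: float, cpu_cost_per_call_us: float, submit_threads: int) -> int:
    # Total CPU time available for command submission across all threads in one frame.
    total_us = frame_budget_ms * 1000 * submit_threads
    return int(total_us / cpu_cost_per_call_us)

# 16.6 ms frame budget (60 FPS), hypothetical 10 microseconds of CPU work per draw call.
print("1 submission thread :", max_draw_calls(16.6, 10, 1))   # ~1,660 draw calls/frame
print("6 submission threads:", max_draw_calls(16.6, 10, 6))   # ~9,960 draw calls/frame
```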
 
 
A video which talks about Ashes of the Singularity in depth:
 
 
PS. Don't count on better DirectX 12 drivers from nVIDIA. DirectX 12 is closer to the metal, and it's all on the developer to make efficient use of both nVIDIA's and AMD's architectures.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


Even IF Maxwell has hit its "limit" in DX12, it will not matter much as by the time DX12 games become mainstream, Pascal and Volta (and the one after that) will be out.

This benchmark is interesting and all, but I'm seeing so many claims from people with not enough evidence to compare against. Not saying they're wrong or right, just saying that one game does not make a trend. We have an Alpha-stage game with a brand new API that itself was just released.

I can't put much credence in an Alpha build and conclude that everything has changed. If more games show similar results, that will be a good indicator that things have indeed changed.

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


Even IF Maxwell has hit its "limit" in DX12, it will not matter much as by the time DX12 games become mainstream, Pascal and Volta (and the one after that) will be out.

This benchmark is interesting and all, but I'm seeing so many claims from people with not enough evidence to compare against. Not saying they're wrong or right, just saying that one game does not make a trend. We have an Alpha-stage game with a brand new API that itself was just released.

I can't put much credence in an Alpha build and conclude that everything has changed. If more games show similar results, that will be a good indicator that things have indeed changed.

That won't be good news though for people who stay on Maxwell due to not having the budget to buy new GPUs :D Read the explanation quoted earlier in the thread.

 



That won't be good news though for people who stay on Maxwell due to not having the budget to buy new GPUs :D Read the explanation quoted earlier in the thread.

That's very true. Frequent upgrades are par for the course when buying Nvidia. That's not meant as an inflammatory statement, but it's based on my own buying experience. Going from Maxwell to Pascal will probably be similar to Fermi and Kepler, where Fermi emulated (did not natively support) some of the DX11 features, just like Maxwell only emulates some of the DX12 features. Granted, anyone with the money to buy a 980 Ti or Titan X will upgrade to Pascal next fall so they can have the newest card, but like you say, not everyone can afford that kind of upgrade if they already bought Maxwell.

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


INB4 GUYS! THE FURY X HAS ONLY BEEN RELEASED FOR TWO WEEKS LET AMD FINALIZE THE DRIVERS BECAUSE IT ISN'T A FAIR FIGHT ATM.

 

Seriously though, the biggest takeaways I see from this are that Nvidia ISN'T ready for DX12 (or at least not in this game), and that this game is a low-thread-count-hogging monster of a CPU stresser.

The 980 Ti had only been out for not much longer, so you can't use that argument.

Hello This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614


This is what you get when you give out your work to push the industry - Microsoft and Khronos using Mantle is working out for AMD: an API that favors their architecture :D


The Fury X wasn't a slap in the face for Nvidia when it was released, but after looking at this, the fight of the Fury X against the Titan X might not be over yet!

Looking good for AMD, looking bad for Nvidia.

 

FreeSync is going hard against G-Sync and might win the fight, and now this. Let's get on the rollercoaster, because it looks like this is going to be one hell of a ride!

If you want my attention, quote meh! D: or just stick an @samcool55 in your post :3

Spying on everyone to fight against terrorism is like shooting a mosquito with a cannon


The 980 Ti had only been out for not much longer, so you can't use that argument.

You completely missed what I was saying. But thank you for trying.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Guys... don't turn this topic into a 14-page debate like the "FX 8370 vs i7 5960X 4K bench" topic.

 

The bottom line of this is that AMD is coming back...and it's ready to hit hard. :)

MARS_PROJECT V2 --- RYZEN RIG


 CPU: R5 1600 @3.7GHz 1.27V | Cooler: Corsair H80i Stock Fans@900RPM | Motherboard: Gigabyte AB350 Gaming 3 | RAM: 8GB DDR4 2933MHz(Vengeance LPX) | GPU: MSI Radeon R9 380 Gaming 4G | Sound Card: Creative SB Z | HDD: 500GB WD Green + 1TB WD Blue | SSD: Samsung 860EVO 250GB  + AMD R3 120GB | PSU: Super Flower Leadex Gold 750W 80+Gold(fully modular) | Case: NZXT  H440 2015   | Display: Dell P2314H | Keyboard: Redragon Yama | Mouse: Logitech G Pro | Headphones: Sennheiser HD-569

 


I wonder why this news post wasn't featured in the WAN show. It made the news on pretty much all the other similar shows this week. Oh well, glad the story did well on the forum :)

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


I wonder why this news post wasn't featured in the WAN show. It made the news on pretty much all the other similar shows this week. Oh well, glad the story did well on the forum :)

Because his answer might be just like any other Nvidia fanboy answer... it's just one game... it's just alpha... this proves nothing... Nvidia will fix it... etc.

AMD Rig - (Upgraded): FX 8320 @ 4.8 Ghz, Corsair H100i GTX, ROG Crosshair V Formula, Ghz, 16 GB 1866 Mhz Ram, Msi R9 280x Gaming 3G @ 1150 Mhz, Samsung 850 Evo 250 GB, Win 10 Home

(My first Intel + Nvidia experience  - recently bought ) : MSI GT72S Dominator Pro G ( i7 6820HK, 16 GB RAM, 980M SLI, GSync, 1080p , 2x128 GB SSD + 1TB HDD... FeelsGoodMan


Because his answer might be just like any other Nvidia fanboy answer... it's just one game... it's just alpha... this proves nothing... Nvidia will fix it... etc.

*people who think rationally answer.

Let's remember the first FreeSync monitors. They were bad, so obviously FreeSync is bad and G-Sync is the way to go. But look what's happening right now.

Same thing may or may not occur here.

Laptop: Acer V3-772G  CPU: i5 4200M GPU: GT 750M SSD: Crucial MX100 256GB
Desktop: CPU: R7 1700x GPU: RTX 2080 SSD: Samsung 860 Evo 1TB 


Because his answer might be just like any other Nvidia fanboy answer... it's just one game... it's just alpha... this proves nothing... Nvidia will fix it... etc.

 

Yeah. I mean they can choose whatever to talk about, of course, but with all the Nvidia hardware they've gotten for free over the last year or so (multiple Titan Blacks and Titan Xs), I must admit it concerns me if they are indeed starting to become biased. I hope not, and I'm hopefully just overthinking it.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Yeah. I mean they can choose whatever to talk about, of course, but with all the Nvidia hardware they've gotten for free over the last year or so (multiple Titan Blacks and Titan Xs), I must admit it concerns me if they are indeed starting to become biased. I hope not, and I'm hopefully just overthinking it.

 

The overall news that was showcased on the WAN Show was horrid. When I saw the Amazon story being mentioned, I knew from then on it was going to be a bad show and not worth watching. From what I read of the Twitch comments, it was not far from the truth. To cut LTT some slack, it was meant as a viewer appreciation show and not really about tech.

Test ideas by experiment and observation; build on those ideas that pass the test, reject the ones that fail; follow the evidence wherever it leads and question everything.


Yeah. I mean they can choose whatever to talk about, of course, but with all the Nvidia hardware they've gotten for free over the last year or so (multiple Titan Blacks and Titan Xs), I must admit it concerns me if they are indeed starting to become biased. I hope not, and I'm hopefully just overthinking it.

 

Funny that you mention it - I've unsubscribed from Linus' channel over something that I found to be very biased... and I've stood up to them in other topics, and not so long ago.

 

Today I decided to see their take on these benchmarks, and to my surprise there was no mention at all... I assumed they would cover it, especially because I thought they valued Ryan's work @ PCPer... but apparently this was a non-subject to them... just like what NVIDIA aimed to do with their press release... "look away from this"... So yeah, I'll remain unsubscribed, and I'll probably part ways with this community soon as well (to the delight of many :D)...

 

It's sad because, like many of you I'm sure, I watched Linus grow on YT... but the path he's taking is very... odd? I cannot tell what is paid media and what is not anymore... and I'm a marketeer.


@bogus I did find it quite odd that the Acer XR341CK ultrawide FreeSync monitor didn't get Editor's Choice for the sole reason that it wasn't G-Sync. That's outright vendor bias. It should never be a criticism that something doesn't have a proprietary, vendor-specific technology.

 

But as mentioned, that WAN Show was probably mostly for the people who showed up there; a meet-and-greet takes time and energy to do, so I understand why they didn't cover everything interesting.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


@bogus I did find it quite odd that the Acer XR341CK ultrawide FreeSync monitor didn't get Editor's Choice for the sole reason that it wasn't G-Sync. That's outright vendor bias. It should never be a criticism that something doesn't have a proprietary, vendor-specific technology.

 

But as mentioned, that WAN Show was probably mostly for the people who showed up there; a meet-and-greet takes time and energy to do, so I understand why they didn't cover everything interesting.

 

I wonder if he would refuse to give an Editor's Choice to a G-Sync monitor because it didn't have FreeSync. Just watched that video; kinda snotty reason not to give it Editor's Choice. Fanboy Linus confirmed.

Case: Phanteks Enthoo Pro | PSU: Enermax Revolution87+ 850W | Motherboard: MSI Z97 MPOWER MAX AC | GPU 1: MSI R9 290X Lightning | CPU: Intel Core i7 4790k | SSD: Samsung SM951 128GB M.2 | HDDs: 2x 3TB WD Black (RAID1) | CPU Cooler: Silverstone Heligon HE01 | RAM: 4 x 4GB Team Group 1600Mhz


I wonder if he would refuse to give an Editor's Choice to a G-Sync monitor because it didn't have FreeSync. Just watched that video; kinda snotty reason not to give it Editor's Choice. Fanboy Linus confirmed.

 

It sucks because it is 75 Hz, or 85 Hz overclocked. Once it has at least 100 or 120 Hz it will replace my monitor. Or just make it 144 Hz; damn Acer, is it so hard to use two separate DisplayPort cables for the signal, just like early 4K monitors?

Open your eyes and break your chains. Console peasantry is just a state of mind.

 

MSI 980Ti + Acer XB270HU 


*people who think rationally answer.

Let's remember the first FreeSync monitors. They were bad, so obviously FreeSync is bad and G-Sync is the way to go. But look what's happening right now.

Same thing may or may not occur here.

 

FreeSync monitors are still bad.

Open your eyes and break your chains. Console peasantry is just a state of mind.

 

MSI 980Ti + Acer XB270HU 


FreeSync monitors are still bad.

I meant Intel planning to support it.

That doesn't change my point either way. 

Laptop: Acer V3-772G  CPU: i5 4200M GPU: GT 750M SSD: Crucial MX100 256GB
Desktop: CPU: R7 1700x GPU: RTX 2080 SSD: Samsung 860 Evo 1TB 


FreeSync monitors are still bad.

 

Please explain yourself. The best G-Sync and FreeSync monitors use the same panel. The FreeSync version is 220 cheaper atm. Panel: AU Optronics M270DAN02.3 AHVA

 

Source 1 

Source 2

Test ideas by experiment and observation; build on those ideas that pass the test, reject the ones that fail; follow the evidence wherever it leads and question everything.

