
AMD FreeSync VS Nvidia G-Sync (Tom's Hardware)

Rekx

I don't see Nvidia going bankrupt any time soon, AMD is another story though.

I think you had better read it again; I said nothing about Nvidia going bankrupt.


GameWorks does have some advantages on NVIDIA hardware, yeah... but for the most part? PhysX hasn't quite taken off as much as Havok, it seems... that, or some companies just use their own in-house physics, like Bugbear did with their racing games, or CryEngine (unless that changed since the original CryEngine, lol, I dunno).

The NVIDIA streaming via ShadowPlay/GameStream? It's been touted as very optimized. Lots of people say the solution AMD used to have, before they dumped it in favor of that partnership with Raptr, was very similar... I forget the name of it though... so sadly AMD's current solution is still inferior.

I dunno if you've ever heard of Cmoar VR, but they even recommend NVIDIA's solution over other software streaming solutions.

http://www.cmoar.com/streaming-nvidia-limelight.html

https://www.kickstarter.com/projects/706938033/cmoar-virtual-reality-headset-with-integrated-elec/description

That's the one I backed, tbh. Cheap but decent VR solution. I didn't feel like going all-out on an Oculus headset or anything, is why, lol. :P

 

Pretty much my thoughts too... I'd love to jump up to a monitor like that. :P

 

Yes and no... Right now? FreeSync's weakest link is the scaler that manufacturers use. It makes you wonder... if they went with a better scaler, would it jack the price up that much? Or would it level the playing field? Before, I was thinking of jumping ship on my upgrade to AMD for a FreeSync QHD IPS 144Hz with AMD's Fury X, which on Amazon Canada had a typo of $700 CAD, lol... but I was hesitant due to the limited VRR range...

But with the tearing at higher framerates? Hmm... can anyone explain to me how AMD's FreeSync would work with VSync on, please? Would it essentially disable FreeSync and just act like VSync? Or would VSync only engage when the framerate went above or below the VRR range of the FreeSync monitor?

From what I've heard and some stuff I've seen? Enabling G-Sync actually has a minor performance loss... I think it was 1% to 4% so it's really minor anyways.

 

The problem is not just APEX (PhysX) based GameWorks effects, but also tessellation based effects, as AMD does not have access to optimize the code (neither do most of the devs using it).

 

When it comes to ShadowPlay (so game streaming to a different device, and not Twitch streaming, right?), Steam can stream at unlimited bandwidth, at full resolution (I assume 4K too), etc., so that should either match or beat NVidia's streaming. For non-Steam games, you can still (usually) stream through the client by adding a shortcut.

 

The "problem" with Adaptive Sync, is that the quality and performance are up to the monitor vendor and/or the monitor controller/scaler (not scalar) vendor, not AMD: That is how an open standard works, and is perfectly fine, as it's those two parties that makes the money off of it, not AMD. It apparently takes some time for these scaler vendors to make this work, which makes sense, as it is fairly difficult to get to work properly. But the monitors gets better and better and should reach Gsync performance on all accounts soon.

The minimum windows could be solved by AMD having a 1 frame buffer, thus multiplying that frame to the monitor, just like the Gsync monitor does internally (and why it has RAM on the monitor).
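A rough sketch of what I mean (illustrative Python, made-up VRR window numbers, not AMD's actual algorithm): keep one frame around and repeat it enough times that the effective refresh rate lands back inside the window.

```python
def panel_refresh_for(fps, vrr_min=40, vrr_max=144):
    """Pick a refresh rate for a given frame rate (illustrative only).

    Below the VRR window, show each frame 2x, 3x, ... so the panel itself
    never has to drop under its minimum refresh rate.
    """
    if fps >= vrr_min:
        return min(fps, vrr_max), 1          # inside the window: one refresh per frame
    repeats = 2
    while fps * repeats < vrr_min:           # repeat frames until we're back in range
        repeats += 1
    return fps * repeats, repeats

for fps in (144, 75, 48, 30, 17):
    hz, n = panel_refresh_for(fps)
    print(f"{fps:3d} fps -> panel runs at {hz} Hz, each frame shown {n}x")
```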

 

Gsync has a tiny latency due to having a RAM buffer in the monitor: http://www.blurbusters.com/gsync/preview2/

 

AMD has something called frame rate control, where you can set a max FPS between 55 and 95 (I hope they change that to 144), which causes DX10+ games to be limited to the set value. So you would have normal variable FPS up to that limit. If you have a FreeSync monitor, you would have your VRR window, and then FRC takes over where the top of the VRR window ends. So no VSync stutter, and no tearing above the VRR window.
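To answer the VSync question above: roughly, this is how I picture the different regions behaving (a toy model, not AMD's driver logic; the window and cap numbers are made up).

```python
def freesync_behaviour(fps, vrr_min=48, vrr_max=144, frame_cap=None, vsync=False):
    """What (roughly) happens at a given frame rate on an Adaptive Sync panel."""
    if frame_cap is not None:
        fps = min(fps, frame_cap)                     # FRC-style cap applied first
    if vrr_min <= fps <= vrr_max:
        return "panel refresh follows every frame: no tearing, no VSync wait"
    if fps > vrr_max:
        # Above the window VRR can't help any more, so it's the classic choice:
        return (f"VSync holds frames at {vrr_max} Hz (adds lag)" if vsync
                else f"tearing above {vrr_max} Hz")
    return f"below {vrr_min} Hz: panel falls back to fixed refresh (stutter/tearing)"

print(freesync_behaviour(200, frame_cap=140))  # a cap keeps you inside the window
print(freesync_behaviour(200, vsync=True))     # or VSync takes over at the top
print(freesync_behaviour(30))                  # under the window: the weak spot
```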

 

If you get a freesync monitor and use it with an NVidia card, you would just have a normal monitor, with proper colour settings, multiple inputs for consoles, laptops etc, and usually not at a price premium. The same cannot be said about Gsync monitors.

 

The real question in my mind is: is mobile G-Sync actually distinct from FreeSync? How would you know?

 

Both use variable VBlank to control the scanning out of frames. But since laptops don't have any scalers at all, mobile GSync is all in the graphics card, and controlled either by a small chip (unlikely) or the driver (very likely).

 

OK, then for the record, say vendor lock-in is bad. Don't say G-Sync is bad. That sort of double standard is what pissed me off. I don't give a shit which way you want to go. I do, however, care about hypocrisy.

 

I would argue that even if Intel supports the standard (which I don't believe they ever will, because they do a very, very good job of not picking sides; just check out their API implementations), it won't be available for quite some time (Cannonlake or later), making it still a locked platform for the foreseeable future.

 

Adaptive Sync has been approved by VESA, which includes NVidia and Intel, for the record. Intel would not be "picking sides" by supporting an open standard. They would be picking a side by buying access to a proprietary solution, which I doubt they would get anyway.


@Notional I agree it's driver-based (in fact, I'm fairly certain we know for a fact it's driver-based), so my question was: 'how likely is it that mobile G-Sync is actually a different implementation than FreeSync, and even if it was, how would you know?'

As to Intel and FreeSync, I guess we will see if they try to implement it. I am dubious.




Adaptive Sync has been approved by VESA, which includes NVidia and Intel, for the record. Intel would not be "picking sides" by supporting an open standard. They would be picking a side by buying access to a proprietary solution, which I doubt they would get anyway.

Yeah, NVIDIA not using a VESA standard does not make said standard "vendor lock-in".


Why buy one that can use both? Simple: because you might not buy the same brand of GPU for your next upgrade.

 

People don't upgrade their monitors often. Most gamers buy a monitor once, and that monitor follows them through 3-4 upgrades. They tend to keep it until it dies. I've been using the same monitor for like 7+ years (1680 x 1050). I will eventually upgrade that monitor to a better one, but my point is that I've kept it for that long. In that time span I've owned both AMD and NVIDIA GPU's.

 

If I had bought a G-Sync monitor 7 years ago (hypothetically speaking, of course), then I'd be stuck with NVIDIA that entire time. I got my HD 7950 at a way better deal than an NVIDIA 680 or even a 670. I'd have had to spend more money to get a GPU that arguably either wasn't better at all, or was barely better.

 

You're planning on buying a G-Sync monitor. Which is great right now, for you. What happens if next year AMD comes out with a killer GPU that blows away NVIDIA? You're either gonna completely ignore it and buy NVIDIA anyway, or you'll be pissed that G-Sync (which is awesome) doesn't work on your new killer AMD GPU.

But is there a problem with sticking with one GPU vendor? I also stick with EVGA because of their warranty, customer service and their GPU shroud designs; they look more industrial, which is what my builds go for. Besides, would it not be possible for AMD to use G-Sync? Can G-Sync not be licensed out, or is it actually known that that isn't possible?


While AMD has really dropped the ball in the CPU department, there are still plenty of highly competitive AMD CPUs in the low-end budget range of gaming PCs. Getting an Athlon CPU, for example, instead of a Pentium G series, is still not a bad idea. You'd have to weigh specific price vs. performance at any given time, since prices can vary over time.
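Just to illustrate the kind of weighing I mean (prices and FPS numbers below are made up; plug in real listings and benchmarks for the games you actually play), even a crude FPS-per-dollar comparison makes the trade-off obvious:

```python
# Hypothetical price/performance comparison; swap in real prices and benchmark numbers.
candidates = {
    "Budget CPU A": {"price": 70.0, "avg_fps": 52},
    "Budget CPU B": {"price": 90.0, "avg_fps": 58},
}

for name, c in candidates.items():
    value = c["avg_fps"] / c["price"]          # frames per second per dollar spent
    print(f"{name}: {c['avg_fps']} fps for ${c['price']:.0f} "
          f"-> {value:.2f} fps per dollar")
```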

 

For the GPU market, AMD is still plenty competitive. In almost every bracket, they're still competitive - often with hardware that is 3-4 years old. Fiji was a nice step in the right direction, but there's obviously still progress to be made.

 

We'll also see next year whether ZEN brings AMD back into the high end CPU game or not.

AMD didn't have a competitor to the 5820K when I bought it. Actually, they don't have anything to compete with Intel CPUs at this moment in time other than APUs, and Intel is coming up from behind them there :)


But is there a problem with sticking with one GPU vendor? I also stick with EVGA because of their warranty, customer service and their GPU shroud designs; they look more industrial, which is what my builds go for. Besides, would it not be possible for AMD to use G-Sync? Can G-Sync not be licensed out, or is it actually known that that isn't possible?

In theory, NVIDIA could license out its proprietary technology to anyone it wanted, just like PhysX. Will they? Fat chance in hell. When has NVIDIA ever licensed their proprietary tech out to AMD?

 

There's a much greater likelihood that NVIDIA will eventually adopt Adaptive-Sync compatibility under its own G-Sync branding.

 

Sticking with one GPU vendor? Certainly, you can choose to do that. It makes you biased towards that vendor, though. You should objectively compare each new GPU as it comes out, even variants among the AIB partners. You might prefer what a current AIB partner has done in the past, and what warranty it traditionally offers, but you cannot know what other AIBs will come out with in the next generation. Sometimes they will change up their design paradigm, or revamp their warranty. If you just dismiss all other AIBs aside from your "favourite" without even examining them, you might miss such a redesign or warranty policy change.

 

AMD didn't have a competitor to the 5820K when I bought it. Actually, they don't have anything to compete with Intel CPUs at this moment in time other than APUs, and Intel is coming up from behind them there :)

AMD still competes with several Intel CPUs, actually: the G series and i3 series specifically, and of course the APU segment. We all know that they no longer compete in the i5-and-above segment, but to say that they don't compete at all is disingenuous.


Yeah, NVIDIA not using a VESA standard does not make said standard "vendor lock-in".

 

Technically it's not locked because, as you say, it's a standard that can be taken up any time a vendor wants. However, for all intents and purposes, the end result is currently the same. And that is something the consumer must consider when buying into this tech.


It's entirely possible that the G-Sync module has all the necessary hardware to use Freesync. Or not.


The G-Sync module isn't required for Adaptive-Sync.

 

Unless you mean an AMD GPU controlling a G-Sync module?

 

In either case, I would say that yes, AMD GPU's with DP 1.2a probably could be modified at the BIOS/firmware/software level to be compatible with G-Sync. I don't think that's a stretch at all.

 

Whether it's possible or not is completely irrelevant though. The chances of NVIDIA giving AMD access to G-Sync licenses are basically non-existent. Why would they? They would have no motivation whatsoever to do so, since it would prevent vendor lock-in - which NVIDIA wants.

 

 

Technically it's not locked because, as you say, it's a standard that can be taken up any time a vendor wants. However, for all intents and purposes, the end result is currently the same. And that is something the consumer must consider when buying into this tech.

Yes, it's certainly something a buyer should consider if they plan to buy an Adaptive-Sync monitor, but I'd feel better about my chances of buying an Adaptive-Sync monitor that NVIDIA eventually supports, over buying a G-Sync monitor and praying that AMD gets access and license to use the tech.


Heyyo,

The problem is not just APEX (PhysX) based GameWorks effects, but also tessellation based effects, as AMD does not have access to optimize the code (neither do most of the devs using it).

PhysX? Yeah, that part is true; NVIDIA should have made it so PhysX could be GPU-accelerated on AMD GPUs, much like Hairworks can be. Like I said in my post, Havok tends to be more popular, which works out OK.

I'm pretty sure if you're talking about The Witcher 3 and CD Projekt Red thinking it was a good idea to force 64x tessellation on all PCs for Hairworks? That was their own dumb fault. Same with the anti-aliasing on the hair... both of which they added sliders for, albeit the slider for tessellation isn't labeled as clearly... It's ridiculous that it had to be post-release that we gamers had to complain that they fucked it up. I doubt NVIDIA was all "You have to force 8x anti-aliasing and 64x tessellation on Hairworks to use it." :P

The same can be said historically of Half-Life 2 and Counter-Strike: Source on release... I doubt ATi told Valve "Hey, you have to force the Source engine to use 24-bit shader code by default, even on NVIDIA GPUs which are optimized for 16-bit shader code, even though it doesn't change the image quality." ;)

The "problem" with Adaptive Sync, is that the quality and performance are up to the monitor vendor and/or the monitor controller/scaler (not scalar) vendor, not AMD: That is how an open standard works, and is perfectly fine, as it's those two parties that makes the money off of it, not AMD. It apparently takes some time for these scaler vendors to make this work, which makes sense, as it is fairly difficult to get to work properly. But the monitors gets better and better and should reach Gsync performance on all accounts soon.

The minimum windows could be solved by AMD having a 1 frame buffer, thus multiplying that frame to the monitor, just like the Gsync monitor does internally (and why it has RAM on the monitor).

 

Gsync has a tiny latency due to having a RAM buffer in the monitor: http://www.blurbusters.com/gsync/preview2/

 

AMD has something called frame rate control, where you can set a max FPS between 55 and 95 (I hope they change that to 144), which causes DX10+ games to be limited to the set value. So you would have normal variable FPS up to that limit. If you have a FreeSync monitor, you would have your VRR window, and then FRC takes over where the top of the VRR window ends. So no VSync stutter, and no tearing above the VRR window.

Interesting, frame rate control does sound handy... why wasn't it used in this comparison then? hmm... oh well.

If you get a freesync monitor and use it with an NVidia card, you would just have a normal monitor, with proper colour settings, multiple inputs for consoles, laptops etc, and usually not at a price premium. The same cannot be said about Gsync monitors.

Yeah, but at the same time, if you're not comfortable with that thought? Why buy a Gsync monitor in the first place? Might as well buy a non-Gsync monitor, save the cash, and continue on with life. That's the whole thing with the survey they did about what you'd be willing to pay extra for Gsync or not... technically? If they beefed up the scaler on those AMD Freesync monitors to match what NVIDIA Gsync does, odds are that price gap would barely be there, much like the price gap between the AMD R9 390 and GTX 970 is barely there.

 

Both use variable VBlank to control the scanning out of frames. But since laptops don't have any scalers at all, mobile GSync is all in the graphics card, and controlled either by a small chip (unlikely) or the driver (very likely).

Yeah that is true that it's controlled by the GPU itself. PCPer did a thing about it back when that leaked "free gsync" driver floated around:

http://www.pcper.com/reviews/Graphics-Cards/Mobile-G-Sync-Confirmed-and-Tested-Leaked-Alpha-Driver

On the second page you'll note that they ripped apart the notebook and found nothing different inside.

Adaptive-Sync in general is a definite win for gamers. It's too bad it couldn't quite be handled purely in software, the way NVIDIA addressed input lag with Adaptive VSync, though that still led to screen tearing issues.

The only other kind-of solution I remember tinkering with? Lucid's VirtuMVP and its Virtual Vsync.


The beauty of the Adaptive Sync standard is that it literally cannot fail due to poor sales. Even if AMD were to disappear off the map and shut down, Adaptive Sync would still be there in every new monitor. Whether a company wants to update firmware to support FreeSync is their call, but it doesn't stop Intel from taking full advantage of Adaptive Sync as-is.

 

I still think Adaptive Sync/FreeSync would have been much better using a separate USB cable going from the monitor to the PC, and using the CPU to sync up frames instead of the GPU (technically the CPU is anyway). The CPU is issuing the instructions to the GPU, so bypassing the GPU would actually decrease driver latency.

 

That being said, I've personally not experienced any ghosting in-game, screen tearing or motion blur from using FreeSync, but then I don't game below 40 FPS or above 144, so maybe I'm doing it wrong.


@Notional I agree it's driver-based (in fact, I'm fairly certain we know for a fact it's driver-based), so my question was: 'how likely is it that mobile G-Sync is actually a different implementation than FreeSync, and even if it was, how would you know?'

As to Intel and FreeSync, I guess we will see if they try to implement it. I am dubious.

 

Well, FreeSync is just AMD's driver implementation that uses Adaptive Sync. AS is just a standard that makes it possible for a GPU to control variable VBlank (taken from eDP) via a standardized communication channel. Mobile Gsync uses neither, but probably just variable VBlank in eDP. In the end it makes little difference, when a laptop is proprietary by definition.
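For anyone wondering what "the GPU controls variable VBlank" actually means in practice, here's a toy model (plain Python, not driver code; all the timings are made up): the source simply holds off the next refresh until a new frame is ready, within the panel's limits.

```python
import random
import time

VRR_MIN_HZ, VRR_MAX_HZ = 40, 144
MIN_FRAME_TIME = 1.0 / VRR_MAX_HZ    # panel can't refresh faster than this
MAX_VBLANK     = 1.0 / VRR_MIN_HZ    # panel must be refreshed at least this often

def render_frame():
    """Stand-in for the game: frame times wobble between ~7 ms and ~30 ms."""
    time.sleep(random.uniform(0.007, 0.030))

last_refresh = time.monotonic()
for _ in range(10):
    render_frame()
    elapsed = time.monotonic() - last_refresh
    if elapsed < MIN_FRAME_TIME:
        # New frame arrived too soon: stay in VBlank until the panel is ready.
        time.sleep(MIN_FRAME_TIME - elapsed)
    elif elapsed > MAX_VBLANK:
        # Frame took too long: a real panel would already have repeated the
        # previous frame by now (that's the minimum-window problem from earlier).
        pass
    # End VBlank here: the panel scans out the new frame immediately.
    print(f"refresh after {(time.monotonic() - last_refresh) * 1000:.1f} ms")
    last_refresh = time.monotonic()
```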

 

AMD didn't have a competitor to the 5820K when I bought it. Actually, they don't have anything to compete with Intel CPUs at this moment in time other than APUs, and Intel is coming up from behind them there :)

 

The 5820K is not a consumer chip, and definitely not a gaming chip. The X chipsets and CPUs are prosumer/workstation chips.

 

I still think Adaptive Sync/FreeSync would have been much better using a separate USB cable going from the monitor to the PC, and using the CPU to sync up frames instead of the GPU (technically the CPU is anyway). The CPU is issuing the instructions to the GPU, so bypassing the GPU would actually decrease driver latency.

 

None of that makes any sense whatsoever. What would the upside of that be? You would need an extra cable, take away power from the GPU, and introduce lots of latency.


I'm pretty sure if you're talking about The Witcher 3 and CD Projekt Red thinking it was a good idea to force 64x tessellation on all PCs for Hairworks? That was their own dumb fault. Same with the anti-aliasing on the hair... both of which they added sliders for, albeit the slider for tessellation isn't labeled as clearly... It's ridiculous that it had to be post-release that we gamers had to complain that they fucked it up. I doubt NVIDIA was all "You have to force 8x anti-aliasing and 64x tessellation on Hairworks to use it." :P

Interesting, frame rate control does sound handy... why wasn't it used in this comparison then? hmm... oh well.

Yeah, but at the same time, if you're not comfortable with that thought? Why buy a Gsync monitor in the first place? Might as well buy a non-Gsync monitor, save the cash, and continue on with life. That's the whole thing with the survey they did about what you'd be willing to pay extra for Gsync or not... technically? If they beefed up the scaler on those AMD Freesync monitors to match what NVIDIA Gsync does, odds are that price gap would barely be there, much like the price gap between the AMD R9 390 and GTX 970 is barely there.

 

Yeah that is true that it's controlled by the GPU itself. PCPer did a thing about it back when that leaked "free gsync" driver floated around:

http://www.pcper.com/reviews/Graphics-Cards/Mobile-G-Sync-Confirmed-and-Tested-Leaked-Alpha-Driver

 

On the second page you'll note that they ripped apart the notebook and found nothing different inside.

Adaptive-Sync in general is a definite win for gamers. It's too bad it couldn't quite be handled purely in software, the way NVIDIA addressed input lag with Adaptive VSync, though that still led to screen tearing issues.

 

CDPR has officially stated that they could not optimize Hairworks, which indicates that they just have basic GameWorks access. That means they are only provided with black-boxed DLL extensions. This means they probably were not able to set the tessellation level themselves. I still don't know if the new patches give that ability, but I've yet to see any evidence that they do.

 

The AA level has always been possible to set for HairWorks (although it took an ini file edit in the beginning).

 

Frame rate control was released with Fiji AFAIK, so it hadn't launched when the article was written. Also, it's only about Gsync, so AMD tech is not present.

 

Problem is that the Gsync module is an FPGA with RAM and all sorts. It's more of a computer than a standard monitor controller/scaler. It's expensive to make, over-complicated, and has limited functionality. "Beefing up" a standard monitor controller would make little difference. That's what makes Adaptive Sync so clever: it doesn't need all that much added fluff, and can therefore be manufactured at a much lower cost (which is why we've seen AS monitors that cost the same as non-AS monitors). All because the graphics card dictates to the monitor in a simple one-way communication.

 

The thing is, as an industry standard, NVidia is free to adopt AS without cost right now. They can even call their driver solution Gsync, like AMD calls their driver solution Freesync. So even though AS technically brings vendor lock-in with AMD itself right now, it's not AMD's fault but NVidia's, as they choose not to adopt a VESA DisplayPort standard.

 

That gsync driver for the Asus laptop proved two things:

  1. You can get Gsync without a stupidly expensive module, which NVidia claimed was not possible.
  2. NVidia obviously knows how to write a driver that can change variable VBlank on eDP, so really it should just be a matter of adding the AS standard's framework around that, with their added knowledge of Gsync. So it should not be a problem to release AS support at all.

Yeah, I guess only the remaining TCON needs to be able to support such a thing. Maybe they all do by default, idk.

 

Adaptive Sync IS Freesync (the hardware side), so it has nothing to do with NVidia's proprietary Adaptive VSync. A monitor still needs the hardware to support the AS standard, so the GPU can take control of the scanning intervals.


The 5820K is not a consumer chip, and definitely not a gaming chip. The X chipsets and CPUs are prosumer/workstation chips.

 

 

I'm a consumer and I managed to buy it. What makes something a consumer part and other things not? Also, it games just fine, actually. I got much better performance going from a 2600K.


 

Nothing prevents you from buying such chips, so of course you managed that. Intel defines their products and markets, so they decide. The 5820K is a Haswell running at lower clock speeds than a 4790K, for instance, making the 5820K worse for gaming. The 5960X so many people love is actually a bad gaming chip, as it runs about 800 MHz slower per core than the 4790K. The doubled core count is not utilized by any games (yet; maybe DX12/Vulkan will change that).


Games don't even take advantage of 4 cores, so it hardly makes any difference. Besides, CPUs are not all that important when it comes to games. If you get anything half good, you will run games just great. I get about 120 fps in Elite Dangerous at max settings at 2560x1440 in the middle of a dogfight, on a 680 too, which is really quite good. In War Thunder I am getting around 100 fps at the same resolution. Also, the 5820K is a K, not an X.


 

They take advantage of core speed (MHz). That is why Total Biscuit hit 99-100% on core 1 in Dying Light on his 5960X. Had he had a 4790K, he would either have had less CPU usage on core 1 or gotten higher FPS.

Depends on the games. Some are CPU limited, some GPU limited.
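Back-of-the-envelope version of that point (the cycle count below is invented for the example, it's not a benchmark): if the game's critical path lives on one thread, only that core's clock sets the FPS ceiling, no matter how many other cores sit idle.

```python
CYCLES_PER_FRAME = 35e6   # pretend the main thread needs this many cycles per frame

for name, clock_ghz, cores in [("4790K-ish", 4.4, 4), ("5960X-ish", 3.5, 8)]:
    frame_time_s = CYCLES_PER_FRAME / (clock_ghz * 1e9)
    fps_ceiling = 1.0 / frame_time_s
    # Note: core count never enters the formula for a single-threaded bottleneck.
    print(f"{name}: {cores} cores at {clock_ghz} GHz -> main thread caps out "
          f"around {fps_ceiling:.0f} fps")
```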


It's still really stupid how games can't use much of our hardware, especially CPUs. I have been sitting in some games with my CPU at idle and my 680 at about 50% usage. If only game engines were designed to use more of the resources in a system, things would be much better. Still, that's why we have settings in game menus.


 

48% of all Steam users only have 2 CPU cores. Only <5% of users have more than 4 CPU cores. It makes no sense for developers to write heavily multithreaded code for such a small user base. Again, things might change with DX12/Vulkan, but so far we will hardly see anything use more than 4 cores anytime soon.
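Amdahl's law puts a number on that (standard textbook formula; the 60% parallel share below is just a made-up example): if only part of the per-frame work parallelises, extra cores stop paying off very quickly.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only `parallel_fraction` of the work scales."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assume 60% of a frame's CPU work can run in parallel (illustrative figure).
for cores in (2, 4, 6, 8, 16):
    print(f"{cores:2d} cores -> at most {amdahl_speedup(0.6, cores):.2f}x faster")
```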

 

CPUs can't really do anything graphical per se. With DX12 and AMD, you might be able to leverage some of the iGPU power of an APU, if the game is made for it.


Idk about these results; they are interesting, but the sample size is quite small.

 

Also found the "fanboy" pie to be revealing ( didn't realize there where so many self identified fanboys )

I'm not gonna attack you, but I feel like just because they identify with Nvidia or AMD doesn't mean they are fanboys; maybe they just enjoy the features that are exclusive to one brand versus the other. The only way they're fanboys is if they flat-out say the other brand has horrible products or that the people who buy them are stupid. The agnostics are people who prefer both, depending on what kind of a build they're trying to make. Personally, I'm agnostic: if I'm building budget PCs for my friends, I tend to go with AMD because of their better price-to-performance ratio. If I'm making high-end builds for myself or my family, I'm generally gonna go with Nvidia because their line-up is more expensive but generally more interesting in terms of features and performance. They're both fantastic companies with amazing cards. However, that's my opinion, so take it with a grain of salt.


Heyyo,

CDPR has officially stated that they could not optimize Hairworks, which indicates that they just have basic GameWorks access. That means they are only provided with black-boxed DLL extensions. This means they probably were not able to set the tessellation level themselves. I still don't know if the new patches give that ability, but I've yet to see any evidence that they do.

The AA level has always been possible to set for HairWorks (although it took an ini file edit in the beginning).

That makes absolutely no sense to me... they had ways of setting AA on Hairworks from the beginning... but not tessellation? You see how that doesn't make sense, right? I think it was patch 1.7 that added a slider for "NVIDIA Hairworks Preset", which is probably the tessellation slider.

[Screenshot: The Witcher 3 graphics options showing the NVIDIA Hairworks preset slider]

Of course, I haven't bought a COD game in years, lol, so I have no idea how COD: Ghosts handles Hairworks either, but maybe someone can give some insight into that???

Sadly, I have no idea how to code beyond basic scripting for GTA III... so I have no idea, tbh, how easy or hard it is to implement and tweak.

Frame rate control was released with Fiji AFAIK, so it hadn't launched when the article was written. Also, it's only about Gsync, so AMD tech is not present.

After reading over what frame rate control is? That's the same as using RivaTuner Statistics Server and setting a max FPS in that... NVIDIA has it built into their driver as well, but it's a hidden tweak that can be used via NVIDIA Inspector. It's still a decent idea, but it's nothing new, tbh. I found using a framerate cap of 30 fps on Final Fantasy XIII and XIII-2 made the games a lot more playable, as they wouldn't constantly flip between slower and faster animations in combat from frametime variance. I even plopped down a guide to help those with the same issue as me.

http://steamcommunity.com/sharedfiles/filedetails/?id=388731782
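The mechanism behind all of those limiters (RTSS, the hidden driver cap, AMD's version) is basically the same idea; here's a bare-bones sketch (illustrative Python, not how any of them are actually implemented; real limiters use higher-resolution timers and smarter pacing):

```python
import time

def run_capped(render_frame, cap_fps=30, frames=10):
    """Call render_frame at most cap_fps times per second (toy frame limiter)."""
    budget = 1.0 / cap_fps
    deadline = time.monotonic()
    for _ in range(frames):
        render_frame()
        deadline += budget
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)            # finished early: wait out the budget
        else:
            deadline = time.monotonic()      # ran long: re-anchor, don't "catch up"

# Pretend a frame takes 12 ms; the cap evens it out to a steady 30 fps.
run_capped(lambda: time.sleep(0.012), cap_fps=30)
```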

Problem is that the Gsync module is an FPGA with RAM and all sorts. It's more of a computer than a standard monitor controller/scaler. It's expensive to make, over-complicated, and has limited functionality. "Beefing up" a standard monitor controller would make little difference. That's what makes Adaptive Sync so clever: it doesn't need all that much added fluff, and can therefore be manufactured at a much lower cost (which is why we've seen AS monitors that cost the same as non-AS monitors). All because the graphics card dictates to the monitor in a simple one-way communication.

 

The thing is, as an industry standard, NVidia is free to adopt AS without cost right now. They can even call their driver solution Gsync, like AMD calls their driver solution Freesync. So even though AS technically brings vendor lock-in with AMD itself right now, it's not AMD's fault but NVidia's, as they choose not to adopt a VESA DisplayPort standard.

 

That gsync driver for the Asus laptop proved two things:

  • You can get Gsync without a stupidly expensive module, which NVidia claimed was not possible.
  • NVidia obviously knows how to write a driver that can change variable VBlank on eDP, so really it should just be a matter of adding the AS standard's framework around that, with their added knowledge of Gsync. So it should not be a problem to release AS support at all.
Yeah, I guess only the remaining TCON needs to be able to support such a thing. Maybe they all do by default, idk.

 

Adaptive Sync IS Freesync (the hardware side), so it has nothing to do with NVidia's proprietary Adaptive VSync. A monitor still needs the hardware to support the AS standard, so the GPU can take control of the scanning intervals.

True, NVIDIA could very well adopt AMD's FreeSync... but ultimately? It's their choice not to. In the end? A better solution would have been for VESA to make an industry standard and publish it themselves, instead of what we have now: "here are the basics of how it works, go make your own solution with it", which has caused this splintering...

Nothing prevents you from buying such chips, so of course you managed that. Intel defines their products and markets, so they decide. The 5820K is a Haswell running at lower clock speeds than a 4790K, for instance, making the 5820K worse for gaming. The 5960X so many people love is actually a bad gaming chip, as it runs about 800 MHz slower per core than the 4790K. The doubled core count is not utilized by any games (yet; maybe DX12/Vulkan will change that).

It's my hope and dream too... more performance out of my i7-3770K would be handy! :)

It's still really stupid how games can't use much of our hardware, especially CPUs. I have been sitting in some games with my CPU at idle and my 680 at about 50% usage. If only game engines were designed to use more of the resources in a system, things would be much better. Still, that's why we have settings in game menus.

No doubt. Mainly? DirectX 9 support from game devs needs to really die or become an afterthought or something, lol. I seriously don't know anyone on Vista or XP that games... their PCs are essentially social media machines with Google Chrome or Firefox, Adblock Plus, and malware blocking on them, to prevent me from having to fix the things so darn much. :P

World of Tanks is definitely the prime example of how their BigWorld engine needs a port from DirectX 9 to DirectX 11, or heck, skip that entirely and go to DirectX 12! That game used to drive me nuts! :(

My PC is linked in my sig for specs...

[Screenshot: World of Tanks running at 48.4 fps with a hardware monitoring overlay]

So yeah... I've got a beefy enough PC, with even a dedicated SSD for World of Tanks... 48.4 fps... why? The BigWorld engine (aka BugWorld engine) is single-threaded... even though quad-core CPUs have been out for almost a decade (the Intel Core 2 Quad Q6600 released in 2006).

Both GPUs are at 50% utilization and 60°C, with 1.3 GB of VRAM used...

about 4 GB of RAM used by the game...

CPU? Four cores at idle, one at 44% (physics is my guess), one at 23% (the FMOD sound system), one at 13% (the Windows OS) and one at a high load of 85%... Thanks, WarGaming. Fix your damn game engine already... BugWorld... you annoy me, lol.

The game doesn't have multi-threading... it has "idle core detection", which flips the computation over to the most idle threads of the bunch... oh yay. :P

To their credit though, in patch 9.9? They revamped the lighting and shadow shaders and implemented a REALLY good version of anti-aliasing... Temporal Super Sampling Anti-Aliasing (TSSAA). It's definitely one of the best versions of AA I've seen yet. :)

So it is a lot more optimized than when I took that screenshot, back in version 9.5 I think it was.

Heyyo,

My PC Build: https://pcpartpicker.com/b/sNPscf

My Android Phone: Exodus Android on my OnePlus One 64bit in Sandstone Black in a Ringke Fusion clear & slim protective case


I'm not gonna attack you, but I feel like just because they identify with Nvidia or AMD doesn't mean they are fanboys; maybe they just enjoy the features that are exclusive to one brand versus the other. The only way they're fanboys is if they flat-out say the other brand has horrible products or that the people who buy them are stupid. The agnostics are people who prefer both, depending on what kind of a build they're trying to make.

 

The question was whether they were fans. That's basically the same kind of thing as a fanboy; the latter's just the extreme version, where people go internet crusading for their team.


The 5820K is not a consumer chip, and definitely not a gaming chip. The X chipsets and CPUs are prosumer/workstation chips.

 

Unless you want to run SLI or CrossFire at either a) higher than 8x/8x or b) 3-way.


The kinds of setups you're talking about are most definitely in the "prosumer" category. At that point, you've left the regular consumer market and stepped into the prosumer/high-end enthusiast market.

