
so, what is nVidia's Fast Sync introduced with Pascal GPUs?

28 minutes ago, zMeul said:

because G-Sync becomes useful in cases where the frame rate drops below the monitor's normal refresh rate - the G-Sync module will bring the monitor's refresh rate down to what the game's engine is capable of rendering

whereas FastSync works on the opposite side, where the framerate is way higher than the monitor's refresh rate

 

they can totally work at the same time

Thanks.

Yeah, so Fast Sync + G-Sync complement each other.
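Just to picture the "opposite sides" idea, here's a rough sketch of which mechanism would be doing the work at any given moment - my own mental model, not anything nVidia has published, and the 144 Hz panel is just an example:

```python
# Rough mental model only -- not Nvidia's actual driver logic.
REFRESH_HZ = 144  # example G-Sync monitor

def active_sync_mechanism(render_fps: float) -> str:
    if render_fps < REFRESH_HZ:
        # G-Sync territory: the module pulls the panel's refresh rate
        # down to whatever the engine can actually deliver.
        return f"G-Sync: panel refreshes at {render_fps:.0f} Hz"
    # Fast Sync territory: the GPU keeps rendering flat out and only
    # the newest completed frame gets scanned out each refresh.
    return f"Fast Sync: panel stays at {REFRESH_HZ} Hz, surplus frames discarded"

print(active_sync_mechanism(90))   # below refresh -> G-Sync handles it
print(active_sync_mechanism(300))  # above refresh -> Fast Sync handles it
```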



I watched someone's video on this, probably Logan, probably Linus, I don't remember. He talked about it and showed a chart that simply compared VSync on, VSync off, and Fast Sync.

 

The result is: no screen tearing! :)



Only relevant on the pro scene, but interesting nonetheless.

- snip-


14 hours ago, watt said:

Did you read the source? VSync is off; Windows is preventing the tearing. I only pasted what was listed in the chart.

 

  • Windowed V-Sync:

It is possible to completely bypass control panel and in-game V-Sync options to utilize Windows’ native V-Sync implementation, which is also triple-buffered. By running the game in a window or borderless window, Windows handles the vertical synchronization, which can lead to smoother gameplay and lower input lag. Make sure you disable V-Sync in all other areas before doing so.

The article you linked doesn't provide a source. I'd love to know if anyone has any technical insight on the matter. I read somewhere that Microsoft started forcing VSync on all apps from Windows 8 onward? Windows 7 didn't do this as far as I know? Questions... but I'm probably going too far off-topic.

 

I'm thinking about starting another thread when I feel like looking up and chewing through some documentation. There's a lot of conflicting information around. 


11 hours ago, zMeul said:

Triple Buffering works with VSync ON - FastSync works with VSync OFF

Ugh, I know... but it is effectively a HARDWARE implementation of the SOFTWARE Triple Buffering V-sync solution? Yes, because that is exactly what it is attempting to do.



11 minutes ago, Curufinwe_wins said:

but it is effectively a HARDWARE implementation of the SOFTWARE Triple Buffering V-sync solution?

I cannot comment on that since there is a lot of conflicting info around Triple Buffering - even the Anand article that is often cited in computer programmer circles is regarded as flawed

Tom Petersen, on stage, said that Triple Buffering doesn't drop frames and thus it creates back pressure - it's not true, you can totally drop frames with Triple Buffering


1 minute ago, zMeul said:

I cannot comment on that since there is a lot of conflicting info around Triple Buffering - even the Anand article that is often cited in computer programmer circles is regarded as flawed

Tom Petersen, on stage, said that Triple Buffering doesn't drop frames and thus it creates back pressure - it's not true, you can totally drop frames with Triple Buffering

hmm, either way it seems to address the same sort of situations triple buffering was introduced to try to solve, on a much broader and more general scale.

 

We will have to hope someone does a really good latency comparison between all of them at different fps levels.



1 minute ago, Curufinwe_wins said:

hmm, either way it seems to address the same sort of situations triple buffering was introduced to try to solve, on a much broader and more general scale.

in the same circles, there is evidence that id Software used the same technique as FastSync 19 years (!!) ago in Doom - it's still unclear if id used it in the subsequent id Tech engine(s)


On 5/17/2016 at 2:47 PM, zMeul said:

this is beneficial for games that render at a higher framerate than your monitor can display, where you want latency as low as you can get but you don't want tearing

VSync ON gets rid of tearing but adds latency

VSync OFF gets rid of the latency problem but adds tearing

the middle ground is FastSync - adds some latency for no tearing

So they made a proprietary VSync optimized for their cards - essentially VSync 2.0


1 minute ago, FirstArmada said:

So they made a proprietary VSync optimized for their cards - essentially VSync 2.0

how can you call it VSync when FastSync uses VSync OFF?!

it's a form of buffering and not a form of frame pacing / synchronization with the display refresh rate


1 minute ago, zMeul said:

how can you call it VSync when FastSync uses VSync OFF?!

it's a form of buffering and not a form of frame pacing / synchronization with the display refresh rate

I never said you needed it on. I said it's pretty much an iteration of VSync - despite not working the same way, it does the same thing.


On 5/17/2016 at 1:32 PM, watt said:

I also thought this was very similar to borderless fullscreen or windowed with VSync off: no tearing, no FPS cap, and reduced latency compared to normal fullscreen VSync, at least on AMD cards. Nvidia's Fast Sync looks like it's much faster though.

 

For comparison from here: http://www.displaylag.com/reduce-input-lag-in-pc-games-the-definitive-guide/

 

Vsync on = 102ms

Windowed vsync = 81ms

Vsync off = 61ms

 

I hope we get to play with this new feature soon.

 

 

what did they do to get those numbers? Assuming someone hasn't done something incredibly stupid like leave "maximum pre-rendered frames" at the default of 3, VSync shouldn't add more than about one frame (~16.7 ms at 60 Hz) of lag
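Rough numbers behind that claim (my own back-of-the-envelope arithmetic, assuming a 60 Hz display):

```python
# Back-of-the-envelope lag estimate, assuming a 60 Hz display.
refresh_hz = 60
frame_time_ms = 1000 / refresh_hz       # ~16.7 ms per refresh

# Double-buffered V-Sync holds a finished frame until the next refresh,
# so it adds up to roughly one refresh interval of extra lag.
vsync_lag_ms = frame_time_ms            # ~16.7 ms

# Leaving "maximum pre-rendered frames" at 3 lets the CPU queue up to
# 3 frames ahead of the one being displayed, each worth another
# refresh interval once the frame rate is capped by V-Sync.
prerender_queue = 3
worst_case_ms = (1 + prerender_queue) * frame_time_ms   # ~66.7 ms

print(f"V-Sync alone: ~{vsync_lag_ms:.1f} ms")
print(f"V-Sync + {prerender_queue}-frame pre-render queue: ~{worst_case_ms:.1f} ms")
```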



This sounds cool, I like it.

Likely would still screw with Bethesda physics.



Edit 1:

The guy presenting in the video said (/let slip) that he expects many GPUs will support this

Quote

We are gonna make it available across all the GPUs that support it, I expect it to be fairly broad

Source: https://www.youtube.com/watch?v=WpUX8ZNkn2U&t=465

but nVidia, on the other hand, of course always wants you to buy the new cards, which in this case are the Pascal ones. I expect this feature could easily just be included in the driver, and most GPUs (at least the ones where high-FPS output is even relevant) should support it.
Is it really confirmed that only Pascal GPUs will have Fast Sync?

 

Edit 2: I can answer the latter question regarding GSync myself now; the answer is at this and later parts of the video

-------------------------------------------

So will GSync and Fast Sync be able to work together?

From what I know, GSync forced VSync on at the beginning but also adapted the monitor's refresh rate to the frame output of the card, which meant the framerate was capped. Later nVidia gave you the option to turn off VSync while still having that adaptive refresh rate feature through GSync.

Is there a point to GSync for games that always produce more FPS than the monitor's refresh rate, so that Fast Sync would essentially be the better option here?

 

[screenshot from the review showing the driver's Vertical Sync setting, where Fast and Adaptive appear as options of the same drop-down]

 

Which implies that Fast and adaptive (GSync) are exclusive and can't work together. Is that true, or is the option for GSync somewhere else, and does this only select which method of VSync will be used, regardless of whether GSync is controlling the monitor's refresh rate or not?

The review I pulled that screenshot from doesn't talk about GSync and only explains Fast Sync on its own...


On 19/05/2016 at 0:17 AM, Celmor said:

-snip-

Adaptive VSync is Nvidia's implementation of console-style VSync, where VSync turns on at the monitor's refresh rate but disengages when a game takes too long: anything below 60 fps (or your equivalent) is VSync off, anything above is VSync on.
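In other words the driver is basically making this per-frame decision - a simplification of Adaptive VSync on my part, assuming a 60 Hz display:

```python
# Simplified model of Nvidia's Adaptive VSync, assuming a 60 Hz display.
REFRESH_HZ = 60

def vsync_enabled(current_fps: float) -> bool:
    # At or above the refresh rate: sync to kill tearing.
    # Below it: present immediately instead of dropping to 30 fps steps.
    return current_fps >= REFRESH_HZ

print(vsync_enabled(75))  # True  -> synced, no tearing
print(vsync_enabled(45))  # False -> unsynced, no half-rate stutter
```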



On 5/18/2016 at 5:58 AM, MrDynamicMan said:

Only relevant on the pro scene, but interesting nonetheless.

Not really. If the slight additional latency over no VSync is imperceptible, then this could be a beloved form of VSync.

 

 



So... is this simply Triple Buffering done on a hardware/driver level for DirectX?



1 hour ago, Shahnewaz said:

So... is this simply Triple Buffering done on a hardware/driver level for DirectX?

At 15:20 Tom explains that it is not.



48 minutes ago, MegaDave91 said:

At 15:20 Tom explains that it is not.

What he said was not correct, though. It doesn't cause "back pressure", as the GPU can flip/discard and re-write between the two back buffers independently, while the display uses the front buffer to refresh, exactly like how Fast Sync does. Nvidia just calls the second back buffer "Last Rendered Buffer", and everything else works the same way as Triple Buffering.

http://www.anandtech.com/show/2794/2
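Here's a minimal sketch of that buffer rotation as I understand it - my own illustration based on the description above, and the names front / back / last_rendered are just labels, not Nvidia's API:

```python
# Illustrative model of "render unthrottled, scan out only the newest
# completed frame". Names and structure are my own, not Nvidia's API.

class Buffers:
    def __init__(self):
        self.front = "frame A"          # what the display is scanning out
        self.back = "frame B"           # what the GPU renders into next
        self.last_rendered = "frame C"  # newest fully completed frame
        self.new_frame_ready = False

    def frame_completed(self):
        # GPU finished rendering into `back`: it becomes the newest
        # candidate for display, and the stale candidate is reused as
        # the next render target (i.e. it is silently discarded).
        self.back, self.last_rendered = self.last_rendered, self.back
        self.new_frame_ready = True

    def vblank(self):
        # Display wants a frame: flip in the newest completed one,
        # otherwise keep showing the current front buffer.
        if self.new_frame_ready:
            self.front, self.last_rendered = self.last_rendered, self.front
            self.new_frame_ready = False
        return self.front

bufs = Buffers()
bufs.frame_completed()   # GPU races ahead...
bufs.frame_completed()   # ...the older candidate gets overwritten
print(bufs.vblank())     # display picks up the newest completed frame
```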



28 minutes ago, Shahnewaz said:

What he said was not correct, though. It doesn't cause "back pressure", as the GPU can flip/discard and re-write between the two back buffers independently, while the display uses the front buffer to refresh, exactly like how Fast Sync does. Nvidia just calls the second back buffer "Last Rendered Buffer", and everything else works the same way as Triple Buffering.

http://www.anandtech.com/show/2794/2

not exactly

if my understanding is correct, Triple Buffering represents 3 individual pages or frames, while Fast Sync uses a whole lot more - the front buffer is one page, last rendered is the 2nd one, but... the back buffer is a collection of many pages, since the renderer will not synchronize its rendering speed to the refresh rate

 

on its own, Triple Buffering doesn't drop frames; the developer can, in fact, code it to drop frames when certain conditions are (not) met

 

here's one other key difference: with Triple Buffering, the back buffers are copied to the front buffer, while with newer APIs like DirectX, pages (buffers) are swapped - thus less delay

the DX buffering technique is called page flipping
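Toy example of the copy-versus-flip difference (my own illustration, not any particular API):

```python
# Toy illustration of blit-copy vs page flip; not any particular API.

front = bytearray(1920 * 1080 * 4)  # surface the display scans out
back = bytearray(1920 * 1080 * 4)   # surface the GPU just finished

def present_by_copy(front, back):
    # Copy model: ~8 MB of pixel data has to be moved into the front
    # buffer before the display can show the new frame.
    front[:] = back

def present_by_flip(front, back):
    # Page-flip model: only the roles of the two surfaces change;
    # no pixel data moves, so the "present" is nearly instant.
    return back, front

present_by_copy(front, back)
front, back = present_by_flip(front, back)
```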


18 minutes ago, zMeul said:

not exactly

if my understanding is correct, Triple Buffering represents 3 individual pages or frames, while Fast Sync uses a whole lot more - the front buffer is one page, last rendered is the 2nd one, but... the back buffer is a collection of many pages, since the renderer will not synchronize its rendering speed to the refresh rate

Nowhere in the video does Tom say that there are more than the 3 specified buffers (Front, Back, Last Rendered) in the rendered-frames queue, or that the back buffer works this way. In fact, there is no benefit in collecting rendered frames in the Back Buffer since all of them are outdated. The Last Rendered Buffer holds the most recently completed frame, and that's the one shown when the display asks for a new frame.

20 minutes ago, zMeul said:

on its own, Triple Buffering doesn't drop frames; the developer can, in fact, code it to drop frames when certain conditions are (not) met

Neither Triple Buffering nor Fast Sync needs to "drop frames". The way it works is that the GPU just re-writes the more outdated of the two back buffers. Dropping frames, or clearing the buffer, isn't necessary at all.



41 minutes ago, Shahnewaz said:

Nowhere in the video does Tom say that there are more than the 3 specified buffers (Front, Back, Last Rendered) in the rendered-frames queue, or that the back buffer works this way. In fact, there is no benefit in collecting rendered frames in the Back Buffer since all of them are outdated. The Last Rendered Buffer holds the most recently completed frame, and that's the one shown when the display asks for a new frame.

Neither Triple Buffering nor Fast Sync needs to "drop frames". The way it works is that the GPU just re-writes the more outdated of the two back buffers. Dropping frames, or clearing the buffer, isn't necessary at all.

ok, with FastSync you should replace buffers with "regions", and I'll explain why later

not sure if it was in the nVidia meeting or in the PCPer stream, but Tom explained that the last rendered buffer is a collection of frames the engine will continuously render as the front buffer is scanned out on the display

why is that important? frame pacing logic

from my understanding, the last rendered buffer will use logic to select a single frame from the collection to send to the front buffer - it will not be exactly the last rendered frame; why? to prevent / alleviate choppiness (judder?!) - this is especially important when the rendered framerate varies

 

and of course, I could be utterly completely wrong

 

---

 

FastSync would be interesting to test, especially at lower FPS than the monitor's refresh rate, but I don't have a GTX 10xx series card


