There's really no point in getting a GTX 1060

c00face
34 minutes ago, IGJoe2192 said:

Oh, but it very much is. There seem to be a good number of people running RX 480 CrossFire.

Only those that are brand loyal. Ask any of us who have used CrossFire in the past few years: it's not a great experience overall. AMD will bolster CrossFire support for a new hardware launch, and then a year later you find that new games don't have support, driver support is often weeks or months late for CrossFire, and it's still riddled with flickering or frametime issues. There's a certain "cool" factor about having dual graphics cards in a system, and the inner geek in some of us wants to experience it first-hand, but it ain't all it's cracked up to be. SLI is not much different when it comes to the downsides. DX12 multi-adapter has great potential, but if that's what people are hoping will fix the CrossFire problems, they should understand something: Nvidia won't need an SLI bridge to do DX12 multi-adapter, so the argument doesn't weigh favorably for current multi-GPU options, especially pairing mid-range cards together for the same price as the next tier up (the GTX 1070, which performs 50-60% faster than a single 480 on average), and at double the power draw of that next tier up.
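The value argument above can be sketched with some quick math. The prices, the CrossFire scaling factor, and the board-power figures below are illustrative assumptions, not measured numbers; the 50-60% figure is taken from the post:

```python
# Rough value comparison: 2x RX 480 CrossFire vs a single GTX 1070.
# All prices, TDPs and the scaling factor are assumed for illustration.

rx480_perf = 1.0                        # single RX 480 as the baseline
gtx1070_perf = 1.55                     # ~50-60% faster per the post (midpoint)
cf_scaling = 0.8                        # optimistic CrossFire scaling when it works
rx480_price, gtx1070_price = 240, 450   # assumed street prices (USD)
rx480_power, gtx1070_power = 150, 150   # assumed board power (W)

# Dual 480s: double the price and power, but less than double the performance
cf_perf = 2 * rx480_perf * cf_scaling
cf_price = 2 * rx480_price
cf_power = 2 * rx480_power

print(f"2x RX 480: perf {cf_perf:.2f}, ${cf_price}, {cf_power} W")
print(f"GTX 1070:  perf {gtx1070_perf:.2f}, ${gtx1070_price}, {gtx1070_power} W")
print(f"perf/watt ratio (1070 vs CF): {(gtx1070_perf/gtx1070_power) / (cf_perf/cf_power):.2f}x")
```

Even with optimistic 80% scaling, the dual-480 setup only edges out the 1070 in raw throughput while drawing twice the power, and that's before counting the games where CrossFire doesn't engage at all and you're left with single-480 performance.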

R9 3900XT | Tomahawk B550 | Ventus OC RTX 3090 | Photon 1050W | 32GB DDR4 | TUF GT501 Case | Vizio 4K 50'' HDR

 


On 15/07/2016 at 5:28 PM, Misanthrope said:

I kind of agree: the choice should clearly be the 480, then the 1070. There's a huge price gap in between, but there's also a big performance gap between 1080p and even 1440p.

The 980 was never a 1440p card; in fact, my 980 Ti was barely one.

Until a card can run almost any game out there at a resolution, maxed out and over 60 fps, it is not a card for that resolution.

 

 

7800x3d - RTX 4090 FE - 64GB-6000C30 - 2x2TB 990 Pro - 4K 144HZ

PCPP: https://uk.pcpartpicker.com/list/mdRcqR

 


1 hour ago, Supermangik said:

No, I'm not.

Then why the hate? You realize AMD markets hate towards Nvidia? That's where it comes from. AMD has superior hardware but lacks the resources to properly utilize it. Nvidia used to market hate towards AMD too; now that they're significantly more successful, they can't, because it would paint a negative image of them. You see, we're hypocrites, you and I. Everyone, actually. We say we're for the little guy, yet we shop at Walmart to save a buck. But we'll get upset if Walmart picks on the small business owner, because it reminds us of what we're doing. So it's OK for AMD to market hate, but not Intel or Nvidia.

 

Don't spread marketing for companies, please. Make them pay for marketing.

 

1 hour ago, iiNNeX said:

The 980 was never a 1440p card; in fact, my 980 Ti was barely one.

Until a card can run almost any game out there at a resolution, maxed out and over 60 fps, it is not a card for that resolution.

 

 

You sir have the worst 980 Ti in history, because my 980 Ti busts a nut all over anything at 1440. Not one of my games even dips below 60 fps; the lowest I see is in the 70s in demanding console ports.

If anyone asks you never saw me.


8 minutes ago, Briggsy said:

Only those that are brand loyal. Ask any of us who have used CrossFire in the past few years: it's not a great experience overall. AMD will bolster CrossFire support for a new hardware launch, and then a year later you find that new games don't have support, driver support is often weeks or months late for CrossFire, and it's still riddled with flickering or frametime issues. There's a certain "cool" factor about having dual graphics cards in a system, and the inner geek in some of us wants to experience it first-hand, but it ain't all it's cracked up to be. SLI is not much different when it comes to the downsides. DX12 multi-adapter has great potential, but if that's what people are hoping will fix the CrossFire problems, they should understand something: Nvidia won't need an SLI bridge to do DX12 multi-adapter, so the argument doesn't weigh favorably for current multi-GPU options, especially pairing mid-range cards together for the same price as the next tier up (the GTX 1070, which performs 50-60% faster than a single 480 on average), and at double the power draw of that next tier up.

You are wasting your breath; I have had CrossFire setups since the HD 3000 series: HD 3870s, an HD 5970, HD 7970 GHz cards and now RX 480s. Support was best with the 3000 series when the technology was new, and I will admit it has tapered off over the years. I do, however, think we will see better support with the new APIs.


1 minute ago, App4that said:

You sir have the worst 980 Ti in history, because my 980 Ti busts a nut all over anything at 1440. Not one of my games even dips below 60 fps; the lowest I see is in the 70s in demanding console ports.

I ran two 980 Ti Hybrids, both at nearly 1500 MHz. Not sure what games you play, but when I played a game where SLI wasn't well supported, it didn't stay in the 60s the entire time.

 

Also, I max everything out, and AA is usually 2x or 4x MSAA. Witcher 3 with all settings on will show you what I mean.


 


3 minutes ago, iiNNeX said:

I ran two 980 Ti Hybrids, both at nearly 1500 MHz. Not sure what games you play, but when I played a game where SLI wasn't well supported, it didn't stay in the 60s the entire time.

Also, I max everything out, and AA is usually 2x or 4x MSAA. Witcher 3 with all settings on will show you what I mean.

I play Witcher 3; no problem staying in the 70s at 1440. Heavy AA is for lower resolutions. Have you tried lowering the AA and seeing if you can even tell a difference visually? Because you will FPS-wise ;) 



2 minutes ago, App4that said:

I play Witcher 3; no problem staying in the 70s at 1440. Heavy AA is for lower resolutions. Have you tried lowering the AA and seeing if you can even tell a difference visually? Because you will FPS-wise ;) 

I have, yes, and even at 4K at least 2x AA is required. I seem to notice AA a lot more than other people, though, so it might be that...

 

I hope with the 1080s I can reach 165 Hz in most games at 1440p; with the 980 Tis it didn't happen as often as I thought it would. In games like BF4, Overwatch and the like it did, on max settings and max AA even, which is great, but playing with ULMB is so nice, and you need a constant 120+ Hz for that to work properly.
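The "constant 120+ Hz" requirement comes down to frame-time budget: with ULMB (backlight strobing) there's no variable refresh, so every frame has to finish inside the refresh window or you get visible judder. A quick sketch of the budgets involved:

```python
# Frame-time budget per refresh rate: for strobing modes like ULMB,
# each frame must be rendered within this window, every single frame.

def frame_budget_ms(hz):
    """Milliseconds available to render one frame at a given refresh rate."""
    return 1000.0 / hz

for hz in (60, 120, 144, 165):
    print(f"{hz:>3} Hz -> {frame_budget_ms(hz):5.2f} ms per frame")
```

At 165 Hz the budget is about 6 ms per frame; a single slow frame that takes 10 ms means a missed strobe cycle, which is why average fps in the 120s isn't enough and the minimums are what matter.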


 


2 minutes ago, iiNNeX said:

I have, yes, and even at 4K at least 2x AA is required. I seem to notice AA a lot more than other people, though, so it might be that...

I hope with the 1080s I can reach 165 Hz in most games at 1440p; with the 980 Tis it didn't happen as often as I thought it would. In games like BF4, Overwatch and the like it did, on max settings and max AA even, which is great, but playing with ULMB is so nice, and you need a constant 120+ Hz for that to work properly.

Hey, if you notice it, you notice it. It's your experience. But I'd argue that anything over 80 fps in non-shooters is bordering on a waste; at that point you're just chasing a number rather than a difference in the experience. In shooters, though, I totally agree.



Wait. Why the heck are you saying this when the 1060 isn't even out yet and its pricing model hasn't really been released? I was actually thinking RIP RX 480 if the 1060 can offer better performance and the full 8GB VRAM for $20-30 more. 

Current PC: Origin Millennium- i7 5820K @4.0GHz | GTX 980Ti SLI | X99 Deluxe 

 


2 minutes ago, afyeung said:

Wait. Why the heck are you saying this when the 1060 isn't even out yet and its pricing model hasn't really been released? I was actually thinking RIP RX 480 if the 1060 can offer better performance and the full 8GB VRAM for $20-30 more. 

Only if the driver overhead in DX12/Vulkan is not taken into account. Check out the recent DOOM benches; the 480 got a massive boost of over 20%.

Archangel (Desktop) CPU: i5 4590 GPU:Asus R9 280  3GB RAM:HyperX Beast 2x4GBPSU:SeaSonic S12G 750W Mobo:GA-H97m-HD3 Case:CM Silencio 650 Storage:1 TB WD Red
Celestial (Laptop 1) CPU:i7 4720HQ GPU:GTX 860M 4GB RAM:2x4GB SK Hynix DDR3Storage: 250GB 850 EVO Model:Lenovo Y50-70
Seraph (Laptop 2) CPU:i7 6700HQ GPU:GTX 970M 3GB RAM:2x8GB DDR4Storage: 256GB Samsung 951 + 1TB Toshiba HDD Model:Asus GL502VT

Windows 10 is now MSX! - http://linustechtips.com/main/topic/440190-can-we-start-calling-windows-10/page-6


1060 is considerably faster than RX 480... and Nvidia gains from DX12 & Vulkan too, so moot point is moot.

E7-8890 V3 x2 , 12TB DDR4 @ 1866, Quadro K6000's + 2x GTX 1080's.


2 minutes ago, don_svetlio said:

Only if the driver overhead in DX12/Vulkan is not taken into account. Check out the recent DOOM benches; the 480 got a massive boost of over 20%.

Yeah I saw that. Extremely impressed with the results. But DX12/Vulkan titles are still in the minority.


 


2 minutes ago, Tom Hanks said:

1060 is considerably faster than RX 480... and Nvidia gains from DX12 & Vulkan too, so moot point is moot.

Do you have a crystal ball? No? Then stop predicting the future.



2 minutes ago, afyeung said:

Yeah I saw that. Extremely impressed with the results. But DX12/Vulkan titles are still in the minority.

Yet that's the future: hardware-based async compute being superior to software-based async compute.



1 minute ago, don_svetlio said:

Do you have a crystal ball? No? Then stop predicting the future.

If he has a crystal ball, then it's definitely a GimpWorks crystal ball lol. Nvidia sees nowhere near the same benefit in Vulkan as the AMD cards.


 


2 minutes ago, Tom Hanks said:

WCCF? Really? That's the best you can do? GPUBoss didn't have an article ready?



1 minute ago, Tom Hanks said:

Again: any REAL info?



16 minutes ago, afyeung said:

Nvidia sees nowhere near the same benefit in Vulkan as the AMD cards.

Those gains are mostly from AMD's latest shader intrinsic functions (not dependent on any API) and the reduced driver overhead of Vulkan. The Vulkan/DX12 discussion is a turd sandwich, because the more competent the game developers are, the more performance they can squeeze out of each architecture in older APIs. In the case of Doom, I'd be willing to bet that async compute adds little benefit to AMD and Nvidia, and is probably a red herring in the discussion. While id devs were able to squeeze every bit of performance out of Pascal from the get-go in OpenGL, it required a lot of extra work to reduce AMD's driver overhead (by using Vulkan) and to add AMD's shader intrinsic functions. id developers are some of the best out there, which raises the question of just how much harder these benefits are to implement for the average game developer. We'll probably know the answer in a year or two.


 


2 minutes ago, Briggsy said:

Those gains are mostly from AMD's latest shader intrinsic functions (not dependent on any API) and the reduced driver overhead of Vulkan. The Vulkan/DX12 discussion is a turd sandwich, because the more competent the game developers are, the more performance they can squeeze out of each architecture. In the case of Doom, I'd be willing to bet that async compute adds little benefit to AMD and Nvidia, and is probably a red herring in the discussion. While id devs were able to squeeze every bit of performance out of Pascal from the get-go, it required a lot of extra work to reduce AMD's driver overhead (by using Vulkan) and to add AMD's shader intrinsic functions. id developers are some of the best out there, which raises the question of just how much harder these benefits are to implement for the average game developer. We'll probably know the answer in a year or two.

In this case it's the opposite: due to the native support on GCN, utilizing everything in Vulkan is extremely easy, but Nvidia's software solution introduces a lot of driver overhead which has to be manually removed.



1 minute ago, don_svetlio said:

In this case it's the opposite: due to the native support on GCN, utilizing everything in Vulkan is extremely easy, but Nvidia's software solution introduces a lot of driver overhead which has to be manually removed.

source?


 


1 minute ago, Briggsy said:

source?

The hardware itself. GCN has native hardware support for everything, with a beefy scheduler that runs all 64 pipelines without the need for software. Nvidia is still mostly using a single pipeline, utilizing some tricks with Pascal to only flush parts of it rather than the whole thing (as with Maxwell).



7 minutes ago, Briggsy said:

source?

They don't have one, because the only people who could have said this are Nvidia or one of the companies developing on it, like id, and they definitely haven't done so, as it would be enormous news. It's a lie, and no actual useful source is coming beyond some AMD fanboy YouTube channel saying it without proof.


21 minutes ago, don_svetlio said:

The hardware itself. GCN has native hardware support for everything, with a beefy scheduler that runs all 64 pipelines without the need for software. Nvidia is still mostly using a single pipeline, utilizing some tricks with Pascal to only flush parts of it rather than the whole thing (as with Maxwell).

The 480 has 4 ACEs, which means up to 32 compute queues (4x8) plus one graphics queue feeding the single graphics pipeline in GCN.

 

The issue with Maxwell was context switching. AMD has multiple ACEs depending on the version of GCN, with each one handling a different "compute context," which gives GCN performance gains when using async compute to fill up the pipeline. Pascal can do a context switch in as little as 100 microseconds, storing current data in a buffer (all done in hardware) in order to switch contexts. The biggest benefit is asynchronous timewarp in VR, but it also has the added bonus of concurrency, which Maxwell doesn't achieve. While GCN fills the pipeline asynchronously between the ACE queues and the graphics command processor (the 480 has two graphics processors filling a single pipeline), Pascal keeps the pipeline full through driver scheduling and hardware context switching at the thread level, switching fast enough that few gaps are left in the pipeline, and even fewer with async compute enabled. Both Nvidia and AMD have different approaches to the same end goal; both companies are using tricks to keep the pipeline full, and one of them is just doing it more efficiently at this time.
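The pipeline-filling idea above can be illustrated with a toy model. This is not how real GPU scheduling works (real hardware juggles thousands of wavefronts), just a sketch of why slotting compute work into graphics idle gaps raises utilization; all the numbers are made up:

```python
# Toy illustration of async compute: graphics work leaves idle gaps in
# the pipeline, and a compute queue can slot work into those gaps.
# Purely illustrative - real GPU scheduling is far more complex.

graphics_busy = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # 1 = pipeline slot busy
compute_backlog = 3                              # pending compute tasks

def utilization(slots):
    """Fraction of pipeline slots doing useful work."""
    return sum(slots) / len(slots)

# Async compute: fill idle slots with queued compute work
filled = list(graphics_busy)
remaining = compute_backlog
for i, busy in enumerate(filled):
    if not busy and remaining > 0:
        filled[i] = 1
        remaining -= 1

print(f"graphics only:  {utilization(graphics_busy):.0%} busy")
print(f"async compute:  {utilization(filled):.0%} busy")
```

In the model, whether the gaps get filled by independent hardware queues (the GCN approach described above) or by fast context switching under driver control (the Pascal approach) doesn't change the goal; both are trying to turn those zeros into ones.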


 

