Did Nvidia choose not to go 4K/120Hz, or couldn't they?

xg32

Personally, I'm thinking they chose to milk wallets for one extra gen instead of giving us raw horsepower, shoving RT down our throats. I'm just gonna stick with the 1060 at 2K/60 while I wait for the monitors and GPUs to mature.


NVIDIA posted a performance chart showing the 2080 at 2x the performance of the 1080, and since the 1080 can reach 60Hz at 4K (though for most games you would need to turn down the settings, of course), we might have gotten 4K 120Hz.
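As a rough sanity check on that inference, here is the arithmetic as a minimal sketch (the 2x multiplier is NVIDIA's own chart figure, not a measured result, and the 60 fps baseline assumes tuned settings):

```python
# Back-of-envelope only: what NVIDIA's claimed 2x scaling would imply at 4K.
gtx_1080_4k_fps = 60        # a tuned-settings 1080 at 4K (assumption from above)
claimed_scaling = 2.0       # NVIDIA's 2080-vs-1080 chart claim, not a benchmark

rtx_2080_4k_fps = gtx_1080_4k_fps * claimed_scaling
print(f"Implied 2080 at 4K: ~{rtx_2080_4k_fps:.0f} fps")           # ~120 fps
print(f"Frame budget: {1000 / rtx_2080_4k_fps:.1f} ms per frame")  # ~8.3 ms
```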

 

We will see once benchmarks drop from reviewers; until then, all we know is that RTX can make pretty awesome lighting and shadows, and what the cards cost.


I'd say they were limited by the process node. Turing is a jump, but it's not as big as we usually get, because 12nm doesn't allow for many more transistors or significantly better power consumption than TSMC 16nm.


50 minutes ago, xg32 said:

Personally, I'm thinking they chose to milk wallets for one extra gen instead of giving us raw horsepower, shoving RT down our throats. I'm just gonna stick with the 1060 at 2K/60 while I wait for the monitors and GPUs to mature.

You. I like you. I'm on the same page here.


27 minutes ago, DocSwag said:

I'd say they were limited by the process node. Turing is a jump, but it's not as big as we usually get, because 12nm doesn't allow for many more transistors or significantly better power consumption than TSMC 16nm.

8B transistors for the 980 Ti vs 12B for the 1080 Ti vs 18B for the 2080 Ti seems like a good jump. At some point rendering has to shift away from tricks to actual ray tracing.
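Putting rough numbers on that jump (a sketch; the transistor counts are the rounded figures above, while the die areas are the commonly cited ones for GM200/GP102/TU102 and are my addition, so treat the densities as approximate):

```python
# Generational jumps implied by the transistor counts quoted above (billions),
# paired with commonly cited die areas (mm^2). The areas are approximate and
# not from the post itself.
chips = [
    ("980 Ti (GM200)", 8.0, 601),
    ("1080 Ti (GP102)", 12.0, 471),
    ("2080 Ti (TU102)", 18.0, 754),
]
prev = None
for name, xtors, area in chips:
    density = xtors * 1000 / area  # million transistors per mm^2
    jump = f", +{(xtors / prev - 1) * 100:.0f}% transistors" if prev else ""
    print(f"{name}: {xtors:.0f}B / {area} mm^2 = {density:.1f} MTr/mm^2{jump}")
    prev = xtors
```

If those areas are right, density barely moves from the 1080 Ti to the 2080 Ti: the extra transistors came almost entirely from a bigger die, which is the die-size point raised in the reply below.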


I'm somewhere in between on this. If these cards were actually priced the same as last gen, the ray tracing would feel more like a bonus than "you are obligated to have this if you upgrade!".

If this were just the same nm with a voltage bump, then yeah, I'd be totally on one side of this.


13 minutes ago, Xplo1t said:

8B transistors for the 980 Ti vs 12B for the 1080 Ti vs 18B for the 2080 Ti seems like a good jump. At some point rendering has to shift away from tricks to actual ray tracing.

Yeah, but die size also increased a ton: roughly 60-70% larger for the 2080 Ti vs the 1080 Ti. That means much higher costs as well as, most likely, higher power consumption when overclocked. Perhaps with Pascal, Nvidia could've made a monster GPU that would've been quite a bit faster. With Turing, they're already at the limits and can't go any further because the transistors aren't small enough.
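For what it's worth, that 60-70% figure checks out against the same assumed die areas used in the sketch above:

```python
# Quick check of the "60-70% larger" claim (same assumed die areas as before).
gp102_mm2, tu102_mm2 = 471, 754   # 1080 Ti vs 2080 Ti, approximate
print(f"TU102 vs GP102: +{(tu102_mm2 / gp102_mm2 - 1) * 100:.0f}% die area")  # ~+60%
```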

 

While ray tracing will probably eventually be what all games use, right now Turing's ray tracing performance is not good at all. I believe the 2080 Ti was getting 30-60 fps at 1080p in ROTR with ray tracing on, or something like that?
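To put those demo numbers in 4K terms: if ray tracing cost scaled purely with pixel count (it doesn't exactly, so this is a deliberately naive sketch), the quoted range would translate to:

```python
# Naive pixel-count scaling of the quoted 1080p RT numbers. Real RT cost does
# not scale purely with resolution, so treat this as a pessimistic ballpark.
rt_fps_1080p = (30, 60)                       # the quoted demo range
pixel_ratio = (3840 * 2160) / (1920 * 1080)   # 4K has 4x the pixels of 1080p
low, high = (fps / pixel_ratio for fps in rt_fps_1080p)
print(f"Naive 4K estimate: ~{low:.0f} to ~{high:.0f} fps")  # ~8 to ~15 fps
```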


18 minutes ago, DocSwag said:

Yeah, but die size also increased a ton: roughly 60-70% larger for the 2080 Ti vs the 1080 Ti. That means much higher costs as well as, most likely, higher power consumption when overclocked. Perhaps with Pascal, Nvidia could've made a monster GPU that would've been quite a bit faster. With Turing, they're already at the limits and can't go any further because the transistors aren't small enough.

While ray tracing will probably eventually be what all games use, right now Turing's ray tracing performance is not good at all. I believe the 2080 Ti was getting 30-60 fps at 1080p in ROTR with ray tracing on, or something like that?

How can they make a larger Pascal on a 16nm node and not hit the same limitations as on 12nm? It's like you're arguing in circles.

 

Secondly, do we really need hyperbole based on a single demo of an unfinished game?


20 minutes ago, Xplo1t said:

How can they make a larger Pascal on a 16nm node and not hit the same limitations as on 12nm? It's like you're arguing in circles.

Your original post was about whether or not Nvidia could have made an even faster GPU. What I was trying to say is that they were hitting die size limits with Turing, preventing them from going further. On Pascal, they didn't max out die size as much as they have on Turing, and could potentially have made an even faster Pascal GPU. Perhaps not quite as fast as a 2080 Ti, but definitely faster than a 1080 Ti.

21 minutes ago, Xplo1t said:

Secondly, do we really need hyperbole based on a single demo of an unfinished game?

It's not really an unfinished game, as ROTR has been around for a while. I wouldn't be surprised if they haven't had a lot of time to work on the ray tracing, though, so fair point: they probably have some more performance they can squeeze out of it. Regardless, ray tracing seems to have a huge performance impact, which makes me a little hesitant about it in its current state.


The raw horsepower did improve overall. The problem is that it can only improve so much, and when it comes down to graphics, you can do one of two things: improve performance at the current image quality, or use that power to improve the image quality.

 

NVIDIA wanted to find ways to improve image quality, because most customers would rather see improved image quality than straight-up performance. Being able to render higher image quality in real time is more of a mark of progress than just being able to generate a bunch of frames per second. Show an average Joe a comparison of Toy Story between the actual movie and Kingdom Hearts III, then show them you getting 200 FPS in CS:GO. Which do you think they'll be more impressed by?

 

Besides that, frame rate is also tied to CPU performance. It would be pretty awful if NVIDIA or AMD released a card whose potential could only be brought out by a high-end CPU. With the higher-image-quality approach, at least, you still have something to show for it even when frame rates are limited by the CPU (though I suppose that depends on how much of a burden the command lists to render the images will be).

 

Besides that, 4K 120FPS doesn't even have enough of a customer base, and even if they did go for that approach and sprinkled some HDR on top of it, the monitors would cost more than the card itself.


3 hours ago, DocSwag said:

I'd say they were limited by the process node. Turing is a jump, but it's not as big as we usually get, because 12nm doesn't allow for many more transistors or significantly better power consumption than TSMC 16nm.

They should have put this sort of thing on a 7nm process from the start. Using the 12nm process when the 7nm process is ready to go is just milking things.


1 hour ago, ryao said:

They should have put this sort of thing on a 7nm process from the start. Using the 12nm process when the 7nm process is ready to go is just milking things.

Yeah, IMO they should've waited until around CES. By then 7nm would be good to go, and it should be a pretty big jump over 12nm.


3 hours ago, M.Yurizaki said:

The raw horsepower did improve overall. The problem is that it can only improve so much, and when it comes down to graphics, you can do one of two things: improve performance at the current image quality, or use that power to improve the image quality.

NVIDIA wanted to find ways to improve image quality, because most customers would rather see improved image quality than straight-up performance. Being able to render higher image quality in real time is more of a mark of progress than just being able to generate a bunch of frames per second. Show an average Joe a comparison of Toy Story between the actual movie and Kingdom Hearts III, then show them you getting 200 FPS in CS:GO. Which do you think they'll be more impressed by?

Besides that, frame rate is also tied to CPU performance. It would be pretty awful if NVIDIA or AMD released a card whose potential could only be brought out by a high-end CPU. With the higher-image-quality approach, at least, you still have something to show for it even when frame rates are limited by the CPU (though I suppose that depends on how much of a burden the command lists to render the images will be).

Besides that, 4K 120FPS doesn't even have enough of a customer base, and even if they did go for that approach and sprinkled some HDR on top of it, the monitors would cost more than the card itself.

These cards are targeted at gamers, while the average Joe is likely to stick with 1060s/1070s on discount. It is also highly likely that most people care more about fps than image quality (just look at the majority of gaming monitors focusing on high refresh rates). We'll see what the review reactions are like; I think they will be largely negative.

 

RT itself looks very mediocre and will have very little support from devs; by the time it matures, the next gen will be ready, and we can only push Nvidia with our wallets. Those who buy this card are feeding right into the marketing garbage Nvidia tried very hard to spin. The RT tech, the minimal improvement, and the increased price tag all point to a "Gimpvidia" trying to maximize profits while holding back performance massively. This is the type of stuff they can pull because there is no competition for high-end gaming GPUs.

 

Of course, from a business perspective, why not try to milk an extra gen when they can? It's the right move as long as they don't get burned; it just sucks for consumers/gamers.

 

"hey guys here's a bigger Pascal with barely any improvement in 2 years, Ray Tracing btw. Preorder it now!"


5 hours ago, Bananasplit_00 said:

NVIDIA posted a performance chart showing the 2080 at 2x the performance of the 1080, and since the 1080 can reach 60Hz at 4K (though for most games you would need to turn down the settings, of course), we might have gotten 4K 120Hz.

We will see once benchmarks drop from reviewers; until then, all we know is that RTX can make pretty awesome lighting and shadows, and what the cards cost.

The 1080 Ti is about 30% faster at 4K than the 1080. So if the 1080 is in the 40s FPS, the 1080 Ti is in the 50s and the 2080 in the 60s, and I would guess the 2080 Ti is in the 70s.
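Spelling that chain out (a sketch; only the 30% step for the 1080 Ti is a stated figure, the baseline is an assumed midpoint of "the 40s", and the 2080/2080 Ti tiers are the poster's guesses):

```python
# The poster's chain of 4K estimates, made explicit.
gtx_1080 = 45.0                 # assumed midpoint of "40s FPS" at 4K
gtx_1080_ti = gtx_1080 * 1.30   # stated 30% uplift -> ~58 fps, i.e. "50s"
print(f"1080 Ti estimate: ~{gtx_1080_ti:.0f} fps at 4K")
for card, fps_range in [("2080", "60s"), ("2080 Ti", "70s")]:
    print(f"{card}: {fps_range} fps at 4K (guess)")
```

Notably, "60s" for the 2080 against "40s" for the 1080 is only about a 1.4-1.5x uplift, well short of the 2x chart figure quoted above.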


16 minutes ago, xg32 said:

These cards are targeted at gamers, while the average Joe is likely to stick with 1060s/1070s on discount. It is also highly likely that most people care more about fps than image quality (just look at the majority of gaming monitors focusing on high refresh rates). We'll see what the review reactions are like; I think they will be largely negative.

Taking your argument, I can come to the conclusion that every AAA game developer is wasting their time and resources by increasing image quality, because most users are just going to lower the settings until they get some high frame rate. We should just revert to PS3 or PS2 era graphics and stick with GeForce 10 level hardware, since that's more than capable of doing 200+ FPS.

 

Eh, no. Sorry. I like my games looking pretty.

 

Quote

RT itself looks very mediocre and will have very little support from devs; by the time it matures, the next gen will be ready, and we can only push Nvidia with our wallets. Those who buy this card are feeding right into the marketing garbage Nvidia tried very hard to spin. The RT tech, the minimal improvement, and the increased price tag all point to a "Gimpvidia" trying to maximize profits while holding back performance massively. This is the type of stuff they can pull because there is no competition for high-end gaming GPUs.

You know what else NVIDIA said was the future? Hardware transform and lighting. They had the balls to say they invented the term "Graphics Processing Unit" for something that had hardware T&L capability. 3dfx laughed at them, saying a fast enough CPU would make up for the lack of hardware transform and lighting. Then 3dfx went under, and the GPUs that followed all added transform and lighting capabilities.

 

Then NVIDIA said GPU-based compute would be a thing, even porting CUDA over to consumer video cards. I'm sure a lot of people rolled their eyes at the notion. But GPU-based compute was added in DirectX 11 and is used in a lot of non-visible calculations.

 

Those are about the only examples I can think of, and I'm not saying ray tracing is guaranteed to be a thing, but NVIDIA has pushed a lot of technologies that have since been standardized in some form. And to immediately dismiss something that even game developers have wanted for a long time, without fully understanding the technology or its possible uses, is ignorant at best. I admit to thinking that ray tracing was silly in and of itself to pursue, but the more I think about it, the more I realize it may have uses beyond just making better transparent and shiny things.


7 minutes ago, M.Yurizaki said:

Taking your argument, I can come to the conclusion that every AAA game developer is wasting their time and resources by increasing image quality, because most users are just going to lower the settings until they get some high frame rate. We should just revert to PS3 or PS2 era graphics and stick with GeForce 10 level hardware, since that's more than capable of doing 200+ FPS.

 

Eh, no. Sorry. I like my games looking pretty.

I already had plans to buy the 4K/120 HDR Asus monitor; I was planning on buying it with the 2080 Ti at launch. I've gamed on color-accurate monitors for 10 years, and now it looks like the 2080 Ti is barely an improvement because the focus was on RT/marketing. I like my games looking good too (I've been waiting a decade for a monitor with 120Hz, HDR, and a high LUT), but image quality has always been a minority market.

 

I'm not saying ray tracing will fail, but I think gamers who buy the 20xx series at its current price point, fps, and RT support are getting shafted, while those of us who just wanted more performance get almost nothing (just give me a damn card that runs 4K/120). Clearly some of this is frustration on a personal level, but they are still using a gen to sell new tech without real performance gains. I can't really blame them on a business level, as the 1080 Ti is still king (without tapping the Xp).


13 hours ago, DocSwag said:

I'd say they were limited by the process node. Turing is a jump, but it's not as big as we usually get, because 12nm doesn't allow for many more transistors or significantly better power consumption than TSMC 16nm.

TSMC 12nm FFN is actually the same TSMC 16nm FinFET process used on Pascal, with a larger reticle size to allow for making the huge V100 dies, plus some other minor enhancements. It's 16nm FinFET+ without calling it that, using "12nm" because it sounds better.

 

12nm FFC is actually denser than 16nm FinFET, but that's not what is being used on Volta or Turing.


9 hours ago, xg32 said:

These cards are targeted at gamers, while the average Joe is likely to stick with 1060s/1070s on discount. It is also highly likely that most people care more about fps than image quality (just look at the majority of gaming monitors focusing on high refresh rates).

Most people have monitors with a 60Hz refresh rate; the installed base of high-refresh-rate monitors is extremely small. I don't have any real figures, but I wouldn't be surprised if it's in the range of 0.1% to 1% of monitors.

 

Gaming monitors are marketing high refresh rates now because of multiple factors, mostly because they need something, anything, to market in order to sell products.

 

We've been rather stagnant for a while now in improving image quality in games, the noticeable, generational-improvement kind, because that next generational leap is so vast we do not have the GPU computational capacity to reach it. We're really not all that close, so we have to fall back on other things like advanced special effects, physics improvements, and the obvious: talking up frame rates.

 

Computer graphics, taking the longer, wider view, has always been driven by the narrative of image quality improvements and the march towards realism; when you falter at that pace, you need to champion a new narrative, which is currently high refresh rate/high FPS. Funnily enough, consoles demonstrate this pretty well (warning: Sony/PS user here); just track the image quality from the Atari to the NES to the SNES to the PS1, PS2, PS3 and now PS4. The last step really is not the same generational image quality improvement as the steps before. The same applies to PC, just using GPU generations, which are much more fluid and so not as easy to track.

 

As a very avid Final Fantasy player, I've watched the same thing there: as the games moved to newer generation hardware, the graphics quality improvement was always truly amazing and invoked that wow factor, but FF13 to FF15 was more "Eh? I mean, it does look better, but I'd be happy with what I had before."

 

Edit:

Also, most gamers have 1060-class cards; the average Joe has whatever the PC came with and typically doesn't know what that is. Our vantage-point bias makes us think more people have high-end GPUs than is actually the case.


4 hours ago, leadeater said:

TSMC 12nm FFN is actually the same TSMC 16nm FinFET process used on Pascal, with a larger reticle size to allow for making the huge V100 dies, plus some other minor enhancements. It's 16nm FinFET+ without calling it that, using "12nm" because it sounds better.

12nm FFC is actually denser than 16nm FinFET, but that's not what is being used on Volta or Turing.

That explains a lot, such as the lack of large density or clock speed improvements... I wonder why they chose not to use 12nm FFC... Yields, and the design already being made for FFN, perhaps?


6 hours ago, leadeater said:

.

While I agree with most of what you said, the gaming monitor userbase has to be higher than 1%. On the point of RT, it's likely the tech just isn't ready to run at 60fps at all. I can see how a first gen of RT is necessary, but depending on the benchmarks, first-gen users are likely beta testers, and they are being charged a premium for it.

 

As for the point about the average Joe, the cards are priced out of that range. That also reinforces the point that they are using gamers as beta testers instead of giving us 4K/120Hz (I believe they could do it, but from a business standpoint milking it makes more sense), and it's probably necessary to give RT its trial run; I'm just not too happy about it. The image quality improvement won't actually hit mainstream until 1-2 gens down the road.


4 hours ago, xg32 said:

While I agree with most of what you said, the gaming monitor userbase has to be higher than 1%. On the point of RT, it's likely the tech just isn't ready to run at 60fps at all. I can see how a first gen of RT is necessary, but depending on the benchmarks, first-gen users are likely beta testers, and they are being charged a premium for it.

As for the point about the average Joe, the cards are priced out of that range. That also reinforces the point that they are using gamers as beta testers instead of giving us 4K/120Hz (I believe they could do it, but from a business standpoint milking it makes more sense), and it's probably necessary to give RT its trial run; I'm just not too happy about it. The image quality improvement won't actually hit mainstream until 1-2 gens down the road.

Well, I mean 1% of all monitors, not of people who primarily play games. But the majority of people I know have a 5+ year old 1080p monitor and would rather spend the money on a better GPU or CPU. Few people even bother upgrading their GPU every second generation; I haven't, because games haven't literally forced me to, and there is nothing truly compelling I'm missing out on.


7 hours ago, DocSwag said:

That explains a lot, such as the lack of large density or clock speed improvements... I wonder why they chose not to use 12nm FFC... Yields, and the design already being made for FFN, perhaps?

I think it's because 12nm FFC is optimized for low-power mobile.

