BiG StroOnZ

NVIDIA GeForce 3070/3080/3080 Ti (Ampere): More VRAM, Way Faster, & Cheaper! (Update 5 ~ Alleged GDC / March Reveal & Specs)


I don't care about power saving. I can hook up 4x 8-pin if needed, if that gives me 3x GTX 1080 Ti performance lol. You don't buy a Ferrari with a 5-litre V12 engine to save petrol... The same way, I don't buy graphics cards to save power. And those who do, well, they can still buy a Renault Twingo or a GTX 1550 Mini Lite...

9 hours ago, pas008 said:

I'm thinking it's the typical thing we always see with these articles: over-exaggeration, and mixing the two together.

Would be nice to have more detail.

Realistically, my understanding:

Same power: 50%+ increase in graphics performance.

Same graphics performance: 50%+ decrease in power.

This immediately reminded me of what AMD were saying before the 7nm launch. You can have EITHER:

  • the same performance at half the power, or
  • >25% more performance at the same power.

Performance vs power is a curve, and you can choose where you operate along it. Gaming parts tend to sit higher up the curve: you get slightly more performance for a disproportionate extra power cost. Lower-power optimised parts are only a little slower, but use noticeably less power.
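To make the shape of that curve concrete, here's a minimal sketch assuming the classic dynamic-power relation (power ≈ C·V²·f) and a roughly linear voltage/frequency dependence; all the constants are illustrative, not measured from any real card:

```python
# Toy CMOS scaling model: dynamic power ~ C * V^2 * f, and voltage has to
# rise with clock speed to stay stable. Constants are illustrative only.

def voltage_for_clock(f_ghz, v0=0.8, slope=0.25):
    """Hypothetical linear voltage/frequency curve."""
    return v0 + slope * f_ghz

def power_watts(f_ghz, eff_capacitance=60.0):
    v = voltage_for_clock(f_ghz)
    return eff_capacitance * v * v * f_ghz

for f in (1.2, 1.5, 1.8, 2.0):
    # Performance scales roughly linearly with clock; power grows much
    # faster, which is why backing off the clock a little saves a lot.
    p = power_watts(f)
    print(f"{f:.1f} GHz: ~{p:5.1f} W, perf/W ~ {f / p:.4f}")
```

In this toy model, going from 1.2 GHz to 2.0 GHz buys ~67% more performance for roughly 2.3x the power, which is the trade gaming parts make.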

 

8 hours ago, Waffles13 said:

The only argument I can see is if Nvidia is artificially capping your potential to overclock beyond the "safe" bounds for the card, beyond anything they could sell en masse. And while they already do that to a large extent with regard to boost and power limits, I don't see any reason for them to change it just because they theoretically leave more headroom on the table.

Nvidia cards generally already hold you to a set power limit; extreme overclockers have to bypass it. Even without that, AIB makers already push the limits of stability with their factory overclocks. Right next to me I have an EVGA 970: it's gaming-stable but compute-unstable. I have to un-overclock it if I want to run anything other than games on it.
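For anyone curious how that enforced limit is exposed in software, here's a minimal read-only sketch using NVIDIA's NVML bindings (the nvidia-ml-py / pynvml package); it only queries the limits, it doesn't change anything:

```python
# Query the driver-enforced power limit via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

name = pynvml.nvmlDeviceGetName(handle)
# NVML reports all power values in milliwatts.
enforced = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000
lo, hi = (x / 1000 for x in
          pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle))
usage = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000

print(f"{name}: drawing ~{usage:.0f} W, enforced limit {enforced:.0f} W "
      f"(board allows {lo:.0f}-{hi:.0f} W)")

pynvml.nvmlShutdown()
```

The min/max constraints are the range tools like MSI Afterburner let you slide between; going past the max is the part that needs firmware modification.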

 

6 hours ago, rrubberr said:

People bash Vega a lot because it's not a gaming part, but when you get down to it GCN has never really been about gaming (which is AMD's mistake, no doubt). For very specific use cases, like my own, Vega makes tons of sense. I use a Radeon VII for LuxCoreRender, an OpenCL-accelerated render engine (which I use for virtually all my 3D graphics work), and get 3-4 1080s' worth of performance out of it. This is one example where HBM2 is a huge bonus, and having 16GB of VRAM is a must, which is something Nvidia won't give you for the $750 price tag the card wound up selling for.

The Radeon VII is a kind of odd card for AMD to release; I think they used it to keep interest going on their consumer GPU side. It is basically a crippled workstation/compute card. I was almost interested, but they still cut fp64 back a bit more than I had use for. I know others who have bought multiples, since it is still the highest fp64-per-cost card recently available.

 

6 hours ago, rrubberr said:

I haven't used an Nvidia card for a year or so, but last time I did, they were still shipping OpenCL 1.2 in their drivers; again, 2.0 is a must for many applications.

While it's not an area I look at much, I think code optimised for Nvidia tends to use CUDA in preference. Also, the talk of broken OpenCL on current Navi (on Windows at least) is concerning.
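If you want to check what a given driver actually reports, a quick sketch with PyOpenCL (assuming the pyopencl package is installed) lists the OpenCL version each device claims:

```python
# Print the OpenCL version every platform/device pair reports
# (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(f"{platform.name}: {dev.name}")
        print(f"  device version: {dev.version}")
        print(f"  OpenCL C      : {dev.opencl_c_version}")
```

Applications that need 2.0 features (shared virtual memory, device-side enqueue) have to check these strings before relying on them.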

 

6 hours ago, leadeater said:

At some point I expect architectures to diverge, and AMD will have a gaming-optimized one and a compute-optimized one.

This could be a shift like when fp64 got crippled on consumer cards, starting with the 600 series on the green side; the last "fast" fp64 card on the red side was the 280X, until the VII came around. But it is more complicated now, as GPU compute is making its way into more software. Which bits can they realistically cut without impacting consumers?
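To put rough numbers on the crippling, here's a small worked sketch of theoretical throughput (TFLOPS = 2 FMA ops × cores × clock); the specs are approximate public figures and the fp64 ratios are the commonly cited ones, so treat it as illustrative:

```python
# Theoretical fp32 throughput and the fp64 fraction consumer cards keep.
# Core counts/clocks are approximate public specs, for illustration only.
cards = {
    #               cores, boost GHz, fp64:fp32 ratio
    "GTX 1080 Ti": (3584, 1.58, 1 / 32),
    "RTX 2080 Ti": (4352, 1.55, 1 / 32),
    "Radeon VII":  (3840, 1.75, 1 / 4),
}

for name, (cores, ghz, ratio) in cards.items():
    fp32 = 2 * cores * ghz / 1000  # TFLOPS, assuming one FMA per core/clock
    print(f"{name}: ~{fp32:4.1f} TFLOPS fp32, ~{fp32 * ratio:.2f} TFLOPS fp64")
```

That 1/4-rate fp64 is why the VII keeps finding buyers in compute niches despite being a consumer card.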

 

3 hours ago, RejZoR said:

I don't care about power saving. I can hook up 4x 8-pin if needed, if that gives me 3x GTX 1080 Ti performance lol. You don't buy a Ferrari with a 5-litre V12 engine to save petrol... The same way, I don't buy graphics cards to save power. And those who do, well, they can still buy a Renault Twingo or a GTX 1550 Mini Lite...

whynotboth.gif

 

Power-efficient cards could have their place in certain parts of the range, especially the lower end. Flagship cards can afford to push the electrons around more generously.

 

While I've not looked at it for gaming, there is a significant power saving in compute uses with Turing over Pascal. My 2070 keeps up with my 1080 Ti at lower power, and that is down to architecture alone, since they're on the same process. Combine a process improvement with possible further architectural improvements, and there's room for efficiency to improve further still.
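A quick back-of-envelope on that, using the reference board powers (250 W for the 1080 Ti, 175 W for the reference RTX 2070) and assuming equal throughput in this compute workload:

```python
# If two cards deliver equal throughput, the efficiency gain is just the
# ratio of their power draw. Reference TDPs used; actual draw varies.
tdp_1080ti = 250  # W
tdp_2070 = 175    # W

print(f"~{tdp_1080ti / tdp_2070:.2f}x perf/W from architecture alone")  # ~1.43x
```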


52 minutes ago, porina said:

Which bits can they realistically cut without impacting consumers?

Realistically, I think most of the differences will be in the front end: how you allocate work to the GPU, and the rules under which you can. The processing elements should still be common to both designs, maybe with a slightly different ratio. That's the biggest limiter/optimization issue with GCN: you have to fit in perfectly with all its requirements to fully utilize the die, otherwise you can have up to 50% of the stream processors in a CU sitting idle.

 

It's a lot easier to achieve peak performance when you remove flexibility, because that adds predictability, and with predictability you can optimize and refine. Nvidia got a lot of flak for not really having async compute, but if something isn't possible, you don't have to accommodate it, which means no die area or architecture dedicated to making it work. Since async compute is so rarely used, it's a burden rather than a positive when it comes to efficiency and optimizing for what actually gets used. And that is pretty much the story of GCN: a feature-rich, highly flexible design with excellent performance, at the cost of placing a large share of the responsibility for that on developers.
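A toy model makes the utilization point concrete: GCN issues work in 64-lane wavefronts across four SIMDs per CU, and each SIMD needs several wavefronts resident to hide latency. The chip size and waves-needed threshold below are illustrative assumptions, not measured figures:

```python
import math

def cu_utilization(threads, cus=40, simds_per_cu=4,
                   wave_size=64, waves_to_hide_latency=4):
    """Toy GCN occupancy model: if a dispatch doesn't supply enough
    wavefronts per SIMD, issue slots (and stream processors) sit idle.
    All thresholds here are illustrative assumptions."""
    waves = math.ceil(threads / wave_size)
    waves_per_simd = waves / (cus * simds_per_cu)
    return min(waves_per_simd / waves_to_hide_latency, 1.0)

print(f"{cu_utilization(100_000):.0%}")  # big dispatch   -> ~100% utilized
print(f"{cu_utilization(20_000):.0%}")   # small dispatch -> ~49%, half idle
```

Which is exactly the "up to 50% idle" situation: nothing is broken, the work just doesn't fit the machine's scheduling granularity.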

7 hours ago, porina said:

Nvidia cards generally already hold you to a set power limit; extreme overclockers have to bypass it. Even without that, AIB makers already push the limits of stability with their factory overclocks. Right next to me I have an EVGA 970: it's gaming-stable but compute-unstable. I have to un-overclock it if I want to run anything other than games on it.

I know that; my point was that if Nvidia decides the 3080 Ti is going to be a 200 W card for some reason, it's not like they're going to loosen their lockdown of the card firmware out of some desire to make things more fun for overclockers. I'd much rather they take the extra power budget afforded by 7nm and use it to cram more performance into all product tiers. Then, if someone needs an SFF card, they can just go down a tier or two and still get the same performance, without arbitrarily capping the high-end, maximum-performance-focused market.


I'm hoping they're going to be released this year; I'm saving up to replace this 1080 Ti.



Yeah... I don't know why they would be cheaper. Nvidia has zero incentive to cut prices as long as AMD doesn't offer anything high-end.

 

At least AMD can compete in the midrange; the 5700 XT is a decent performer. But the 3080 Ti will likely still be super expensive.

Posted · Original Poster

Looks like we can keep on rolling with the Ampere updates/rumors:

 

Quote

[Attached images: alleged RTX 3070 / RTX 3080 specification slides and GA103 / GA104 die diagrams]

 

Alleged specifications of the GeForce RTX 3070 and RTX 3080 have surfaced. Of course, you'll need to keep a ton of disclaimers in mind, and huge grains of salt. But history has proven over and over again that there is often validity (at least to some degree) to be found in these leaks. So here we go:

 

For starters, the two dies which have appeared carry the codenames GA103 and GA104, standing for the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die has as many as 48 SMs, resulting in 3072 CUDA cores, while the bigger, oddly named GA103 die has as many as 60 SMs, for 3840 CUDA cores in total. These increases in SM count should result in a notable performance uplift across the board. Alongside them, there is also a new memory bus width. The smaller GA104 die, which should end up in the RTX 3070, uses a 256-bit memory bus allowing for 8/16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit wide bus that allows the card to be configured with either 10 or 20 GB of GDDR6 memory.

 

The original source also shared die diagrams for GA103 and GA104. They look professional, but they are not as detailed as the Turing diagrams, hence we cast strong doubt on their credibility.

 

Rumors are that at GDC in March we'll see the first announcements of the Ampere architecture (if it ends up being called Ampere).

 

 

 

Source: https://videocardz.com/newz/rumor-first-nvidia-ampere-geforce-rtx-3080-and-rtx-3070-specs-surface

Source: https://www.guru3d.com/news-story/rumor-nvidia-ampere-geforce-rtx-3070-and-rtx-3080-specs-surface.html 

Source: https://www.techpowerup.com/263128/rumor-nvidias-next-generation-geforce-rtx-3080-and-rtx-3070-ampere-graphics-cards-detailed
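The quoted numbers are at least internally consistent; a quick sketch, assuming Ampere keeps Turing's 64 CUDA cores per SM and a typical 14 Gbps GDDR6 data rate (both are assumptions, neither is stated in the leak):

```python
# Derive CUDA core counts and memory bandwidth from the leaked figures.
# Assumes 64 cores/SM (Turing's layout) and 14 Gbps GDDR6; neither is
# confirmed by the leak itself.
CORES_PER_SM = 64
GDDR6_GBPS = 14  # per pin

def cuda_cores(sms):
    return sms * CORES_PER_SM

def bandwidth_gb_s(bus_bits):
    return bus_bits * GDDR6_GBPS / 8

for name, sms, bus in [("GA104 / RTX 3070?", 48, 256),
                       ("GA103 / RTX 3080?", 60, 320)]:
    print(f"{name}: {cuda_cores(sms)} CUDA cores, "
          f"~{bandwidth_gb_s(bus):.0f} GB/s on a {bus}-bit bus")
```

Running it reproduces the leak's 3072 and 3840 core counts and suggests roughly 448 and 560 GB/s of bandwidth at that assumed data rate.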



Big if true, as with all things related to tech rumors. Again, no matter how much more performance Nvidia pushes out compared to Turing, pricing is still going to matter, especially for the mid to lower end cards. I don't care if the RTX 3080 is going to be as fast as the RTX 2080 Ti; if it's the same price, it's still pretty sad.


Cheaper? rip AMD
Significantly better? rip AMD

If either of those is true, rip AMD.


14 minutes ago, PocketNerd said:

Cheaper? rip AMD
Significantly better? rip AMD

If either of those is true, rip AMD.

It depends on how NVIDIA is going to position their midrange offerings, which is what most people care about.

 

Sure, people make noise at the high end, but I don't think that's where the market is.


NVIDIA is not going to make anything cheaper. They were selling RTX cards for €1200, and people were buying them. Unless AMD smacks them in the face out of the blue with an RX 5900 XT that beats the RTX 2080 Ti by 30% or more, I'm not seeing that scenario happening anytime soon.

29 minutes ago, RejZoR said:

NVIDIA is not going to make anything cheaper. They were selling RTX cards for €1200, and people were buying them. Unless AMD smacks them in the face out of the blue with an RX 5900 XT that beats the RTX 2080 Ti by 30% or more, I'm not seeing that scenario happening anytime soon.

Pricing will adjust to performance, with the bias for now that Intel and Nvidia can generally charge more than AMD for the same performance, due to better market recognition. We're seeing that shift with Intel, but AMD has to provide a competitive flagship GPU if we're going to see any significant downward price movement from Nvidia.


Just now, porina said:

Pricing will adjust to performance, with the bias for now that Intel and Nvidia can generally charge more than AMD for the same performance, due to better market recognition. We're seeing that shift with Intel, but AMD has to provide a competitive flagship GPU if we're going to see any significant downward price movement from Nvidia.

Though at the same time, I look at the 1080 Ti and go "buh?"

20 minutes ago, Taf the Ghost said:

Ampere is the Volta replacement. It's probably a really good idea to keep that in mind.

With Turing being an offshoot? ?_?

3 minutes ago, thorhammerz said:

With Turing being an offshoot? ?_?

Volta would have been a terrible gaming GPU: it was a massive die with very little to no gaming performance uplift over Pascal. So this is the compute-focused architecture from Nvidia. That doesn't mean we won't see the 2000 series get something like 20x5 models in the process, but the rumor mill has been pretty consistent that this isn't Turing's full replacement. This is firstly for Nvidia's server GPU business.

9 minutes ago, Taf the Ghost said:

Volta would have been a terrible gaming GPU: it was a massive die with very little to no gaming performance uplift over Pascal. So this is the compute-focused architecture from Nvidia. That doesn't mean we won't see the 2000 series get something like 20x5 models in the process, but the rumor mill has been pretty consistent that this isn't Turing's full replacement. This is firstly for Nvidia's server GPU business.

Ugh, a Turing refresh sounds awful, especially with rumors of the PS5 having 5700 XT-level performance. No one wants to pay $400 just for console parity.


1 minute ago, SteveGrabowski0 said:

Ugh, a Turing refresh sounds awful, especially with rumors of the PS5 having 5700 XT-level performance. No one wants to pay $400 just for console parity.

I don't know if Turing is getting a refresh. It would make a lot of sense to do one, replacing the 2000 series while adding a 1700 series or something. Ampere should be on someone's 7nm node, and Nvidia takes a while to jump onto new nodes.

13 minutes ago, Taf the Ghost said:

Volta would have been a terrible gaming GPU: it was a massive die with very little to no gaming performance uplift over Pascal. So this is the compute-focused architecture from Nvidia. That doesn't mean we won't see the 2000 series get something like 20x5 models in the process, but the rumor mill has been pretty consistent that this isn't Turing's full replacement. This is firstly for Nvidia's server GPU business.

Considering that some of the rumors involve Nvidia's AIB partners, I would say there's a good chance it will be both a Turing and a Volta replacement. Plus, Volta was supposed to be both as far as I know; it just didn't work out the way Nvidia was probably hoping.

