Nvidia just launched a 3 year old GPU

The GPU shortage has hit gamers hard, but Nvidia has an unorthodox solution: Re-release old GPUs. It’s SO crazy that it might actually work… MIGHT.

 

 

Emily @ LINUS MEDIA GROUP                                  

congratulations on breaking absolutely zero stereotypes - @cs_deathmatch


It's a good entry point for people who want to switch from console to PC... if it's actually available...

5800X3D -30 CO | 7900 XT | 2x32Gb Vengeance @ 3600 | MSI X470 | Sabrent Rocket 512 | 2x1Tb Vulcan T-Force | Seasonic Focus GX-750 Gold

Dell S3220DGF - curved 32" 1440p 165Hz


Honestly the 2060 is still a pretty competent 1080p GPU, and if it helps get new GPUs into the hands of the (non-mining) consumers, then I'm all for it.

 

Re-releasing Pascal, on the other hand, that's a bit of a tougher sell tbh 😆

MAIN SYSTEM: Intel i9 10850K | 32GB Corsair Vengeance Pro DDR4-3600C16 | RTX 3070 FE | MSI Z490 Gaming Carbon WIFI | Corsair H100i Pro 240mm AIO | 500GB Samsung 850 Evo + 500GB Samsung 970 Evo Plus SSDs | EVGA SuperNova 850 P2 | Fractal Design Meshify C | Razer Cynosa V2 | Corsair Scimitar Elite | Gigabyte G27Q

 

Other Devices: iPhone 12 128GB | Nintendo Switch | Surface Pro 7+ (work device)


yay more scalping opportunities!

 

                  - a scalper

 

 

also I haven't seen an LTT video with a white background in quite a while lol

|:Insert something funny:|

-----------------

*******

#


12GB is a goddamn waste. There is no scenario where a game can utilize that much without running at 10fps. Hell, pretty much anything over 10GB, even on the 3000 series, is a waste for gaming. I crank everything to the max on my 3090 to use 12GB and I'm down to 30fps.

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


4 minutes ago, AnonymousGuy said:

12GB is a goddamn waste. There is no scenario where a game can utilize that much without running at 10fps. Hell, pretty much anything over 10GB, even on the 3000 series, is a waste for gaming. I crank everything to the max on my 3090 to use 12GB and I'm down to 30fps.

This has been explained over and over since the 3060 launched. Due to the memory design, it was a choice of 6GB or 12GB; they went with the option more in line with AMD and modern times.

 

As for the 3090, I crank everything as well and nothing besides Warzone actually gets close to using it. There is an option in Warzone to take advantage of as much VRAM as you have, and it does nothing to the framerate. I've never played a game whose standard settings, even cranked to ultra, make a difference with this card at 3440x1440, and certainly not at framerates that low.
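To make the 6-or-12 point concrete, here's a minimal sketch of the arithmetic (the 192-bit bus is the 2060's actual spec; 1GB and 2GB are the standard GDDR6 module densities):

```python
# Why a 192-bit card ends up with either 6GB or 12GB: one GDDR6 module
# hangs off each 32-bit channel, and modules come in 1GB or 2GB densities.
BUS_WIDTH_BITS = 192
CHANNEL_WIDTH_BITS = 32

channels = BUS_WIDTH_BITS // CHANNEL_WIDTH_BITS  # -> 6 channels
for module_gb in (1, 2):
    print(f"{channels} x {module_gb}GB modules = {channels * module_gb}GB total")
# 6 x 1GB modules = 6GB total
# 6 x 2GB modules = 12GB total
```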


Kinda wish AMD would re-release the RX 590 on a tiny process, trying to hit $100-150. I never got Polaris to hit 1500MHz.

I edit my posts a lot, Twitter is @LordStreetguru just don't ask PC questions there mostly...
 

Spoiler

 

What is your budget/country for your new PC?

 

what monitor resolution/refresh rate?

 

What games or other software do you need to run?

 

 


10 minutes ago, rickeo said:

This has been explained over and over since the 3060 launched. Due to the memory design, it was a choice of 6GB or 12GB; they went with the option more in line with AMD and modern times.

 

As for the 3090, I crank everything as well and nothing besides Warzone actually gets close to using it. There is an option in Warzone to take advantage of as much VRAM as you have, and it does nothing to the framerate. I've never played a game whose standard settings, even cranked to ultra, make a difference with this card at 3440x1440, and certainly not at framerates that low.

4K with 200% resolution scaling starts pulling in huge amounts of textures, but then you're talking 8K rendering, which we're at least two generations away from being able to drive. It's basically a circlejerk exercise anyway, because there's no point in supersampling at a resolution that's already that high to begin with, even if it didn't turn the game into a slideshow.

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


As someone who uses Daz Studio (which uses Nvidia's Iray tech) for ray tracing: that 12GB sounds nice. Some of my scenes are massive; I regularly max out my GTX 1060's 6GB of VRAM as it ray traces. (If you think that's bad, try ray tracing on a GTX 650 Ti.)

 

I'll stay with my Ryzen 7 5800 and GTX 1060 until I think the next card is worth it, but then I'd want the 3090 at MSRP...


13 minutes ago, rickeo said:

This has been explained over and over since the 3060 launched. Due to the design, it was a choice of 6GB or 12GB, they went with the option more in line with AMD and modern times.

The only reason you see this is that it lets them do two things: raise the price of the card, and market based on raw numbers.

If you have a bigger number than your competitors, then you have the superior product in the eyes of many consumers. The average person is not that technologically literate; a lot of people still refer to storage drive space as "memory".

 

Saying your card has 12 gigs? Well shit, that's gotta be one powerful GPU, right?

Note as well the number of people who refer to their video cards by memory quantity alone: "I have an 8GB GPU" and the like.

 

We saw it with the RX 470: there was an 8GB RX 470, and absolutely nothing an RX 470 is capable of would ever come close to using 8GB of video memory. But it let AMD present the card as a more powerful option than the then-relevant 6GB 980 Ti and charge more for it, despite it being drastically weaker overall than a 980 Ti.

But it tricked enough tech-illiterate consumers.

 

Same thing happening here: it's a cheap GPU to produce, so slam a bunch of memory on it and you win over the entire market of 8GB and 6GB options out there.

Little Timmy with his mere 8GB GTX 1070 is about to sidegrade to a 12GB 2060 after waiting years for anything interesting to come out.


An ongoing chip shortage, yet they add another SKU to their production, and a bigger (nm-wise) chip at that?! (= fewer chips per wafer compared to a smaller-node chip)... Why not bump production of the 3060s instead, to increase supply and thus lower the price? Yeah, sure, I'm convinced.


Products like this aren't anything new; just be glad they're at least calling it a 2060.

Take a look at GT 730s to watch Fermi turn into Maxwell. Or the GTX 760 Ti, which is just a GTX 670; the only reason it didn't hit the mass market is that Dell wanted it to be exclusive to them for some reason. Or AMD figuring out how many times they could sell an HD 7970 to the same person.

 


4 minutes ago, papajo said:

An ongoing chip shortage, yet they add another SKU to their production, and a bigger (nm-wise) chip at that?! (= fewer chips per wafer compared to a smaller-node chip)... Why not bump production of the 3060s instead, to increase supply and thus lower the price? Yeah, sure, I'm convinced.

Maybe you skipped that chapter in the video? 6:04 to 8:01. 


2 minutes ago, rickeo said:

Maybe you skipped that chapter in the video? 6:04 to 8:01. 

I did not; I'm just not convinced. It stinks from every angle you look at it:

a) There is no proof of the "scenario" they mentioned (12nm nodes being underutilized).

b) They could make a 3060 (and even call it a 3050 Ti) on a bigger node; it would make the chip slightly hotter and slightly slower (due to lower clock speeds) compared to a 3060, but surely faster than a 2060.

c) Even if that scenario (which sounds like BS to me) holds true, why not make a faster GPU, e.g. the RTX 2070? Yeah, it is slightly bigger, but seriously, the yield differences (especially if we account for production processes improving since these nodes first launched half a decade ago) would be negligible.

Or why not a 2060 Super, which is more or less the same chip as the 2060, just faster?

d) Why waste so much VRAM while also claiming that prices are higher because of a VRAM shortage?

e) Why handicap the card with a narrower VRAM bus while at the same time wasting so much VRAM on it?

It's clearly just about not "eating" profits from other GPU models by limiting performance, while still making a shitload of money.

 


1 hour ago, papajo said:

a) There is no proof of the "scenario" they mentioned (12nm nodes being underutilized).

That's not the point. You asked "Why not bump production of the 3060s instead, to increase supply and thus lower the price?", and the reason is that that line is maxed out. Your comment "An ongoing chip shortage, yet they add another SKU to their production, and a bigger (nm-wise) chip at that?" implies that you think this chip would interfere with 3000-series card production, but there is no reason to believe that.

1 hour ago, papajo said:

b) They could make a 3060 (and even call it a 3050 Ti) on a bigger node; it would make the chip slightly hotter and slightly slower (due to lower clock speeds) compared to a 3060, but surely faster than a 2060.

Designing a new GPU is a lot more work than just reusing an old one and adding more RAM.

1 hour ago, papajo said:

c) Even if that scenario (which sounds like BS to me) holds true, why not make a faster GPU, e.g. the RTX 2070? Yeah, it is slightly bigger, but seriously, the yield differences (especially if we account for production processes improving since these nodes first launched half a decade ago) would be negligible.

The 2070 die is almost 20% bigger than the 2060's. That is not negligible.

 


At least they didn't rebadge it as a 3000-series card, daz good.

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


3 hours ago, papajo said:

They could make a 3060 (and even call it a 3050 Ti) on a bigger node; it would make the chip slightly hotter and slightly slower (due to lower clock speeds) compared to a 3060, but surely faster than a 2060.

The 3050 and 3050 Ti already exist in laptops, and I'm pretty sure they are 8nm. Plus, moving a GPU to a new manufacturing node is not that easy, especially when TSMC already has 12nm capacity up and running; there is a better chance they could get capacity on an older node than on a bleeding-edge one like 5 or 7nm.


First of all, trying to rebut points by separating them when they are all linked together is kinda pointless (no pun intended), because it's like missing the bigger picture.

It's like I'm saying:

"My son John had a high fever and loss of taste, tested positive for COVID, and the doctors insisted his life is in danger unless he gets into the ICU."

And you be like:
 

Quote

had a high fever

A high fever on its own does not mean that he is in danger or has COVID.

 

Quote

loss of taste

Loss of taste could be due to a million reasons, including just not brushing his tongue and teeth properly and regularly.

 

Quote

tested positive for COVID, and the doctors insisted his life is in danger unless he gets into the ICU

Testing positive for COVID doesn't mean you have to go to the ICU; younger people especially are the most likely to be asymptomatic or to have only mild symptoms.
------------------------------------

But anyway, let's do this just this once...

 

55 minutes ago, poochyena said:



Designing a new GPU is a lot more work than just reusing an old one and adding more RAM.

 

The 2070 die is almost 20% bigger than the 2060's. That is not negligible.


Nobody said anything about designing a new GPU; we are talking about taking an existing (but faster than the 2060) design, namely the 3060's, and printing it on a bigger node (it's like printing the same image, except it now takes more surface area to produce the exact same picture).

It has been done before 

 

As for whether it's negligible or not, that comes down to wafer size and the node's defect rate across the wafer's surface; in fact, a larger die size can mean fewer defective dies.

Imagine a surface with random dots spread across it; those dots are the defects. The smaller the die, the lower the probability that several dots accumulate inside a single die (because otherwise the dot spread would not be random but focused on a particular area).

Which in turn means that if one die has a dot (a defect) on its surface, it increases the chance that the neighboring dies have one or more dots on their surfaces too (since the dots focusing on one particular part of the surface is unlikely; it is more likely for them to be spread across the wafer).

 

On the other hand, if your die is bigger, the chance that it contains more than one dot increases, simply because its surface is bigger (assuming that in both scenarios the random dots, since we are talking about the same node, are on average the same in number and density per wafer).

Which means fewer defective dies.

Furthermore (depending on the wafer size and edge loss), it could be that the empty space left around the fully "printed" dies is bigger for the bigger dies (simply because no useful die can be printed in its entirety near the edge of the wafer), which also means that some of those dots will land on that empty area.


That in turn also means that, given the same wafer size, fewer 545mm² dies fit than 445mm² ones, but the difference is something like 9 dies per wafer: assuming an 8-inch wafer with 1mm edge loss in both scenarios, we would get 41 of the 545mm² dies vs 50 of the 445mm² dies (obviously, in practice, a few more could be fitted by optimizing the orientation and placement of the dies, in cases where a particular node's defects are concentrated in one area).
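As a sanity check on numbers like these, here is a minimal sketch using the textbook dies-per-wafer estimate plus a Poisson yield model. Both inputs are illustrative assumptions (a 300mm wafer and 0.1 defects/cm², neither figure is from this thread), and note that the standard model actually cuts the other way: the larger die loses a bigger fraction of its gross dies to defects.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic gross-die estimate: wafer area / die area, minus an
    edge-loss correction for partial dies around the rim."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Probability that zero randomly scattered defects land in one die."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.1 / 100  # assumed 0.1 defects/cm^2, expressed per mm^2
for area in (445, 545):  # the two die sizes discussed above
    gross = dies_per_wafer(300, area)  # assumed 300mm wafer
    good = gross * poisson_yield(area, D0)
    print(f"{area}mm^2 die: {gross} gross, ~{good:.0f} good")
# 445mm^2 die: 127 gross, ~81 good
# 545mm^2 die: 101 gross, ~59 good
```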

Having said that, again: why not print a 445mm² 3060 chip, or a 2060 Super chip, which is exactly the same size as the 2060 non-Super? Heck, why not print a GTX 1080, which is faster than an RTX 2060, has a smaller die (314mm² on a 16nm node, which according to the "underutilized older node" theory would make it even cheaper, and on a 12nm node it would be even smaller!), and would need cheaper and more readily available GDDR5 RAM?

And again, why waste the "scarce" GDDR6 VRAM, supposedly one of the reasons GPU prices are higher, and in doing so also cripple that VRAM with a narrower bus to the GPU?

 

I think they just want to add more profit to their bank account without crashing the "shortage" scam that brings them so much money, while on the other hand they don't want to cut into profits from the 3060 line by making a chip that performs equally or close to it, which would "eat up" some sales from the higher SKUs.

 

So they just serve up the slowest old shit they could on a fancy platter (double the VRAM, but keeping the bus narrow so they artificially limit GPU performance that way), while still expecting people to be interested.



42 minutes ago, Jordaneer said:

The 3050 and 3050 Ti already exist in laptops, and I'm pretty sure they are 8nm. Plus, moving a GPU to a new manufacturing node is not that easy, especially when TSMC already has 12nm capacity up and running; there is a better chance they could get capacity on an older node than on a bleeding-edge one like 5 or 7nm.

Yeah, as if it would be the first time in GPU history that laptop GPU names collide with desktop GPU names without having the same performance or the same cores/CUs.


According to Videocardz, who had an opportunity to test the card, it is quite good for mining. It seems to have twice the kH/J of the 3060. For someone who hoped to get a cheap 2060 that might sound terrible, but we need to think about the whole picture for a moment. If production of the 2060 is high enough and doesn't affect production of 3060s, we might see a lot of miners buy the 2060 instead of the 3060. With that, after some time, we might see prices of the 3060 start dropping. In theory, if the 2060 is twice as efficient as the 3060, the price ratio of the two should approach 2:1 over time.

 

Disclaimer: I don't know much about mining and just looked at the graphs in their tests. This is just an idea that might be worth considering. Maybe the whole reason Nvidia re-released the 2060 was to give miners something, so demand for the 3060 drops without affecting supply too much?

 

Edit: Hong Kong's PCMarket did the tests, not Videocardz. The article I saw is on Videocardz and is based on PCMarket's data. Also, some people say they are comparing the 2060's hashrate against incorrect data for other GPUs.
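For anyone who wants to poke at the 2:1 intuition, here's a toy payback model (every number below is an illustrative assumption, not measured data). It suggests the fair price ratio only approaches the efficiency ratio when electricity costs dominate; with cheap power, the same hashrate at half the power is worth far less than a 2x price premium:

```python
# Toy model: a miner pays more for a card only insofar as it earns more.
# Equal payback time => fair prices sit in the ratio of daily profits.
def daily_profit(hashrate_mh, watts, usd_per_mh_day=0.05, usd_per_kwh=0.10):
    revenue = hashrate_mh * usd_per_mh_day
    power_cost = watts / 1000 * 24 * usd_per_kwh
    return revenue - power_cost

# Hypothetical: same hashrate, but the 2060 draws half the power (2x kH/J).
for kwh in (0.05, 0.20):
    p3060 = daily_profit(49, 180, usd_per_kwh=kwh)
    p2060 = daily_profit(49, 90, usd_per_kwh=kwh)
    print(f"${kwh:.2f}/kWh -> implied 2060:3060 price ratio {p2060 / p3060:.2f}:1")
# $0.05/kWh -> implied 2060:3060 price ratio 1.05:1
# $0.20/kWh -> implied 2060:3060 price ratio 1.27:1
```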


7 hours ago, MSharpeWriter said:

As someone who uses Daz Studio (which uses Nvidia's Iray tech) for ray tracing: that 12GB sounds nice. Some of my scenes are massive; I regularly max out my GTX 1060's 6GB of VRAM as it ray traces. (If you think that's bad, try ray tracing on a GTX 650 Ti.)

 

 

Exactly... as I said in the news thread about the 2060 a few weeks ago, and in the new news thread about high-VRAM RTX 30 cards, a high-VRAM card is VERY attractive to 3D modelers... especially those who use NVIDIA's Iray renderer, which needs all textures and geometry loaded into VRAM to do GPU rendering rather than CPU rendering (or at least it did a year or two ago... maybe they fixed that while I've not been using it).

 

Just because it might not apply to gaming doesn't mean it's not a much-wanted feature.

🖥️ Motherboard: MSI A320M PRO-VH PLUS  ** Processor: AMD Ryzen 2600 3.4 GHz ** Video Card: Nvidia GeForce 1070 TI 8GB Zotac 1070ti 🖥️
🖥️ Memory: 32GB DDR4 2400  ** Power Supply: 650 Watts Power Supply Thermaltake +80 Bronze Thermaltake PSU 🖥️

🍎 2012 iMac i7 27";  2007 MBP 2.2 GHZ; Power Mac G5 Dual 2GHZ; B&W G3; Quadra 650; Mac SE 🍎

🍎 iPad Air2; iPhone SE 2020; iPhone 5s; AppleTV 4k 🍎


I wonder if Nvidia is binning lower-tier TU106 dies into RTX 2060s with double-density memory dies to sell at a premium, and will re-release the RTX 2070 8GB at an even higher premium once they get enough well-binned dies. The 2070 has a much better balance of memory bandwidth and capacity than a 2060 with 12GB.

 

 


No other GPUs are being produced on 12nm, like the RX 590, Linus?

Made In Brazil 🇧🇷


I have one on hand, as I'm working at a computer shop.

GALAX RTX 2060 PLUS.

They do mark the 12GB of VRAM, though.

Won't reveal the cost price. Local retail price: $680.
$20 rebate if bundled with a Silverstone VIVA 750W (retail @ $80).

 

P.S.: The profit margin is usually 10%; you can work out a plausible cost price yourself.


