OMG Nvidia GPU Models/Names Ridiculous!

bigjoncoop

I've always been frustrated when shopping for NVIDIA GPUs and Intel CPUs!

 

For that reason alone I am an AMD fanboy!

 

Unfortunately, though, I use my GPUs for everything but gaming, so I usually have to stick with NVIDIA GPUs for Adobe, etc... 

 

I was just looking to see what's available on sale, and it's totally impossible to just look at a GPU model and know whether it's better or worse than any other one!

 

For instance, a GTX 1080 scores at least 120% higher in all major categories versus a GTX 1650... can someone explain how that's supposed to make sense?

5 minutes ago, bigjoncoop said:

I've always been frustrated when shopping for NVIDIA GPUs and Intel CPUs!

 

For that reason alone I am an AMD fanboy!

 

Unfortunately, though, I use my GPUs for everything but gaming, so I usually have to stick with NVIDIA GPUs for Adobe, etc... 

 

I was just looking to see what's available on sale, and it's totally impossible to just look at a GPU model and know whether it's better or worse than any other one!

 

For instance, a GTX 1080 scores at least 120% higher in all major categories versus a GTX 1650... can someone explain how that's supposed to make sense?

Haha, you're right! What you should do is swap the first two and last two digits:

so 1650 => 5016

1080 => 8010 

Then the second is obviously better 😄

 

 

System : AMD R9 5900X / Gigabyte X570 AORUS PRO/ 2x16GB Corsair Vengeance 3600CL18 ASUS TUF Gaming AMD Radeon RX 7900 XTX OC Edition GPU/ Phanteks P600S case /  Eisbaer 280mm AIO (with 2xArctic P14 fans) / 2TB Crucial T500  NVme + 2TB WD SN850 NVme + 4TB Toshiba X300 HDD drives/ Corsair RM850x PSU/  Alienware AW3420DW 34" 120Hz 3440x1440p monitor / Logitech G915TKL keyboard (wireless) / Logitech G PRO X Superlight mouse / Audeze Maxwell headphones

I mean, as numbering systems go, it's not too bad?

 

The first one or two digits are the series; the last two are the model within that series.

 

The only hiccup is that they jumped from 10 to 16 to 20 for *reasons*. Other than that, the numbering looks fairly sane, and generally a new generation shifts the performance tiers down one notch (roughly).

 

So a 1080 ~= 2070 ~= 3060

 

You can even go further back: 980 Ti ~= 1070 ~= 2060 ~= 3050, etc.

 

The 16 series was a fill-in for low-end 20 series cards that didn't have ray-tracing cores; they essentially slot in below the 2060, so a 1660 could be a "2050" and a 1650 a "2040" if you like.
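The rule of thumb above (each new generation shifts the same performance level down roughly one tier) can be sketched as a toy heuristic. The `effective_tier` name and scoring are made up for illustration; the 16 series and anything before the 10 series don't fit it cleanly:

```python
def effective_tier(model: int) -> int:
    """Toy cross-generation rank for GeForce model numbers (10 series onward).

    The last two digits are the tier within a generation; the leading digits
    are the generation. Since each new generation shifts the same performance
    level down about one tier (10 points), generation + tier lines up roughly.
    """
    generation, tier = divmod(model, 100)  # e.g. 3060 -> (30, 60)
    return generation + tier

# 1080 ~= 2070 ~= 3060 under this heuristic: all score 90
print(effective_tier(1080), effective_tier(2070), effective_tier(3060))
```

Which also makes the thread's opening example unsurprising: a 1080 (score 90) sits well above a 1650 (score 66).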

16 minutes ago, bigjoncoop said:

For instance, a GTX 1080 scores at least 120% higher in all major categories versus a GTX 1650... can someone explain how that's supposed to make sense?

I find it funny that you're completely right in your point about bad naming, but you chose the worst example...

It makes perfect sense that the 1080 is way faster than the 1650: the 1650 is the bottom of its generation, while the 1080 is the top of its own.

 

Now, you want to really complain? The GTX 1060 is a stunning example. The 3GB and 6GB versions differ not just in memory, but in CUDA cores! The 1060 6GB should have been called the 1060 Ti. And the GT 1030... don't even get me started. Let alone the MX130 for laptops!

 

More recently we've seen major complaints about the RTX 4080/4070 Ti. They wanted to release two 4080 models with different RAM and CUDA core configurations! It would have been a nightmare; community backlash made them rebrand one as the 4070 Ti. But honestly, they should have just called it the 4070 and made it cheaper.

Edited by Fasauceome

I WILL find your ITX build thread, and I WILL recommend the SIlverstone Sugo SG13B

 

Primary PC:

i7 8086k - EVGA Z370 Classified K - G.Skill Trident Z RGB - WD SN750 - Jedi Order Titan Xp - Hyper 212 Black (with RGB Riing flair) - EVGA G3 650W - dual booting Windows 10 and Linux - Black and green theme, Razer brainwashed me.

Draws 400 watts under max load, for reference.

 

How many watts do I need | ATX 3.0 & PCIe 5.0 spec, PSU misconceptions, protections explained | group reg is bad

23 minutes ago, bigjoncoop said:

For that reason alone I am an AMD fanboy!

 

Unfortunately, though, I use my GPUs for everything but gaming, so I usually have to stick with NVIDIA GPUs for Adobe, etc... 

At least you didn't have to endure the Russian roulette that is installing AMD graphics drivers.

Every time I updated the drivers I had to pray that things would work as they should, and in a lot of cases they didn't.

 

23 minutes ago, bigjoncoop said:

I was just looking to see what's available on sale, and it's totally impossible to just look at a GPU model and know whether it's better or worse than any other one!

 

For instance, a GTX 1080 scores at least 120% higher in all major categories versus a GTX 1650... can someone explain how that's supposed to make sense?

Rough performance equivalents:

GTX 1650 ≈ GTX 1050 Ti

GTX 1650 Super ≈ GTX 1060

GTX 1660 ≈ GTX 1070

GTX 1660 Ti ≈ GTX 1070

GTX 1660 Super ≈ GTX 1070

RTX 3050 ≈ GTX 1070

RTX 2060 ≈ GTX 1080

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566
10 minutes ago, Dean0919 said:

they invented ray tracing

Nvidia didn't invent ray tracing.

 

I actually like the naming scheme of NVIDIA GPUs; I just wish they didn't botch the whole thing:

 

XXYYaa

 

XX = generation

YY = model, ordered lowest to highest tier

aa = Ti/Super

 

Ti for a higher-performance version of the regular card,

Super for mid-generation updates.

 

 

And then specialty cards (like the Titan) get specific names.
The problem comes in with not being specific, using weird intervals, and naming inconsistently.

Why would NVIDIA give cards with the same model number different specs, when the only difference should be the amount of available RAM?

 

An xx90 should always have been the top-tier consumer card, with anything higher getting a specific name.

If, a year into the card's lifecycle, there is enough performance headroom, then a Ti or Super can be released.
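The XXYYaa breakdown above can be expressed as a tiny parser. This is a hypothetical sketch: the regex and the `parse_geforce` name are mine, and it only covers the four-digit GTX/RTX era:

```python
import re

# XX = generation, YY = tier, aa = optional Ti/Super suffix
GEFORCE = re.compile(r"^(GTX|RTX)\s*(\d{1,2})(\d{2})\s*(Ti|Super)?$", re.IGNORECASE)

def parse_geforce(name: str) -> dict:
    """Split a name like 'RTX 3080 Ti' into its XXYYaa parts."""
    m = GEFORCE.match(name.strip())
    if not m:
        raise ValueError(f"doesn't fit the XXYYaa pattern: {name!r}")
    prefix, generation, tier, variant = m.groups()
    return {
        "prefix": prefix.upper(),
        "generation": int(generation),
        "tier": int(tier),
        "variant": (variant or "base").title(),
    }

print(parse_geforce("RTX 3080 Ti"))
# {'prefix': 'RTX', 'generation': 30, 'tier': 80, 'variant': 'Ti'}
```

Specialty names like Titan, and oddballs like the two 1060s, are exactly the cases a scheme this simple can't encode, which is the complaint above.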

 

 

If your question is answered, mark it so.  | It's probably just coil whine, and it is probably just fine |   LTT Movie Club!

Read the docs. If they don't exist, write them. | Professional Thread Derailer

Desktop: i7-8700K, RTX 2080, 16GB 3200MHz, EndeavourOS (host), Win10 (VFIO), Fedora (VFIO)

Server: Ryzen 9 5900X, GTX 970, 64GB 3200MHz, Unraid.

 

2 minutes ago, Vishera said:

At least you didn't have to endure the Russian roulette that is installing AMD graphics drivers.

Every time I updated the drivers I had to pray that things would work as they should, and in a lot of cases they didn't.

With AMD, it's usually a good idea to download and keep the installer you know worked before any updates. I always have at least the last three versions that were stable for my hardware.

Ryzen 5800X3D (because who doesn't like a phat stack of cache?) GPU - 7700 XT

X470 Strix F Gaming, 32GB Corsair Vengeance, WD Blue 500GB NVMe + WD Blue 2TB HDD, 700W EVGA BR

~Extra L3 cache is exciting: every time you load up a new game or program you never know what you're going to get. Will it perform like a 5700X or are we beating the 14900K today? 😅~

19 minutes ago, bigjoncoop said:

can someone explain how that's supposed to make sense?

I think NVIDIA explained the naming as the 16 series sitting between the RTX 2000 and GTX 1000 series in terms of performance, but closer to the RTX 2000; that's why they went with 16. But yeah, I don't know what they should have named it otherwise, as it does use the Turing architecture but lacks the 20 series' exclusive Tensor and RT cores.

22 minutes ago, bigjoncoop said:

I've always been frustrated when shopping for NVIDIA GPUs and Intel CPUs!

 

For that reason alone I am an AMD fanboy!

 

Unfortunately, though, I use my GPUs for everything but gaming, so I usually have to stick with NVIDIA GPUs for Adobe, etc... 

 

I was just looking to see what's available on sale, and it's totally impossible to just look at a GPU model and know whether it's better or worse than any other one!

 

For instance, a GTX 1080 scores at least 120% higher in all major categories versus a GTX 1650... can someone explain how that's supposed to make sense?

I also don't really understand AMD's naming scheme in terms of XT, X, and now XTX. Why are cards named XT as standard? Wasn't XT meant to be AMD's Ti, and X their equivalent of NVIDIA's Super? With the RX 6000 series, I could only see "normal" cards and XT cards. Now with the 7000 series, there are "normal" ones, XT ones, and XTX ones. Is XTX soon gonna become the standard, with the better version called XTXT? Wasn't that the naming scheme for Ryzen CPUs too? Like non-X(T) being the lowest tier of that CPU, X being the better one, and XT being the best one?

23 minutes ago, bigjoncoop said:

I've always been frustrated when shopping for NVIDIA GPUs and Intel CPUs!

 

For that reason alone I am an AMD fanboy!

 

Unfortunately, though, I use my GPUs for everything but gaming, so I usually have to stick with NVIDIA GPUs for Adobe, etc... 

 

I was just looking to see what's available on sale, and it's totally impossible to just look at a GPU model and know whether it's better or worse than any other one!

 

For instance, a GTX 1080 scores at least 120% higher in all major categories versus a GTX 1650... can someone explain how that's supposed to make sense?

To the best of my knowledge, and for NVIDIA historically, the first number(s) indicate the series/generation and the last two numbers indicate the class within that series/generation.

 

...You think AMD has better naming?

Okay... then explain to me RX 580, R9, Vega 56, RX 5500 <- which of these is faster? lol

6 minutes ago, DreamCat04 said:

I think NVidia explained the naming scheme as it being between RTX 2000 and GTX 1000 in therms of performance but it was closer to the RTX 2000, that's why they went with 16. But yeah, I don't know what they should have named it otherwise as it does have the turing architecture but it doesn't have the 20 series exclusive tensor and RTX cores

The 16 series can use RTX (ray tracing) but not DLSS.

I tested RTX with my GTX 1660 using the NVIDIA RTX demos and by playing Control.

RTX on 16 series cards is not a good idea; 30 FPS at 1080p is pretty much the performance I get in all the RTX games and demos I tried.

2 minutes ago, Vishera said:

The 16 series can use RTX (ray tracing) but not DLSS.

I tested RTX with my GTX 1660 using the NVIDIA RTX demos and by playing Control.

RTX on 16 series cards is not a good idea; 30 FPS at 1080p is pretty much the performance I get in all the RTX games and demos I tried.

Ah, a 1660 (Ti) supports ray tracing? Well, in a ray-traced game where I get around 30-40 FPS on my 2060, I get like 7-9 on my 1660 Ti (although it's the 115W laptop version and it's drawing like 90W).

I wish GPUs had something similar to CPUs' i3/i5/i7/i9 and R3/R5/R7/R9; at least then we'd all understand what class of performance to expect.

I mean I don't really see how AMD's naming scheme is any better or worse than Intel's or Nvidia's right now. Sure, Nvidia has the 16 series, but let us not forget AMD's long-running mismatch between mobile/desktop CPU SKUs for example, or the fact that they went from the 500 series to the 5000 series.

Out of all the things I don't like in Nvidia, their naming scheme ranks pretty low on that list.

 

AMD Ryzen R7 1700 (3.8GHz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4x4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SSD + 7TB HDD, Phanteks Enthoo Pro case

1 minute ago, Coaxialgamer said:

I mean I don't really see how AMD's naming scheme is any better or worse than Intel's or Nvidia's right now. Sure, Nvidia has the 16 series, but let us not forget AMD's mismatch between mobile/desktop CPU SKUs for example, or the fact that they went from the 500 series to the 5000 series.

 

Wasn't there also Ryzen 4000, which was purely mobile (except for the PS5, which used the 4000 series' architecture, IIRC)?

4 minutes ago, Coaxialgamer said:

I mean I don't really see how AMD's naming scheme is any better or worse than Intel's or Nvidia's right now. Sure, Nvidia has the 16 series, but let us not forget AMD's mismatch between mobile/desktop CPU SKUs for example, or the fact that they went from the 500 series to the 5000 series.

 

Right, I've been meaning to ask a similar question on the forum: is there any meaning behind these numbers? Is there some individual component or component revision they get them from, or do they just have some fresh-out-of-college punk with a minor in public relations telling them, "hey guys, 7000 sounds coolest to the kids on Reddit this year"?

2 minutes ago, DreamCat04 said:

Wasn't there also Ryzen 4000, which was purely mobile (except for the PS5, which used the 4000 series' architecture, IIRC)?

Yes, and I was hoping they would continue this naming method; keeping it that way would have made everything so much clearer.

Odd numbers = desktop / even numbers = mobile

4 minutes ago, DreamCat04 said:

Wasn't there also Ryzen 4000, which was purely mobile (except for the PS5 which used the architecture of 4000 series iirc

Yes, and IIRC the mobile SKUs started with the 2000 series while desktop parts got the 1000 series, and the desktop 2000 series actually ended up using a different architecture...

15 minutes ago, DreamCat04 said:

I get like 7-9 on my 1660 Ti (although it's the 115W laptop version and it's drawing like 90W)

I am using a desktop GPU, which is significantly faster; mine is also overclocked and draws 140W...

7 minutes ago, lotus10101 said:

Right, I've been meaning to ask a similar question on the forum: is there any meaning behind these numbers? Is there some individual component or component revision they get them from, or do they just have some fresh-out-of-college punk with a minor in public relations telling them, "hey guys, 7000 sounds coolest to the kids on Reddit this year"?

I seem to remember that the jump to the 7000 series was an effort on AMD's part to match the generation number between its desktop and mobile CPUs (a self-inflicted problem, might I add). The fact that it also represented a shift to DDR5/AM5 made it as good a time as any to do that.

Both companies usually try to explain away their naming schemes on their websites, but the schemes' increasing complexity can only really mean they're hardly meaningful or self-consistent.

3 minutes ago, Coaxialgamer said:

I seem to remember that the jump to the 7000 series was an effort on AMD's part to match the generation number between its desktop and mobile CPUs. The fact that it also represented a shift to DDR5/AM5 made it as good a time as any to do that.

Both companies usually try to explain away their naming schemes on their websites, but the schemes' increasing complexity can only really mean they're hardly meaningful or self-consistent.

Also, I don't understand why AMD made the likes of the 6750 XT. Why make the 700-class card even better? Was that an attempt at saying, "Hey, our 700-class GPU can kick the a$$ of NVIDIA's 3070!" (even though the 6700 XT can't; no clue how that works)?

 

5 minutes ago, Vishera said:

I am using a desktop GPU, which is significantly faster; mine is also overclocked and draws 140W...

In "normal" games like Fortnite, I can crank the settings and get more than 60 FPS, sometimes peaking in the 90s.

Edited by DreamCat04
3 minutes ago, Coaxialgamer said:

I seem to remember that the jump to the 7000 series was an effort on AMD's part to match the generation number between its desktop and mobile CPUs (a self-inflicted problem, might I add). The fact that it also represented a shift to DDR5/AM5 made it as good a time as any to do that.

Both companies usually try to explain away their naming schemes on their websites, but the schemes' increasing complexity can only really mean they're hardly meaningful or self-consistent.

I think they've all gotten to the point a lot of software and game companies have reached, where they stop trying to add numbers, just call the product what it is, and update it endlessly. They should just class them, such as i7/i9 and R7/R9, etc., and give us revision numbers, much like AMD drivers, which are just the release date plus a version within that release.

2 minutes ago, DreamCat04 said:

Also, I don't understand why AMD made the likes of the 6750 XT. Why make the 700-class card even better? Was that an attempt at saying, "Hey, our 700-class GPU can kick the a$$ of NVIDIA's 3070!" (even though the 6700 XT can't; no clue how that works)?

It's a mid-generation refresh. Basically, through advancements in manufacturing, AMD is now able to clock their cards a bit higher out of the factory. It's actually a fairly common practice in the industry, for example:

  • the entire AMD RX 500 series and arguably the 300 series too
  • A decent chunk of the AMD RX 200 series
  • Most of Nvidia's 700 series, and their budget cards
  • Intel's 4790K refresh (and to some extent the 7000 series)
  • and more

3 minutes ago, Coaxialgamer said:

It's a mid-generation refresh

Ah, okay. I wasn't really into the PC parts space when that came out; I only got into it a year or so ago.

15 minutes ago, DreamCat04 said:

Also, I don't understand why AMD made the likes of the 6750 XT.

It was a refresh of the generation and an homage to the HD 6750, just as the 6950 XT was an homage to the HD 6950.

And SAPPHIRE did their own homage there: the 6950 XT TOXIC was an homage to the HD 6950 TOXIC.
