
RTX 4000 series: worth the wait or not?

Winterlight

I'm thinking about upgrading my GPU, but I'm not sure anymore whether it's worth getting a 3080 now or waiting for the 4070. From the leaks it looks like the whole RTX 4000 series has very low memory bandwidth; the 4070 is only 432 GB/s, which looks very strange since even my current 2080 has more. Even the 4080 gets much lower bandwidth compared to the 3080; even the 10 GB version has more. I'm not sure, but it looks like the 4000 series will have a memory bottleneck, except for the 4090?
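For reference, peak memory bandwidth is just the per-pin data rate times the bus width, so here's a quick sketch of where these numbers come from. The 4070 line is only my guess at a configuration that matches the leaked 432 GB/s, not a confirmed spec:

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

cards = {
    "RTX 2080 (14 Gbps GDDR6, 256-bit)":       (14.0, 256),  # 448 GB/s
    "RTX 3080 10GB (19 Gbps GDDR6X, 320-bit)": (19.0, 320),  # 760 GB/s
    "Rumoured 4070 (18 Gbps GDDR6, 192-bit?)": (18.0, 192),  # 432 GB/s -- matches the leak, unconfirmed
}

for name, (rate, width) in cards.items():
    print(f"{name}: {peak_bandwidth_gbs(rate, width):.0f} GB/s")
```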


I'd expect them to be announced/released around Q4. If you're happy with your current GPU's performance and can wait until then, and given your concerns, just wait it out. If you want something now, buy something now. We can't answer those questions any sooner than when reviewers release their benchmarks.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


Yeah, but that memory bandwidth cut looks so strange, and I'm a bit scared that after they release the 4000 series the crypto miners could come back and I won't be able to get a GPU at all, or at least not at a normal price.


3 minutes ago, Winterlight said:

Yeah, but that memory bandwidth cut looks so strange, and I'm a bit scared that after they release the 4000 series the crypto miners could come back and I won't be able to get a GPU at all, or at least not at a normal price.

There was a lot more going on than just crypto miners, but until the rumours are officially confirmed it's not much more than a crystal ball question and anyone's guess.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


Mining is dead in the water right now; I can't see a way of those *negative* rates turning positive. Meaning the electricity costs more than the gains from it.
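To put those *negative* rates in numbers: daily profit is just coin revenue minus the electricity bill. A rough sketch with made-up placeholder figures (not real market data):

```python
# Rough daily mining profit: revenue from hashrate minus electricity cost.
# All numbers below are illustrative placeholders, not current market data.
def daily_profit(hashrate_mhs, usd_per_mhs_per_day, card_watts, usd_per_kwh):
    revenue = hashrate_mhs * usd_per_mhs_per_day        # what the hashrate earns per day
    power_cost = card_watts / 1000 * 24 * usd_per_kwh   # 24 hours of electricity
    return revenue - power_cost

# e.g. a 100 MH/s card earning $0.01 per MH/s per day, drawing 300 W at $0.30/kWh
print(daily_profit(100, 0.01, 300, 0.30))  # 1.00 - 2.16 = about -$1.16/day
```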

As for the memory bandwidth, that alone doesn't tell the whole story. Sure, the higher end of this gen had 700 GB/s and above... but there's a lot more involved. The type of memory used obviously affects speeds, and there's far more to a GPU's overall power than sheer memory bandwidth alone.

Is that even accurate information?


It's a simple answer... If you need a card now, buy now. If you want to/can wait, then wait. There's always going to be something newer coming around the corner.

 

Now, with regards to leaks, we don't actually know what the final specs of the 40-series cards will be yet, so it seems kind of pointless to make decisions based on speculation alone. I found the 4070 leaks highly suspicious and definitely wouldn't expect a 160-bit card in that price range.

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


4 minutes ago, Motifator said:

Mining is dead in the water right now; I can't see a way of those *negative* rates turning positive. Meaning the electricity costs more than the gains from it.

As for the memory bandwidth, that alone doesn't tell the whole story. Sure, the higher end of this gen had 700 GB/s and above... but there's a lot more involved. The type of memory used obviously affects speeds, and there's far more to a GPU's overall power than sheer memory bandwidth alone.

Is that even accurate information?

But if the new GPUs have more power, it's possible that mining will be worth it again and they'll come back.


Not to me.

 

I have never really been happy with the performance of any of my previous GPUs.

My FTW3 Ultra 2080 Tis had 24/7 overclocks. With the GTX 1080 Ti and 980 Ti I used two in SLI.

My 30 series cards are stock and my games run perfectly on them.

 

What I don't like about them is that they are a bit hard to cool, with the exception of the 3090 Ti.

 

The other thing I don't like about them is the amount of VRAM some have.

Going from 11 GB of VRAM with the 1080 Ti/2080 Ti to 12 GB was not an upgrade, and my Strix 3080 10 GB did not have enough for me to keep it.

Not having a sensible 16 GB version made 24 GB the only alternative for me.

 

  

RIG#1 CPU: AMD, R 7 5800x3D| Motherboard: X570 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3200 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX500 2.5" 2TB | Monitor: ASUS ROG Swift PG42UQ

 

RIG#2 CPU: Intel i9 11900k | Motherboard: Z590 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3600 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1300 G+ | Case: Lian Li O11 Dynamic EVO | Cooler: Noctua NH-D15 | SSD#1: SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX300 2.5" 1TB | Monitor: LG 55" 4k C1 OLED TV

 

RIG#3 CPU: Intel i9 10900kf | Motherboard: Z490 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 4000 | GPU: MSI Gaming X Trio 3090 | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Crucial P1 1TB | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV

 

RIG#4 CPU: Intel i9 13900k | Motherboard: AORUS Z790 Master | RAM: Corsair Dominator RGB 32GB DDR5 6200 | GPU: Zotac Amp Extreme 4090  | PSU: EVGA 1000 G+ | Case: Streacom BC1.1S | Cooler: EK 360mm AIO | SSD: Corsair MP600 1TB  | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV


7 minutes ago, Winterlight said:

But if the new GPUs have more power, it's possible that mining will be worth it again and they'll come back.

Mining performance is anyone's guess as well. We have no idea how much better they'll perform compared to the 3000 series (possibly worse, if the memory really is slower, since ETH likes fast memory, or if Nvidia keeps LHR around), and we have no idea whether crypto will have survived or how much it will be worth half a year from now. TSMC could burn down and we might not even have GPUs at all.

Crystal: CPU: i7 7700K | Motherboard: Asus ROG Strix Z270F | RAM: GSkill 16 GB@3200MHz | GPU: Nvidia GTX 1080 Ti FE | Case: Corsair Crystal 570X (black) | PSU: EVGA Supernova G2 1000W | Monitor: Asus VG248QE 24"

Laptop: Dell XPS 13 9370 | CPU: i5 10510U | RAM: 16 GB

Server: CPU: i5 4690k | RAM: 16 GB | Case: Corsair Graphite 760T White | Storage: 19 TB


2 hours ago, jones177 said:

Not to me.

 

I have never really been happy with the performance of any of my previous GPUs.

My FTW3 Ultra 2080 Tis had 24/7 overclocks. With the GTX 1080 Ti and 980 Ti I used two in SLI.

My 30 series cards are stock and my games run perfectly on them.

 

What I don't like about them is that they are a bit hard to cool, with the exception of the 3090 Ti.

 

The other thing I don't like about them is the amount of VRAM some have.

Going from 11 GB of VRAM with the 1080 Ti/2080 Ti to 12 GB was not an upgrade, and my Strix 3080 10 GB did not have enough for me to keep it.

Not having a sensible 16 GB version made 24 GB the only alternative for me.

 

  

Yeah, Nvidia really stagnated VRAM capacity. It used to double every gen: the GTX 1080 (x04 chip) at 8GB was double the GTX 980's 4GB (also an x04 chip), which doubled the 2GB of the Kepler-equivalent x04 chip in the GTX 680, which in turn was double the previous x04 chip (GTX 560) at 1GB. Then it stopped moving; every x04 chip has been 8GB since the 1080, three gens running: the 1080, 2080, and 3070.

 

The 80 Ti cards had a similar stagnation. They were half the VRAM of their generation's respective Titan card for a while before it stagnated: the GTX 780 Ti at 3GB was half of the Titan Black's 6GB, and the 980 Ti doubled that to 6GB, half the VRAM of the Titan X at 12GB. Then the Pascal Titan came out, also at 12GB, so they made the 1080 Ti 11GB. The Titan RTX came out at 24GB, but they left the 2080 Ti at 11GB again.

Zen 3 Daily Rig (2022 - Present): AMD Ryzen 9 5900X + Optimus Foundations AM4 | Nvidia RTX 3080 Ti FE + Alphacool Eisblock 3080 FE | G.Skill Trident Z Neo 32GB DDR4-3600 (@3733 c14) | ASUS Crosshair VIII Dark Hero | 2x Samsung 970 Evo Plus 2TB | Crucial MX500 1TB | Corsair RM1000x | Lian Li O11 Dynamic | LG 48" C1 | EK Quantum Kinetic TBE 200 w/ D5 | HWLabs GTX360 and GTS360 | Bitspower True Brass 14mm | Corsair 14mm White PMMA | ModMyMods Mod Water Clear | 9x BeQuiet Silent Wings 3 120mm PWM High Speed | Aquacomputer Highflow NEXT | Aquacomputer Octo

 

Test Bench: 

CPUs: Intel Core 2 Duo E8400, Core i5-2400, Core i7-4790K, Core i9-10900K, Core i3-13100, Core i9-13900KS

Motherboards: ASUS Z97-Deluxe, EVGA Z490 Dark, EVGA Z790 Dark Kingpin

GPUs: GTX 275 (RIP), 2x GTX 560, GTX 570, 2x GTX 650 Ti Boost, GTX 980, Titan X (Maxwell), x2 HD 6850

Bench: Cooler Master Masterframe 700 (bench mode)

Cooling: Heatkiller IV Pro Pure Copper | Koolance GPU-210 | HWLabs L-Series 360 | XSPC EX360 | Aquacomputer D5 | Bitspower Water Tank Z-Multi 250 | Monsoon Free Center Compressions | Mayhems UltraClear | 9x Arctic P12 120mm PWM PST


4 hours ago, Winterlight said:

I'm thinking about upgrading my GPU, but I'm not sure anymore whether it's worth getting a 3080 now or waiting for the 4070. From the leaks it looks like the whole RTX 4000 series has very low memory bandwidth; the 4070 is only 432 GB/s, which looks very strange since even my current 2080 has more. Even the 4080 gets much lower bandwidth compared to the 3080; even the 10 GB version has more. I'm not sure, but it looks like the 4000 series will have a memory bottleneck, except for the 4090?

Well... First, they are leaks, and those are based on engineering samples and incorrect interpretation of information. For example, an engineering sample GPU might be connected to a dedicated 1000W PSU just to isolate all power-related variables during testing, but a passer-by/leaker can take that as "Oh WOW, this new GPU needs 1000W to operate!!!!".

 

So, official specs will only be known once the official announcement is made.

 

Second, the specs don't really matter... What matters is whether the performance you need is being delivered.
A Tesla has a 0-cylinder engine and yet beats an 8-cylinder high-performance car in a drag race, if you get what I am saying.

 

Third, GREAT! If it sucks, then no one will buy it, so the price will go down, and you'll get better value for your money.
 

But there is more to it than just memory bandwidth. Optimization is also a key player here. If it uses a new compression system that allows lower memory bandwidth, and therefore also needs less memory than the competition (something Nvidia has done in the past) while delivering better performance, then it's a big win (easier to produce, more GPUs on shelves, and you gain performance)... We just don't know. It's Nvidia's secret sauce. Much like AMD, which also has its share of secret sauces.
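To illustrate the compression point with a toy example: if some new lossless compression scheme cut, say, 25% of the traffic that would otherwise hit memory (a purely hypothetical number, not a known 40-series figure), a narrower bus would behave like a wider one:

```python
# Effective bandwidth: if compression removes a fraction of the bytes that
# would otherwise cross the memory bus, the bus acts as if it were wider.
def effective_bandwidth(raw_gbs: float, traffic_saved: float) -> float:
    # traffic_saved = fraction of bytes that no longer need to be transferred
    return raw_gbs / (1 - traffic_saved)

# e.g. a 432 GB/s bus with a hypothetical 25% traffic saving acts like ~576 GB/s
print(effective_bandwidth(432, 0.25))
```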

 

And so, since we don't know, what matters is the end result and how it affects you.

 

You can be sure that:

  • The 4080 will be more powerful than the 3080
  • The 4090 will be more powerful than the 3090
  • and so on.

If you are thinking that the 4070 will match the performance of the 3090 Ti... that is up in the air... Sometimes Nvidia has made such moves with past models relative to the previous generation's highest-end cards; other times not so much.

 

It is very unlikely that Nvidia will release a GPU that is weaker than its last gen. While of course anything is possible, this isn't something to lose sleep over: just get the competition instead, or wait until the price drops to a fair value. That will entice Nvidia to do better next time.

 

The big question you should be asking, as others have mentioned above, is: what do you have now, and does it meet your needs? Can you still play games with decent performance and decent visuals? Yes, we all want the latest and greatest toys, yesterday. And if that is you and you have money to burn, then just pre-order a GeForce 4090 Ti today. I am sure that if you bring one million dollars to Newegg, they'll make that happen just for you. They might include a free car wash as well. But this is where you need to evaluate whether you can wait or not.

 


11 hours ago, GoodBytes said:

But there is more to it than just memory bandwidth. Optimization is also a key player here. If it uses a new compression system that allows lower memory bandwidth, and therefore also needs less memory than the competition (something Nvidia has done in the past) while delivering better performance, then it's a big win (easier to produce, more GPUs on shelves, and you gain performance)... We just don't know. It's Nvidia's secret sauce. Much like AMD, which also has its share of secret sauces.

So much this! Optimization is a huge factor in performance. I think of it like having a Lamborghini in rush hour traffic: great, you can push 700 GB/s, but all of that doesn't mean anything if you're stuck behind a truck moving at 10 MPH.


@OP, almost all 30 series cards can run most newer games at 1080p at 60-120+ fps at max settings, unless the game is poorly optimized. It really depends on your personal needs. If you're looking to save some money, waiting for the 40 series to release and then buying a 30 series is what I would do at this point. There aren't a lot of games right now that can make a 30 series struggle, unless you're trying to play at higher resolutions. I loaded up BF4 the other night for fun (yes, it's a game from 2013), and I was getting 180-200+ fps at 1080p at max settings with zero issues.

