
Nvidia and AIBs refuse to seed RTX 4060Ti 16GB Samples to Reviewers | Snubbing Reviewers and Gamers

6 minutes ago, AlTech said:

Effectively for almost nothing if the APU costs $250-300. You're maybe paying $50 more compared to a hypothetical 8600. That's not a bad deal if you only need 3050 levels of perf.

This is what I was getting at. The low cost dGPU space could use higher performing "APUs" to fill in that gap.

 

6 minutes ago, AlTech said:

Why not?

A product whose performance was designed around a 256-bit memory bus gets put on 128-bit. Do I really need to say why that will suck?

 

6 minutes ago, AlTech said:

Instead of people having 16GB or 32GB of RAM and 8GB of VRAM they can have 16GB or 32GB shared RAM. Meaning their effective VRAM is whatever the OS + games don't use for RAM.

To an extent. Apple and consoles get away with it as they have a much wider bus. It isn't going to end well on 128-bit DDR5.
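
To put rough numbers on that (a minimal sketch; the memory speeds and the 3050/console reference figures are my own approximations, not anything stated in this thread):

```python
# Rough peak-bandwidth arithmetic: bus width in bytes * transfer rate.
def peak_bw_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_bits / 8 * mt_per_s / 1000

configs = {
    "AM5 APU, 128-bit DDR5-6000": (128, 6000),        # dual-channel desktop DDR5
    "Hypothetical 256-bit DDR5-6000": (256, 6000),
    "RTX 3050, 128-bit GDDR6 14 Gbps": (128, 14000),  # approximate dGPU reference
    "Console-style 256-bit GDDR6": (256, 14000),      # approximate console reference
}

for name, (bits, rate) in configs.items():
    print(f"{name:<34} ~{peak_bw_gbs(bits, rate):.0f} GB/s")
```

Under those assumptions a 3050-class APU on 128-bit DDR5 has well under half the bandwidth of the dGPU it would replace, and the CPU still has to share it.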

 

6 minutes ago, AlTech said:

Cache uses too much die space and requires too much effort. AMD wants to design products that can be dropped into laptops, and if they want one on desktop it needs to also be drop-in with minimal configuration, just like Renoir and Cezanne on AM4 existed as the 4000G and 5000G series respectively.

I over-trimmed my reply somewhere, but if you're going for a high-end APU, you can possibly afford to do some more expensive things along with it. Someone will have to work out the cost-benefit on those options. I wouldn't totally rule them out, but at the same time I'm not saying they will happen either.

 

6 minutes ago, AlTech said:

[Citation Needed]

I believe it's a typo. Possibly spread by Nvidia to identify leakers.

Which part? 4N references are varied, but here's one from a more credible site: https://www.tomshardware.com/news/nvidia-details-grace-hopper-cpu-superchip-design-144-cores-on-4n-tsmc-process

 

From lesser-known sources I saw a claim that 4N is a derivative of N5, not N4.

 

6 minutes ago, AlTech said:

8GB of VRAM costs a lot less than $100 and Nvidia's marking up the VRAM increase.

Don't confuse BOM costs with retail costs. You do not go into a restaurant and expect to pay only for the ingredients.

 

6 minutes ago, AlTech said:

Overall AD106 is expensive because Nvidia chose to make Ada Lovelace Gen expensive the same way they've chosen to make every gen since Turing expensive by designing expensive monolithic dies.

My estimates on silicon cost put Nvidia and AMD quite close. For example, I estimate Navi 31 + 6x MCD costs about the same as AD103, and they perform similarly if you limit the comparison to raster only.

 

Dies around AD106's size are small enough that yield isn't a major factor. Basically, monolithic still has legs, and will do for a while yet. AMD has only just started exploring chiplet GPUs this gen, with one design. It may make more sense as we go forward, but it isn't a must-have right now.

 

6 minutes ago, AlTech said:

If Nvidia is charging more because AD106 is more expensive then perhaps they should consider investing in chiplet technology instead of ramping up die sizes and spending tons of die space on features few care about like ray tracing.

AMD disagrees with you given there's RT on two gens as well as the consoles.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


18 minutes ago, leadeater said:

N4 is still fundamentally an N5 node and I doubt Nvidia negotiated a bad price, so I'll say it's around the $17,000 mark. I cannot see it getting close to the $20,000 of an actually different node, i.e. N3. Anything more than $18,000 and I'd be having strong doubts.

My thinking was that a custom anything will cost extra over whatever is standard, but I take your point that it shouldn't be so much more that it catches up with the bigger jump to the next-gen standard offering. For our estimates, close enough.

 

18 minutes ago, leadeater said:

There are AMD GPU dies using TSMC N7 with lower MTr/mm2 than Nvidia Samsung 8LPU dies.

I've not kept up to date on Samsung's offerings since they're rarely talked about in the PC space. However, Intel has taught me not to look at transistor density alone as an indicator of performance. I think the best example was the Intel 14nm updates from Skylake through Coffee Lake: with each generation they increased clocks by reducing density. Similarly, AMD's "c" cores are much denser than the standard cores, but the c cores will be lower performing, in part from running at lower clocks due to those design choices.

 

18 minutes ago, leadeater said:

Historic pricing doesn't really matter. What it would cost now to make an RTX 4060 Ti should not be more than an RTX 3060 Ti. We can wiggle around a bit on costing etc., but a full $100 USD increase on a product this low in cost is greater than the general allowance for that.

I'm guessing the 3060 Tis on sale now were made some time ago, or even if they were made recently, they were built from parts priced earlier. I'll take your previous point that the die costs might have gone up as much as the area went down, leaving the cost differentials elsewhere. Short of a costed BOM breakdown we might never know how much gross margin there is to compare.
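
For a rough sense of scale, a minimal per-die silicon cost sketch using the ~$17,000/wafer ballpark from above; the die areas and the simple area-only dies-per-wafer formula are my assumptions, and it ignores yield, test, packaging, memory and margins:

```python
import math

WAFER_PRICE = 17_000  # USD, the ballpark 4N wafer figure discussed above
WAFER_D = 300         # mm, standard wafer diameter

def dies_per_wafer(die_area_mm2: float) -> int:
    """Common area-only approximation for candidate dies per 300 mm wafer."""
    return int(math.pi * (WAFER_D / 2) ** 2 / die_area_mm2
               - math.pi * WAFER_D / math.sqrt(2 * die_area_mm2))

for name, area in {"AD106 (~190 mm^2)": 190, "AD103 (~379 mm^2)": 379}.items():
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} dies/wafer, ~${WAFER_PRICE / n:.0f} raw silicon per die")
```

Under those assumptions the bare AD106 silicon lands somewhere around $50-60 before yield losses, which suggests the die itself is only one piece of the cost picture being argued over.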

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


14 minutes ago, porina said:

Which part? 4N references are varied, but here's one from a more credible site: https://www.tomshardware.com/news/nvidia-details-grace-hopper-cpu-superchip-design-144-cores-on-4n-tsmc-process

 

From lesser-known sources I saw a claim that 4N is a derivative of N5, not N4.

Nvidia is using a custom TSMC "Nvidia 4N" node; both companies officially state so. It's a TSMC N4 variation like N4P/N4X, all three sharing an N5 technology basis. What Nvidia's customizations are isn't really known, but the same is true for N4P/N4X anyway.

 

Even Turing/the RTX 20 series used a custom node, TSMC 12 nm FinFET NVIDIA (FFN).

 

The GTX 10 series and GTX 16 series did not use custom nodes as far as I know, but I could be wrong.

 

Also, realistically, node customizations are just fab machine configurations plus verification/product QA for suitability and benefits. It's not like "new machines" are required or the like. You do have to have dedicated fab runs, but as far as I know that's standard practice regardless.


15 minutes ago, porina said:

This is what I was getting at. The low cost dGPU space could use higher performing "APUs" to fill in that gap.

 

A product whose performance was designed around a 256-bit memory bus gets put on 128-bit. Do I really need to say why that will suck?

 

To an extent. Apple and consoles get away with it as they have a much wider bus. It isn't going to end well on 128-bit DDR5.

Why can't there be 256 Bit DDR5 APUs on AM5?

 

Genuinely curious.

15 minutes ago, porina said:

Don't confuse BOM costs with retail costs. You do not go into a restaurant and expect to pay only for the ingredients.

It probably costs Nvidia $20 in design complexity to add the extra 8GB of VRAM. Nvidia could sell the GPU for $479. Not that the value would be much better, but ultimately Nvidia's asking a lot more than they need to for both 4060 Ti variants.
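
As an illustration of the BOM-versus-retail gap being argued here, a minimal sketch; the per-gigabyte GDDR6 price is an assumed placeholder and the $20 design-overhead figure is the guess above, not known Nvidia costs:

```python
# Hypothetical numbers to illustrate the BOM vs retail argument above.
ASSUMED_GDDR6_PER_GB = 3.50    # USD/GB, assumed memory price (placeholder)
EXTRA_VRAM_GB = 8              # 16GB card vs 8GB card
ASSUMED_DESIGN_OVERHEAD = 20   # USD, the figure guessed above
RETAIL_DELTA = 100             # USD, 4060 Ti 16GB vs 8GB MSRP gap

bom_delta = ASSUMED_GDDR6_PER_GB * EXTRA_VRAM_GB + ASSUMED_DESIGN_OVERHEAD
print(f"Assumed extra BOM:  ~${bom_delta:.0f}")
print(f"Retail price delta:  ${RETAIL_DELTA}")
print(f"Unexplained gap:    ~${RETAIL_DELTA - bom_delta:.0f} (margins, AIB/retail cut, etc.)")
```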

15 minutes ago, porina said:

My estimates on silicon cost put Nvidia and AMD quite close. For example, I estimate Navi 31 + 6x MCD costs about the same as AD103, and they perform similarly if you limit the comparison to raster only.

Literally impossible. AMD would need to pay severely inflated wafer costs for them to be even remotely similar.

 

The 6 MCDs are on TSMC N6, which is much cheaper, and those dies are much smaller; the N31 die is the only part on N5.

15 minutes ago, porina said:

Dies around AD106's size are small enough that yield isn't a major factor. Basically, monolithic still has legs, and will do for a while yet. AMD has only just started exploring chiplet GPUs this gen, with one design. It may make more sense as we go forward, but it isn't a must-have right now.

No, but it means Nvidia's paying 2-3x more per GPU than AMD.

15 minutes ago, porina said:

AMD disagrees with you given there's RT on two gens as well as the consoles.

Nvidia's implementation uses a significant amount of die space that wouldn't need to be spent if they had a more area-efficient design.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


18 minutes ago, porina said:

I've not kept up to date on Samsung's offerings since they're rarely talked about in the PC space. However, Intel has taught me not to look at transistor density alone as an indicator of performance. I think the best example was the Intel 14nm updates from Skylake through Coffee Lake: with each generation they increased clocks by reducing density. Similarly, AMD's "c" cores are much denser than the standard cores, but the c cores will be lower performing, in part from running at lower clocks due to those design choices.

Same, it's just interesting to see such drastically different achieved transistor densities based on the actual implemented die and how much is logic area versus cache or I/O etc. It's quite a wild difference between the smaller Nvidia 8LPU die and the other larger ones.

 

You have Samsung marketing for 8LPP/LPU saying you can get 61 MTr/mm2, but actual products can be in the mid-45s to mid-55s, well off the "up to" 61.

 

Node density figures, as far as I know, are based on logic transistors, and quoted generational gains are for logic only. Non-logic transistors may in fact be fabricated exactly the same as on the prior node, so I can see why Nvidia wants to cut down the memory bus width: it may be the part that benefits least from node shrinks, while cache achieves better gains, though still less than logic. To hit 190mm2, the only way to get that low may have been a 128-bit bus, and a 192-bit bus could be a lot larger than we realize. I don't feel like gathering a bunch of die shots, calculating the percentage of die used for the memory bus, and trying to figure out what 192-bit would have been in total.

I do actually agree with Nvidia's L2-cache-heavy approach; maybe some dies are a bit small, but the idea is sound. If the GPU is getting fed the data it needs fast enough, I'm not going to argue about how that should be done, only speak to the benefits and drawbacks of each approach. Insisting on a particular bus width doesn't, I think, achieve much or allow for technology progression.
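
To put some numbers on the achieved-density point, a small sketch; the transistor counts and die areas are approximate published figures, and achieved density alone says nothing about the logic/SRAM/analog split:

```python
# Achieved density = total transistors / die area, in MTr/mm^2.
# Approximate published figures; treat as ballpark, not authoritative.
dies = {
    "GA106 (Samsung 8N)": (12.0e9, 276),   # (transistors, die area mm^2)
    "GA102 (Samsung 8N)": (28.3e9, 628),
    "Navi 22 (TSMC N7)":  (17.2e9, 335),
    "AD106 (TSMC 4N)":    (22.9e9, 190),
}

for name, (xtors, area) in dies.items():
    print(f"{name:<20} ~{xtors / 1e6 / area:.0f} MTr/mm^2")
```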


14 minutes ago, AlTech said:

Why can't there be 256 Bit DDR5 APUs on AM5?

Boards are wired 1DPC/2DPC at 128-bit; you can't get 256-bit wide with 4 DIMM slots as they are only wired 2DPC. They don't go to dedicated pins on the CPU for a separate memory channel, and never can, because the board traces don't allow for it.

 

It's the inverse of the issue with X299 supporting two different foundation CPU architectures, where if you put in a Kaby Lake-X CPU half the DIMM slots went nowhere.

 

The AM5 socket and pin layout is based on a 128-bit memory controller, so you also couldn't make a 4-slot 1DPC (i.e. 256-bit) AM5 board either.


29 minutes ago, leadeater said:

Boards are wired 1DPC/2DPC at 128-bit; you can't get 256-bit wide with 4 DIMM slots as they are only wired 2DPC. They don't go to dedicated pins on the CPU for a separate memory channel, and never can, because the board traces don't allow for it.

 

It's the inverse of the issue with X299 supporting two different foundation CPU architectures, where if you put in a Kaby Lake-X CPU half the DIMM slots went nowhere.

 

The AM5 socket and pin layout is based on a 128-bit memory controller, so you also couldn't make a 4-slot 1DPC (i.e. 256-bit) AM5 board either.

I have a feeling AMD is done putting the monolithic APU dies we saw in the Ryzen 3000 and 5000 series on socketed motherboards, given that Ryzen 7000 has an iGPU anyway. I'm still hopeful, though; I just don't see a market for it on AM5. Yes, the performance of the iGPU isn't nearly as good, but I just don't think the market exists like it used to for the G series, as much as I'd love to buy one.

 

Regarding the RTX 4060 Ti 16GB, it's amusing to think that it's going to try to compete with the RX 6800.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


It's the same problem as with game pre-ordering:

 

People buying pricey items or services without having the patience to just wait.

 

Waiting has huge benefits: you can spend your money more wisely and you can avoid day-one bugs.

 

The tiniest bit of patience can help you get burned less often. And you don't even have to wait long - just a day or two.


6 hours ago, AlTech said:

Below $200 GPUs are dead lol besides a few cheap A750 cards.

 

AMD isn't interested in making new GPUs below $200 again.

 

The used market, old stock of previous gen, and future APUs like Strix Halo exist for that.

Doesn’t mean that I won’t let the duo drag my expectations to that point, without the kicking and screaming. 😝

My eyes see the past…

My camera lens sees the present…


What a bad take. Any good review outlet would just buy their own hardware and test it themselves. We don't have to wait for "normies" to purchase them.

 

I also don't see a huge issue, as we'll likely see the same performance as the 4060 Ti. It's not like it's a different die; it just has more VRAM.

CPU: Ryzen 9 5900 Cooler: EVGA CLC280 Motherboard: Gigabyte B550i Pro AX RAM: Kingston Hyper X 32GB 3200mhz

Storage: WD 750 SE 500GB, WD 730 SE 1TB GPU: EVGA RTX 3070 Ti PSU: Corsair SF750 Case: Streacom DA2

Monitor: LG 27GL83B Mouse: Razer Basilisk V2 Keyboard: G.Skill KM780 Cherry MX Red Speakers: Mackie CR5BT

 

MiniPC - Sold for $100 Profit

Spoiler

CPU: Intel i3 4160 Cooler: Integrated Motherboard: Integrated

RAM: G.Skill RipJaws 16GB DDR3 Storage: Transcend MSA370 128GB GPU: Intel 4400 Graphics

PSU: Integrated Case: Shuttle XPC Slim

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

Budget Rig 1 - Sold For $750 Profit

Spoiler

CPU: Intel i5 7600k Cooler: CryOrig H7 Motherboard: MSI Z270 M5

RAM: Crucial LPX 16GB DDR4 Storage: Intel S3510 800GB GPU: Nvidia GTX 980

PSU: Corsair CX650M Case: EVGA DG73

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

OG Gaming Rig - Gone

Spoiler

 

CPU: Intel i5 4690k Cooler: Corsair H100i V2 Motherboard: MSI Z97i AC ITX

RAM: Crucial Ballistix 16GB DDR3 Storage: Kingston Fury 240GB GPU: Asus Strix GTX 970

PSU: Thermaltake TR2 Case: Phanteks Enthoo Evolv ITX

Monitor: Dell P2214H x2 Mouse: Logitech MX Master Keyboard: G.Skill KM780 Cherry MX Red

 

 


1 hour ago, AlTech said:

Why can't there be 256 Bit DDR5 APUs on AM5?

You probably saw Leadeater's longer response by now, but in short, the socket is designed with a 128-bit memory interface. I don't know if there are enough reserved pins to allow it; maybe they could if they defined an AM5+, for example, but I can't see it happening, as much as I would love that. RAM bandwidth is a performance-crippling area of consumer-tier CPUs.

 

1 hour ago, AlTech said:

Literally impossible. AMD would need to pay severely inflated wafer costs for them to be even remotely similar.

 

The 6 MCDs are on TSMC N6, which is much cheaper, and those dies are much smaller; the N31 die is the only part on N5.

We don't know exact costs, and I went with the best figures I could find, similar to those posted by Leadeater earlier. One big unknown is how much 4N is costing Nvidia, and it is likely more than the standard N5 rate. If AMD gets a discount on top of that, the difference would be even bigger. But I don't have any way to quantify that, so my best approximation is that they're the same.

 

I did estimate previously that if AMD had made Navi 31 monolithic, it would cost about 32% more than the chiplet arrangement. This assumes a reduction in MCD-equivalent die area from the process shrink, and lower yield from a bigger overall die. It does not consider the increased manufacturing costs from having to handle chiplets.
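
For anyone curious what that kind of estimate looks like mechanically, here is a minimal sketch assuming a simple Poisson yield model, made-up wafer prices and defect density, approximate public die areas, and an assumed shrink factor for moving the MCD logic onto N5:

```python
import math

def dies_per_wafer(area_mm2: float, wafer_d: int = 300) -> float:
    """Area-only approximation of candidate dies on a wafer."""
    return (math.pi * (wafer_d / 2) ** 2 / area_mm2
            - math.pi * wafer_d / math.sqrt(2 * area_mm2))

def cost_per_good_die(area_mm2: float, wafer_price: float, d0: float = 0.0005) -> float:
    """Cost per yielded die with a simple Poisson yield model, Y = exp(-D0 * A)."""
    return wafer_price / (dies_per_wafer(area_mm2) * math.exp(-d0 * area_mm2))

# Assumed wafer prices (USD) and approximate public die areas (mm^2).
N5_WAFER, N6_WAFER = 17_000, 10_000
GCD_AREA, MCD_AREA, MCD_COUNT = 306, 37.5, 6

chiplet = (cost_per_good_die(GCD_AREA, N5_WAFER)
           + MCD_COUNT * cost_per_good_die(MCD_AREA, N6_WAFER))

# Hypothetical monolithic Navi 31: GCD plus the MCD area shrunk ~1.35x onto N5.
mono_area = GCD_AREA + MCD_COUNT * MCD_AREA / 1.35
monolithic = cost_per_good_die(mono_area, N5_WAFER)

print(f"Chiplet silicon:    ~${chiplet:.0f}")
print(f"Monolithic silicon: ~${monolithic:.0f}  (+{(monolithic / chiplet - 1) * 100:.0f}%)")
```

With these particular made-up inputs the gap happens to land near the figure above; change the defect density or wafer prices and the number moves, but the direction doesn't.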

 

1 hour ago, AlTech said:

Nvidia's implementation uses a significant amount of die space that wouldn't need to be spent if they had a more area-efficient design.

If by "area efficient" you mean lower performing like AMD's implementation, no thanks.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


8 minutes ago, dizmo said:

I also don't see a huge issue, as we'll likely see the same performance as the 4060 Ti. It's not like it's a different die; it just has more VRAM.

Reviews should focus on the difference. What does the extra VRAM get you? They'll have to find scenarios where higher settings can be used with more VRAM, and then show what that performance is.

 

I'd be especially interested in seeing side-by-side comparisons: on one side the "best" settings possible on 8GB, and on the other what you could do with more, and at what performance cost. Some talk like 8GB will look like crap, and while there may be differences, I'm not sure they're nearly as big as they're made out to be.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


5 hours ago, AlTech said:

The price is too high. The 4060Ti 16GB should have been $399 at most.

 

The 4060Ti 8GB should either have not come out or been $299 at most.

Because we can't have good GPU launches in 2023?

*FTFY

 

This card being overpriced doesn't delegitimize their concerns that 8GB isn't enough for playing at resolutions above 1080p.

Now you're just putting words in people's mouths.

It is Nvidia who made this a GPU with 8GB and 16GB VRAM options. 12GB instead of both would have been sufficient for most potential 4060 Ti buyers if it was $399.

AMD doesn't have anything there at the moment besides the RX 6400, nor do they intend to release anything new there.

The 7600 exists to stop people complaining that there's no new GPUs under $300.

 

If you want a GPU for $200 or less, AMD would say get a previous gen card or get an APU. They're not interested in supplying sub $200 GPUs anymore.

They probably would, though the APU would need to make sense in AMD's Ryzen lineup.

 

I could see a 6C/12T $250 to $300 R5 APU such as an 8600G using Strix Point and a $350 to $400 8C/16T R7 8700G APU to go along with it.

 

Phoenix APUs exist on laptops and AMD is working on Strix Point and Strix Halo. Yes they'll cost more but they'll also deliver more perf.

I doubt there'll be a massive cache. Strix Point is meant to have a 128-bit DDR5 bus and Strix Halo is meant to have a 256-bit DDR5 bus.

It could come to desktop as an R9 8900G if AMD wanted it to. Idk if they want to do that or not tho.

Honestly I feel like if you want to buy a GPU for $200 or less then you should probably be looking at the used market anyway.


This just makes me happier that I bought a 6750 XT. Current gen is just bad.

CPU - Ryzen 7 3700X | RAM - 64 GB DDR4 3200MHz | GPU - Nvidia GTX 1660 ti | MOBO -  MSI B550 Gaming Plus


23 hours ago, AlTech said:

Welcome to the future, where you don't know how good or bad a product is until some poor normie buys one and complains about how terrible the value proposition is.

and then returns it and gets his money back? Oh no, what a terrible world we live in... 

 

Or you know... Just wait a little while?

 

 

 

 

 


We already know what the reviews will be: 3060 6GB vs 12GB, but scaled up to 4060 Ti stats…

Mix that with about 9 minutes of "Nvidia bad" in a 10-minute video and the scripts are already written.

I'm not actually trying to be as grumpy as it seems.

I will find your mentions of Ikea or Gnome and I will /s post. 

Project Hot Box

CPU 13900k, Motherboard Gigabyte Aorus Elite AX, RAM CORSAIR Vengeance 4x16gb 5200 MHZ, GPU Zotac RTX 4090 Trinity OC, Case Fractal Pop Air XL, Storage Sabrent Rocket Q4 2tbCORSAIR Force Series MP510 1920GB NVMe, CORSAIR FORCE Series MP510 960GB NVMe, PSU CORSAIR HX1000i, Cooling Corsair XC8 CPU block, Bykski GPU block, 360mm and 280mm radiator, Displays Odyssey G9, LG 34UC98-W 34-Inch,Keyboard Mountain Everest Max, Mouse Mountain Makalu 67, Sound AT2035, Massdrop 6xx headphones, Go XLR 

Oppbevaring

CPU i9-9900k, Motherboard, ASUS Rog Maximus Code XI, RAM, 48GB Corsair Vengeance LPX 32GB 3200 mhz (2x16)+(2x8) GPUs Asus ROG Strix 2070 8gb, PNY 1080, Nvidia 1080, Case Mining Frame, 2x Storage Samsung 860 Evo 500 GB, PSU Corsair RM1000x and RM850x, Cooling Asus Rog Ryuo 240 with Noctua NF-12 fans

 

Why is the 5800x so hot?

 

 


2 hours ago, Senzelian said:

and then returns it and gets his money back? Oh no, what a terrible world we live in... 

 

Or you know... Just wait a little while?

 

Or worse, the normie doesn't realize he/she has bought a bad value product and keeps the product.

 

Though this launch in particular has had the added bonus of helpfully identifying people who accept money from Nvidia, and also people who worship at the temple of Nvidia and pray to Leather Jacket Man. To be clear: not saying you (Senzelian) do.

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023),


3 hours ago, Senzelian said:

and then returns it and gets his money back? Oh no, what a terrible world we live in... 

 

Or you know... Just wait a little while?

 

The normie consumer doesn't wait; the normie will buy anything with green on the box. I think the 40 series is good evidence of that: terrible for the price, yet people still buy cards like the 4060 8GB over the 6750 XT or the 4070 Ti over a 7900 XT.


Play stupid games, win stupid prizes... 
Anyone buying these without waiting for proper reviews, failing to see the very obvious massive red flag, deserves all that's coming to them.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


7 hours ago, porina said:

I over-trimmed my reply somewhere, but if you're going for a high-end APU, you can possibly afford to do some more expensive things along with it. Someone will have to work out the cost-benefit on those options. I wouldn't totally rule them out, but at the same time I'm not saying they will happen either.

I don't think a "high-end" APU really makes any sense. Maybe if it could beat the RTX/RX xx50/x500 (the ~200€) class cards within that budget, but at 200€ you are looking at probably more like an R7 or even R9 priced CPU, better RAM (at least a 2-stick kit, and a bit faster), and then you need a motherboard that can support those, so we're probably talking about a B-series board. Someone buying an xx50/x500 series GPU would probably also be looking at the more economical parts, so let's say an R5 CPU, 1x16GB RAM and an A-series board. In Finnish prices an A620 board is around 130-150€, an R5 7600 is 250€ and a single 16GB stick of Kingston DDR5 is around 70€ (4800-6000 MHz changes the price within 10€, so it doesn't really matter). Upgrading to a hypothetical "Ryzen 8500 APU" would be CPU +100€ (R7 7700, 350€), motherboard +100-150€ (B650 boards go from 150-330€, the majority being around 230-300€), and 2x8GB +10-20€, or about +50-70€ if going for 2x16GB. So we would be around the same price range, a bit over if we counted in better cooling and possibly a better PSU.
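
Summing the figures above as a quick sanity check (a minimal sketch; the prices are the Finnish figures quoted in this post, with midpoints taken where a range was given, and the "APU build" parts are the hypothetical upgrades described):

```python
# Budget dGPU-class build vs hypothetical APU build, using the Finnish
# prices quoted above (EUR); ranges collapsed to rough midpoints.
budget_build = {
    "A620 motherboard": 140,   # 130-150
    "R5 7600": 250,
    "1x16GB DDR5": 70,
}

apu_build = {
    "B650 motherboard": 265,   # most boards ~230-300
    "R7 7700 class CPU": 350,
    "2x8GB DDR5": 85,          # ~+10-20 over a single stick
}

print(f"Budget platform: ~{sum(budget_build.values())}€")
print(f"APU platform:    ~{sum(apu_build.values())}€")
print(f"Difference:      ~{sum(apu_build.values()) - sum(budget_build.values())}€ "
      "(before the ~200€ saved by skipping the dGPU)")
```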

 

But that is quite a positive take. More likely we are going to be looking at completely reworked motherboards that could give more bandwidth, better power delivery, probably a bigger socket area, and probably incompatibility with the rest of the Ryzen CPUs (versus the current situation where the same board can be used with or without an APU: if it's AM5, it works), which would raise prices because of lower volumes. I would say we are looking at a setup 100-300€ more expensive than the xx50/x500 dGPU-based setup. It would generally have more performance, but that depends a lot on how much the far more powerful iGPU is going to eat into the bandwidth and other resources of the CPU.

We could be talking about an APU on a Threadripper socket (WRX8/EPYC/whatever it currently is), unless AMD can pull off some real development miracle, and at that point we would be in the realm of 600-1000€ for the motherboard, with the APU already getting a 200€ price jump for the far more powerful iGPU.

 

Consoles are consoles; there are so many differences there, from the way the OS behaves to the hardware wiring to the optimized APIs and expensive engine optimizations, that comparing what consoles get by with to what a PC could do is pretty useless. Maybe if we had a gaming laptop running some unique Linux distro completely stripped of everything except what is needed to run Proton and games, with DirectStorage and whatever other special technologies ported to it, we could make an apples-to-apples comparison. But as long as Windows cannot "hibernate" in the background like the Xbox OS does when a game is running (it basically shuts down everything not specifically needed, keeps only the small amount of RAM it requires, and releases storage access to the game, etc.) and we cannot force everything non-essential to go away when a game starts, it's already apples to oranges.

 

Generally I believe the whole RTX 4000 series is a waste of sand. Nothing really improves if you exclude DLSS 3.0 and REACT, which only work in the games made to support them, which isn't many in the grand scale of things, and they're not that impressive as technologies to begin with: artificial frames are still only artificial frames. Seriously, the 16GB of VRAM doesn't do anything if the GPU is in the same ballpark as the earlier one, and with the RTX 4000 series the mid-to-low end really is that: just a marginal upgrade over RTX 3000 to justify a higher price, then marketed with artificial frames and AI-predicted player inputs to hide the god-awful latency those artificial frames create.

That Nvidia doesn't seed review units tells you that even they know they are just milking the cow, trying to keep some marketing hype going to get the least savvy people with RTX 3000 cards to even consider upgrading to the RTX 4000 generation. If you ask me, I haven't seen any reason to upgrade from an RTX 3060 12GB to anything less than the RTX 4080 Poor Edition (RTX 4070). 500-600€ for an RTX 4060 Ti with 16GB? If you are fine with 60 FPS gaming, don't bother; go with an RTX 3060 12GB for about 350€. You get the extra memory to store bigger textures, you get better memory bandwidth to fill that extra memory, and you don't lose that much real performance (without artificial frames).

 

I would really hope AMD and Intel would pull the rug out from under Nvidia and remind them that there's still competition. But I am a negative realist and I don't think they will. AMD could bring something out and then it sells "out of stock" and no one cares; Intel brings something that stays available mostly because there are that many options in that performance-price range. But thumbs up and let's hope for the best.


8 hours ago, Thaldor said:

I don't think a "high-end" APU really makes any sense. Maybe if it could beat the RTX/RX xx50/x500 (the ~200€) class cards within that budget, but at 200€ you are looking at probably more like an R7 or even R9 priced CPU, better RAM (at least a 2-stick kit, and a bit faster), and then you need a motherboard that can support those, so we're probably talking about a B-series board.

It looks like AMD will be offering a 3050-class APU with Strix Halo. But from what I've seen of the rumours it can't be socketed, so it's primarily targeted at laptops. My thinking was that a higher-end APU could still offer value if the effective CPU+GPU cost is similar to or less than what a separate CPU and dGPU would cost, if they were available. CPUs are fine, but as many have said, the lower-priced GPUs don't really exist any more.

 

8 hours ago, Thaldor said:

But that is quite a positive take. More likely we are going to be looking at completely reworked motherboards that could give more bandwidth, better power delivery, probably a bigger socket area, and probably incompatibility with the rest of the Ryzen CPUs (versus the current situation where the same board can be used with or without an APU: if it's AM5, it works), which would raise prices because of lower volumes.

This is the problem that is difficult to solve. If you stick with AM5, then you're limited to a 128-bit RAM bus. On this path, I'm thinking the two possible solutions are to either add cache or add a (small) pool of dedicated VRAM. I guess the two are not so different in a way.

 

If AMD made a new socket, it would fragment and complicate the market, so it feels highly unlikely for that reason alone, especially given how young AM5 still is.

 

8 hours ago, Thaldor said:

Consoles are consoles; there are so many differences there, from the way the OS behaves to the hardware wiring to the optimized APIs and expensive engine optimizations, that comparing what consoles get by with to what a PC could do is pretty useless.

I'm not thinking of going that far. There's nothing stopping existing software from running, just without some of the efficiency gains consoles have from their fixed systems. 16GB of system RAM is probably going to be a pain point, requiring the move to 32GB, which goes against the value side a bit. But IMO if you go DDR5 then the entry point is 32GB anyway, as 8GB modules (two for 16GB) really suck in performance.

 

8 hours ago, Thaldor said:

Generally I believe the whole RTX 4000 series is a waste of sand.

I feel the 4070 is in a sweet spot, hence I bought one. It is the 3080 I couldn't buy when Ampere was current, but cheaper and with other minor benefits. A win all around. Would I like to have paid even less for it? Sure, but like a lot of things I'd like in life, that's wishful thinking.

 

8 hours ago, Thaldor said:

I haven't seen any reason to upgrade from an RTX 3060 12GB to anything less than the RTX 4080 Poor Edition (RTX 4070).

You don't have to upgrade every generation. Do it when you need to.

 

8 hours ago, Thaldor said:

I would really hope AMD and Intel would pull the rug out from under Nvidia and remind them that there's still competition. But I am a negative realist and I don't think they will.

I've practically given up hope on AMD driving GPUs forward, as they seem content to lag Nvidia, and when they eventually implement a feature they do it worse. Intel is potentially more disruptive, but it may be another couple of generations before they have a measurable impact.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


15 hours ago, Blademaster91 said:

The normie consumer doesn't wait; the normie will buy anything with green on the box. I think the 40 series is good evidence of that: terrible for the price, yet people still buy cards like the 4060 8GB over the 6750 XT or the 4070 Ti over a 7900 XT.

 

16 hours ago, AlTech said:

Or worse, the normie doesn't realize he/she has bought a bad value product and keeps the product.

 

Their problem. If someone just spends 400 bucks blindly without thinking then that person can't complain later. Why protect someone that doesn't give a shit?

 

 

 

 


I just realised one thing while reading this thread... (a bit off topic)

It wasn't so long ago that NVIDIA quietly released the RTX 3060 8GB. Perhaps in the hope that reviewers will compare the new 4060 to that one?

But it was so spectacularly bad that everyone seems to have forgotten all about it.

 

There was even a somewhat heated discussion about whether it should even be called a 3060, as it was 20-30% slower than the 12GB version due to its cut-down memory bus.

 

Not sure what I really wanted to say with this, but I find it interesting. It probably was a test vehicle to see if there would be pushback, but as it was a quiet release there was almost none.

Seems like they used that as justification to upsell us on junk while still trying to push the envelope even further with the 4060 Ti.

The 4080 12GB was too much, and it backfired spectacularly. It took a whopping name change and a $100 discount to quiet the masses down.

