AMD Announces 7900XTX $999, 7900XT $899, Arriving December 13

Mister Woof
19 minutes ago, LAwLz said:

"Marketing exists to sell products, not to give an accurate representation of the product". That isn't a conspiracy theory.

I mean, just look at the graphs. Those benchmarks seem to be quite a bit above what I have seen in other reviews, but let's say those benchmarks are valid and not cherry-picked with specific settings to give the biggest possible difference.

The results are: 1.5, 1.5, 1.5, 1.5, 1.6, 1.7.

 

The median increase is 1.5 and the mean is 1.55.
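
For what it's worth, here is a quick sanity check of those figures (just a throwaway sketch, with the six "up to" values typed in by hand):

```python
from statistics import mean, median

# The six per-game uplift figures from AMD's slide, as listed above.
uplifts = [1.5, 1.5, 1.5, 1.5, 1.6, 1.7]

print(median(uplifts))          # 1.5
print(round(mean(uplifts), 2))  # 1.55
```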

Why do you think the marketing team decided to highlight the 1.7x instead of the more representative 1.5x or 1.55x? Because their job is to make the product look as good as possible. By writing "1.7x" in big letters (far bigger than the "up to") and by highlighting the biggest difference they were able to achieve, rather than the mean or median, they are trying to give the impression of bigger gains than people will probably experience.

This is not a conspiracy theory. It is literally the job of a marketing department. Their job is not to give an accurate depiction of a product in order to inform consumers in the best and most unbiased way possible. Their job is essentially the exact opposite.

 

I mean, come on... This is not limited to AMD either. All companies do this. Some might stick closer to the truth than others, but they are all more or less guilty of it. The marketing department isn't a friend that only wants to inform you in the best way possible but, because of circumstances out of their control, ended up telling some small lies or half-truths. They do it intentionally.

My local pizza place that advertises "best pizza in town" is not advertising that because it wants to inform people that, according to its rigorous research into pizza quality, its pizza is empirically the best, and it is just telling everyone out of the goodness of its heart. It advertises that because it hopes it will bring in more customers, not because it is necessarily true. Same with AMD, or Nvidia, or Intel, or whichever other company you can think of.

 

 

AMD probably did achieve that 1.7x increase in performance. But I wouldn't be surprised if they achieved it using very specific and "non-standard" settings, on a weird setup where the previous card had issues, or in a very particular area of the game for a limited time. Maybe they found a place in the middle of the game where for ~2 minutes the results were actually 1.7x, so they chose to highlight that, while in all other areas of the game the gains might have been 40%. All of those things are real possibilities, and something companies might do.

 

 

Here is a quote from Gamers Nexus regarding AMD's benchmarks for the 1800X:

 

AMD, and other companies, are not above doing these types of things to inflate their numbers. They do these things deliberately. They are not oversights. The goal is to sell more products by making people believe the upgrades are bigger than they really are.

 

 

 

This will possibly be my last response to this line of conversation, because I get the feeling this conversation won't go anywhere.

I used 1.7x for only one reason: it's the number tied to the one specific game we looked at, Cyberpunk. Looking at the skybox between the two GPUs will only LOWER the ratio.

"Up to 1.7x" IS more representative of their test suite than 1.5x or 1.55x,
because it's not "up to 1.5x"; in some situations it went to 1.7x.
I'm not looking at the mean or median in my analysis of Cyberpunk. I'm not arguing that a mean of 70% might exist. I'm saying that missing 1.7x on Cyberpunk means something went wrong between then and now.


Again, "up to" means caveats. The caveats being: running CYBERPUNK on a system that bottlenecks as little as possible, at 4K and whatever settings they used, which was likely ultra, because again you are trying to show as big a difference as possible. The lower the settings, the lower the performance ratio, because you are offloading the load to the CPU, like you said. Your example of finding two minutes inside the whole benchmark at 70% is not how they benchmarked the card and you know it; the engineers would not stand for that.

 

The skybox in the CPU benchmark is a whole different story: looking at the skybox when comparing two CPUs offloads the GPU and puts the load on the CPU, exaggerating the differences and increasing the ratio.
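
A crude way to picture that bottleneck argument, with made-up frame times just to show the shape of it (each frame effectively waits on whichever of the CPU or GPU is slower):

```python
# Toy model: frame rate is limited by the slower of CPU and GPU work per frame.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000 / max(cpu_ms, gpu_ms)

# GPU-bound scene (4K, ultra): a 1.7x faster GPU shows up almost fully.
print(fps(cpu_ms=5, gpu_ms=20), fps(cpu_ms=5, gpu_ms=20 / 1.7))  # ~50 vs ~85 fps

# Staring at the skybox: GPU load collapses, both runs hit the CPU wall,
# so GPU differences vanish and only CPU differences remain visible.
print(fps(cpu_ms=5, gpu_ms=3), fps(cpu_ms=5, gpu_ms=3 / 1.7))    # 200 vs 200 fps
```

The same toy model run the other way explains the CPU-benchmark skybox trick: with the GPU nearly idle, any gap in CPU time per frame is exactly the gap you see in fps.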
 


On 12/13/2022 at 9:29 AM, xAcid9 said:

Yesh. 400w+. 3.2ghz

[attached screenshots: oc-cyberpunk.png]

 

You know, it might be a good idea for next year.

A 7950XTX with driver improvements worth 5-10%, clocks of 3.1-3.2GHz, 400W, and a massive boost in cache, for $1200.
It would significantly beat the 4090 at the 4080's price point.

As far as I can tell, Nvidia has very few levers left to pull with their flagship. Yes, a 4090 Ti can come out with more of the die active, but at what wattage? At what yields? Perhaps if TSMC drops the cost and yields improve just a hair, they can drop the 4090 price to compete with this hypothetical.


Three days after launch, only a trickle of cards has reached the Dutch market; meanwhile, scalpers have put them on Amazon Marketplace at 4080 and 4090 prices.

RAM 32GB Corsair DDR4 3200MHz            MOTHERBOARD ASUS ROG Crosshair VIII Dark Hero
CPU Ryzen 9 5950X             GPU dual R9 290s        COOLING custom water loop using EKWB blocks
STORAGE Samsung 970 EVO Plus 2TB NVMe, Samsung 850 EVO 512GB, WD Red 1TB, Seagate 4TB and Seagate Exos X18 18TB

PSU Corsair AX1200i
MICROPHONE RODE NT1-A          HEADPHONES Massdrop & Sennheiser HD6xx
MIXER Inkel MX-1100   PERIPHERALS Corsair K95 (the OG 18 G-keys one) and a Corsair Scimitar


Seems like all the 7900 XTX did was make the 4080 sell out. The 4080 is the #1 selling card on Newegg now. Crazy.

CPU: i9-13900KS | Motherboard: Asus Z790 HERO | Graphics: ASUS TUF 4090 OC | RAM: G.Skill 7600 DDR5 | Screen: ASUS 48" OLED 138Hz


1 hour ago, Shzzit said:

Seems like all the 7900 XTX did was make the 4080 sell out. The 4080 is the #1 selling card on Newegg now. Crazy.

That boggles my mind.
Ignoring AMD missing the 1.5-1.7x mark, it still beats the 4080 at 85% of the price, overclocks well, and we know that some of the performance issues are driver-related.

Who in their right mind would buy a 4080 after that?


Something can be both a "disappointment" and still be the better buy.


2 minutes ago, starsmine said:

That boggles my mind.
Ignoring AMD missing the 1.5-1.7x mark, it still beats the 4080 at 85% of the price, overclocks well, and we know that some of the performance issues are driver-related.

Who in their right mind would buy a 4080 after that?


Something can be both a "disappointment" and still be the better buy.

Probably because there's no XTX in stock at either Amazon or Newegg right now.  🤔

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


39 minutes ago, starsmine said:

it still beats the 4080 at 85% of the price, overclocks well, and we know that some of the performance issues are driver-related.

Who in their right mind would buy a 4080 after that?

On average the XTX barely beats the 4080 in raster and significantly loses in RT. While not applicable for everyone, the software ecosystem around nvidia is just a lot better. I'm not looking to drop 4 figures on a GPU, but if I was considering the XTX, I would pay more for the 4080 based on what we know today. AMD is only better if you have limited expectations.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


5 minutes ago, porina said:

On average the XTX barely beats the 4080 in raster and significantly loses in RT. While not applicable for everyone, the software ecosystem around nvidia is just a lot better. I'm not looking to drop 4 figures on a GPU, but if I was considering the XTX, I would pay more for the 4080 based on what we know today. AMD is only better if you have limited expectations.

What part of their ecosystem is better now?

AMD has FSR, AMD has AV1, AMD has AMF for streaming. At the start of last gen, sure, but at the start of this gen AMD is on par.


3 minutes ago, starsmine said:

What part of their ecosystem is better now?

AMD has FSR, AMD has AV1, AMD has AMF for streaming. At the start of last gen, sure, but at the start of this gen AMD is on par.

If a game supports FSR 2, great, you can use it on nvidia too. If a game only supports DLSS, you're out of luck with AMD/Intel. I'd love to see games going forward support all of DLSS, FSR, XeSS, but many older ones only support DLSS, as I found out doing Arc testing recently.

 

On video encoding/streaming, I do agree it is looking better for AMD now. I believe they got support in a beta version of OBS, for example. Still, it is early days and they have to prove themselves, whereas nvidia has "just worked" for many years now.

 

While less important now, CUDA helps too, as opposed to whatever thing AMD is trying to make stick this time around.

 

For me the safe option is still to go nvidia to do it all. RDNA3 doesn't feel like the generation where I'd consider them for my main system. They still have much work to do, and I still wonder if Intel will overtake them in a couple of generations if they don't step up their GPU game.



5 minutes ago, porina said:

On video encoding/streaming, I do agree it is looking better for AMD now. I believe they got support in a beta version of OBS, for example. Still, it is early days and they have to prove themselves, whereas nvidia has "just worked" for many years now.

AMD is still quite a bit behind when it comes to H.264 and H.265, which are the most important codecs nowadays. Twitch doesn't support AV1, for example.

6 minutes ago, porina said:

While less important now, CUDA helps too, as opposed to whatever thing AMD is trying to make stick this time around.

They aren't even trying, their solution is shit, and I say that from personal experience.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


22 hours ago, starsmine said:

AMD has FSR, AMD has AV1, AMD has AMF for streaming. At the start of last gen, sure, but at the start of this gen AMD is on par.

FSR is not like DLSS or XeSS, but they each have their own pros and cons. FSR not being like the other two has its own downsides and benefits.

22 hours ago, porina said:

If a game supports FSR 2, great, you can use it on nvidia too. If a game only supports DLSS, you're out of luck with AMD/Intel. I'd love to see games going forward support all of DLSS, FSR, XeSS, but many older ones only support DLSS, as I found out doing Arc testing recently.

 

While less important now, CUDA helps too, as opposed to whatever thing AMD is trying to make stick this time around.

Intel's OneAPI just recently got support for other GPUs, I guess? Maybe.

There's also Project Streamline, if that is going to work out well... (a collaboration between Intel and Nvidia? maybe AMD too)

https://developer.nvidia.com/rtx/streamline

OneAPI

https://codeplay.com/portal/blogs/2022/12/16/bringing-nvidia-and-amd-support-to-oneapi.html

https://www.intel.com/content/www/us/en/developer/tools/oneapi/training/gpu-optimization-workflow.htm

Quote

Streamline is an open-sourced cross-IHV solution that simplifies integration of the latest NVIDIA and other independent hardware vendors’ super resolution technologies into applications and games. This framework allows developers to easily implement one single integration and enable multiple super-resolution technologies and other graphics effects supported by the hardware vendor.
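
As a rough illustration of that "one integration, multiple backends" idea (this is just a hypothetical Python sketch of the pattern, not Streamline's actual API, which is a C++ SDK; detect_gpu_vendor is a made-up helper):

```python
from abc import ABC, abstractmethod


def detect_gpu_vendor() -> str:
    """Hypothetical helper; a real engine would query the graphics API."""
    return "nvidia"


class Upscaler(ABC):
    """The single interface a game integrates once."""

    @abstractmethod
    def is_supported(self) -> bool: ...

    @abstractmethod
    def upscale(self, frame: bytes, target_res: tuple[int, int]) -> bytes: ...


class DLSSBackend(Upscaler):
    def is_supported(self) -> bool:
        return detect_gpu_vendor() == "nvidia"  # DLSS needs Nvidia RTX hardware

    def upscale(self, frame, target_res):
        return frame  # placeholder for the vendor call


class FSR2Backend(Upscaler):
    def is_supported(self) -> bool:
        return True  # FSR 2 is vendor-agnostic

    def upscale(self, frame, target_res):
        return frame  # placeholder for the vendor call


def pick_upscaler(backends: list[Upscaler]) -> Upscaler:
    # Ship every backend; at runtime use the first one the hardware supports.
    return next(b for b in backends if b.is_supported())


chosen = pick_upscaler([DLSSBackend(), FSR2Backend()])
print(type(chosen).__name__)
```

The appeal of a framework like that is that adding XeSS or a future FSR version becomes one more backend rather than another per-game integration.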


On 11/3/2022 at 10:20 PM, Shimmy Gummi said:

The midrange segment is unimportant to them. That sucks. It seems neither company wants to focus on that segment at all. Previous announcements included the RTX 3070 and Radeon RX 6800.

I don't think it's that they don't care, especially AMD, as it's their bread-and-butter segment just as it is for Nvidia, but Nvidia seems to think they're in a position not to care about anything... nah, I really think AMD wants to focus on the high end, because *that's* what people actually care about. It doesn't matter if they end up with a $200 "7300 XT"; people want to be in the "winner camp", and that's what influences their buying decisions (regardless of whether they can only afford a lowly low/mid-range card).

 

It's a huge reason why Nvidia has seen such huge success over the years: they basically always beat AMD to the punch in the high-end tier (which is, ironically, the segment with the lowest market share by far).

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


1 hour ago, Mark Kaine said:

I don't think it's that they don't care, especially AMD, as it's their bread-and-butter segment just as it is for Nvidia, but Nvidia seems to think they're in a position not to care about anything... nah, I really think AMD wants to focus on the high end, because *that's* what people actually care about. It doesn't matter if they end up with a $200 "7300 XT"; people want to be in the "winner camp", and that's what influences their buying decisions (regardless of whether they can only afford a lowly low/mid-range card).

I think the problem is that what we used to consider a mid-range GPU was more affordable than it is now. Compared to a few generations ago, the same money buys you a tier or two lower. This is leaving somewhat of a gap at the lower end for budget builders. I recall buying a 970 at the end of its marketing life when the 10 series came out. That was around £200 (compared to around £300 when launched). What does £200 get you new today? A 1650, which I think we can agree is way past its marketing life, but it is still sticking around. At least AMD's offering at that price point isn't so old, being the 6500 XT.

 

1 hour ago, Mark Kaine said:

It's a huge reason why Nvidia has seen such huge success over the years: they basically always beat AMD to the punch in the high-end tier (which is, ironically, the segment with the lowest market share by far).

Halo product marketing is a thing. It's practically Apple's reason for existing today. But it doesn't mean you can't be successful without capturing that halo position; you just have to work more smartly.

 

It did make me wonder, when was the last time AMD (ATI?) was ahead of nvidia? I found a chart at the following link:

https://www.3dcenter.org/news/die-grafikchip-und-grafikkarten-marktanteile-im-dritten-quartal-2022

 

Last time team red were ahead in unit shipments was 2005. What happened? The long-term trend has been a decline since then. They gained, relatively speaking, during the crypto booms, but outside of that it doesn't look like much is happening.

 

What that link did surprise me with is the claim that Intel had a unit volume 38% that of AMD in Q3. This is only units shipped that quarter, so AMD of course would have a far greater installed share. Also keep in mind that the western launch of Arc wasn't until Q4; units counted in Q3 would only be China and any other regions where they launched earlier. Maybe Arc sales are high because it is the new thing. Would it have staying power?

 

Q3 was a slow quarter, so don't take it as indicative of the longer term; we'll have to wait and see how that goes. Q4 will be a bigger quarter to look at, with the releases of both the 40 series and now RDNA3. It may be that GPU buyers stopped buying in Q3 given the launch of so many products in Q4.



54 minutes ago, porina said:

I think the problem is that what we used to consider a mid-range GPU was more affordable than it is now. Compared to a few generations ago [...]

I'm not really that versed in GPU "history" because I've always been a console kind of guy -- despite, or maybe because of, having relatives who were higher-ups at IBM and Apple respectively (they may have been called Macintosh back then 🤔)... but I did in recent years watch quite a lot of videos about the PC side and video game history, as well as nVIDIA GPUs specifically... in the long run they have always been expensive, from the get-go, to the point of being outrageously expensive; one of their early GPUs cost something like $999... and the follow-up with much better specs was $499, and so on... they never were cheap and their pricing never really made sense... maybe those 970 days you mentioned were actually more of an outlier (the 1080/Ti were also rather affordable, but that may have also been an exception, considering they were kind of back to their old pricing strategy right afterwards)

 

For the rest, I agree; it's basically what I already said, and I'm *very* familiar with marketing tactics, not least Apple's specifically ~

 

(I always preferred IBM, for the record...)

 

As for:

54 minutes ago, porina said:

Last time team red were ahead in unit shipments was 2005. What happened?

Well, I know what happened:

Spoiler

AMD sold off ATi, a grave mistake in hindsight,

but people don't want to hear it for some reason... 

 

ps: 

Spoiler

Also funny: the first couple of GPUs I bought were all ATi; nVIDIA wasn't even a consideration... that stopped immediately when ATi didn't exist anymore, and *apparently* I'm not alone in that decision making... hence, as I said, a grave mistake... and I'm absolutely not convinced AMD actually *wants* to be at the top, but strategies can change obviously, so who knows; at the moment it certainly looks like they're trying. The proof is in the pudding though, as the saying goes = )

 



6 hours ago, Quackers101 said:

Intel's OneAPI just recently got support for other GPUs, I guess? Maybe.

 

I wouldn't say that SYCL within OneAPI is exactly synonymous with GPU support. The article you linked just mentions SYCL, which isn't really used in anything currently, so no major framework or tools will be able to work on top of OneAPI with GPU acceleration.

If you want to look at yet another promising project for a unified GPU language, take a look at PoCL: http://portablecl.org/

 

2 hours ago, Mark Kaine said:

I don't think it's that they don't care, especially AMD, as it's their bread-and-butter segment just as it is for Nvidia, but Nvidia seems to think they're in a position not to care about anything...

IMO Nvidia really doesn't care. Why would they waste resources selling smaller chips at lower profit margins when they can just take the scraps of their DC products and sell them at reasonable margins to consumers? Methinks Nvidia will slowly stop selling consumer GPUs to focus on the high-end, DC and workstation markets.



1 minute ago, igormp said:

IMO Nvidia really doesn't care. Why would they waste resources selling smaller chips at lower profit margins when they can just take the scraps of their DC products and sell them at reasonable margins to consumers? Methinks Nvidia will slowly stop selling consumer GPUs to focus on the high-end, DC and workstation markets.

Yeah, I can't really make sense of Nvidia currently... so maybe you're right (it would perhaps also explain Intel's recent interest in dedicated gaming GPUs).



17 minutes ago, igormp said:

I wouldn't say that SYCL within OneAPI is exactly synonymous with GPU support. The article you linked just mentions SYCL, which isn't really used in anything currently, so no major framework or tools will be able to work on top of OneAPI with GPU acceleration.

If you want to look at yet another promising project for a unified GPU language

I was mostly focused on Project Streamline; the bit about Intel and their OneAPI was just recent news.

A unified language? More like unified pipelines and support for different tools, which is always fun.
Don't rule out future options: the more that choose to use Streamline, the more games with DLSS and the more marketing.


7 minutes ago, igormp said:

If you want to look at yet another promising project for a unified GPU language

xkcd #927, "Standards": https://xkcd.com/927/

From a user's perspective, not a developer's, only CUDA and OpenCL matter at all. Let's see if anything else can take over longer term.

 

7 minutes ago, igormp said:

IMO Nvidia really doesn't care. Why would they waste resources selling smaller chips at lower profit margins when they can just take the scraps of their DC products and sell them at reasonable margins to consumers? Methinks Nvidia will slowly stop selling consumer GPUs to focus on the high-end, DC and workstation markets.

Nvidia still gets a decent proportion of their revenue from gaming even if data centre has recently overtaken gaming for them. Let's see what the longer term trend is. Still seems unlikely to me for a market leader (by a BIG margin) to just walk away.

 

Haven't been able to find a similar breakout for AMD, although the lumped-together consumer-facing stuff does seem a bigger share than enterprise for them as far as I can tell; AMD's product mix is also a lot more complicated.



6 minutes ago, Quackers101 said:

Don't rule out future options: the more that choose to use Streamline, the more games with DLSS and the more marketing.

Streamline has nothing to do with portable languages and CUDA though; it's just an abstraction over the different super-resolution gimmicks and isn't really important for most developers (apart from game devs, of course).

 

6 minutes ago, porina said:

From a user's perspective, not a developer's, only CUDA and OpenCL matter at all. Let's see if anything else can take over longer term.

Even OpenCL is hated by most developers because it's a mess and not really ergonomic, something CUDA wins at by a landslide, but CUDA is then hated for being proprietary and locked in. I really don't see a new player managing to break into the market until they get to a level where some high-level frameworks (like PyTorch or TensorFlow, if we're talking about ML) adopt them.
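
To illustrate why framework adoption is the whole ballgame: from the end user's side, the backend mostly disappears once PyTorch supports it. A small sketch (assuming a working PyTorch install; on ROCm builds, as far as I know, AMD GPUs are exposed through the same torch.cuda API, so the code doesn't change):

```python
import torch

# Whatever accelerator this PyTorch build was compiled against shows up here;
# on ROCm builds of PyTorch, AMD GPUs answer to torch.cuda as well.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)
y = x @ x  # runs on whichever backend the framework adopted, CUDA or otherwise
print(device, y.shape)
```

That's why a new GPU language or runtime only matters to most people once it lands behind an interface like this.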

 

8 minutes ago, porina said:

Nvidia still gets a decent proportion of their revenue from gaming even if data centre has recently overtaken gaming for them.

It's not really recent; DC had overtaken gaming for quite a while. Only during the recent mining craze did their gaming sector surpass it again, only to fall flat on its face this year.

9 minutes ago, porina said:

Still seems unlikely to me for a market leader (by a BIG margin) to just walk away.

I don't think they'll just walk away, no, but they may just leave the x80/x90 models on sale without caring about smaller chips, since those dies can also be used in their workstation/Tesla lineup, and not-so-perfect dies can be sold to regular consumers as a form of entry-level CUDA tool to maintain their leadership among devs, and also to those enthusiast gamers who will spend more than the cost of a console on a single GPU.

 

11 minutes ago, porina said:

Haven't been able to find a similar breakout for AMD, although the lumped-together consumer-facing stuff does seem a bigger share than enterprise for them as far as I can tell; AMD's product mix is also a lot more complicated.

AMD lumps their GPU sales together with consoles, so it's hard to do a direct comparison indeed.


