
AMD's new Radeon RX 3080 XT: RTX 2070 performance for $330?

Message added by WkdPaul

It's completely fine to disagree and have a different point of view.

 

But please construct your arguments thoughtfully and without ad hominem, antagonizing, or passive-aggressive comments.

2 minutes ago, Trixanity said:

I was under the impression that the argument was that absolute performance sells cards, not price, packaging, power consumption, or what have you. By that logic, a blower cooler shouldn't matter for sales as long as the card is the top dog in performance.

There's no one single thing; having the best certainly does help, but it's still entirely possible to tie your shoes together and try to sprint, i.e. loud blowers. Seriously, my 290X's are reference cards with full-cover blocks, but I ran them with the blowers to see, and they are THAT bad. Timing is a rather big factor that needs to accompany having the top-performance product; X800 vs X1000 is a decent example of that. Likewise GeForce 8: an uncontested market followed by a two-quarter-late AMD product failure.

 

Have to also consider that before the GeForce 900 series it was mostly a 60/40 split, which is not exactly a wide difference, not compared to now. There are various reasons why it stayed at that ratio, in part covered by my previous post.

 

The AMD R200 series was also a full year too late in terms of product cycles; had it been even one quarter earlier, that graph would probably look a decent bit different at that point. After R200 it's only been refreshes or low-stock, high-cost HBM products unable to beat or match already-existing Nvidia products. Up to and including R200, AMD held 60/40; post-R200, market share dives, and the common trend for this period is no top-class competitive products.

 

AMD may not be able to fully capitalize on top-performance-class products when they have them, but when they don't have anything close to that, it's pretty clear what the absence entails looking at the market data. Without such a product, market share is terrible; when they do have something close enough, the market data shows a much closer share, so to me that shows this is a significant factor among others. Mid-range cards with good coolers from day one do not have this effect.

 

I will caveat my above assessment with this: if it were not for mining, I don't believe the market share would have dropped down to 20% like it did.


9 minutes ago, leadeater said:

There's no one single thing; having the best certainly does help, but it's still entirely possible to tie your shoes together and try to sprint, i.e. loud blowers. Seriously, my 290X's are reference cards with full-cover blocks, but I ran them with the blowers to see, and they are THAT bad. Timing is a rather big factor that needs to accompany having the top-performance product; X800 vs X1000 is a decent example of that. Likewise GeForce 8: an uncontested market followed by a two-quarter-late AMD product failure.

 

Have to also consider that before the GeForce 900 series it was mostly a 60/40 split, which is not exactly a wide difference, not compared to now. There are various reasons why it stayed at that ratio, in part covered by my previous post.

 

The AMD R200 series was also a full year too late in terms of product cycles; had it been even one quarter earlier, that graph would probably look a decent bit different at that point. After R200 it's only been refreshes or low-stock, high-cost HBM products unable to beat or match already-existing Nvidia products. Up to and including R200, AMD held 60/40; post-R200, market share dives, and the common trend for this period is no top-class competitive products.

 

AMD may not be able to fully capitalize on top-performance-class products when they have them, but when they don't have anything close to that, it's pretty clear what the absence entails looking at the market data. Without such a product, market share is terrible; when they do have something close enough, the market data shows a much closer share, so to me that shows this is a significant factor among others. Mid-range cards with good coolers from day one do not have this effect.

 

I will caveat my above assessment with this: if it were not for mining, I don't believe the market share would have dropped down to 20% like it did.

I think what hurts AMD the most is the lack of consistency in execution. It kills all confidence in the product stack and kills any narrative they try to establish.

 

If AMD could consistently shake up the mid tier every 12-18 months, I think their volume would actually be pretty good. By that I mean a yearly story of AMD beating Nvidia's 50, 60, and 70 series in performance, price, and miscellaneous metrics (packaging, software platform, power efficiency, etc.) would absolutely do wonders for AMD. Not to the point of toppling Nvidia, but it would change the landscape for the better. I'd wager things would look a lot different despite the lack of a halo chip. It would still obviously be better with a behemoth in the stack, but let's entertain the argument for a moment.

 

Instead we get staggered releases across months or even years, with half-bad products. It will have taken them more than three years to replace Polaris. It took them two years to go from Fury X to Vega 64, and both kinda sucked. That'll kill all your momentum, if you had any. GCN was only truly competitive in its first iteration or two, and it launched in 2011. It's been absolutely dead in the water since Maxwell v2 arrived in 2014, and we're still not done with GCN. If GCN gets replaced in 2020, that will have been nine years on the same architecture, and we're likely to get ten. It would be a horror story for any company, and doubly so for a company that has struggled on both fronts at the same time.


22 hours ago, Master Disaster said:

Oh boy, so many young whippersnappers here.

 

The XT is a throwback to a legendary card from a long time ago...

 

https://www.techpowerup.com/gpu-specs/sapphire-9800-xt.b243

 

At a time when Nvidia was dominating the GPU market, AMD dropped this thing; it performed similarly but was way cheaper, and to top it off, they even got a Valve Half-Life 2 partnership, with 9800 Pro and XT models saying "Perfect for Half Life 2" right on the boxes. Some boxes even had Gordon and Alyx on them.

No, nVidia didn't really dominate, quite the opposite. That was when ATi had the superior architecture: it came with a 256-bit wide interface, Hierarchical Z, and 8 pipelines while nVidia only had 4 (though with 2 TMUs each, whereas ATi ran an 8x1 configuration; the funny thing with the Radeons is that they started at 2x3, went to 4x2 with R200, and ended at 8x1).

 

That was also the time when nVidia was credibly accused of cheating -> https://www.geek.com/games/futuremark-confirms-nvidia-is-cheating-in-benchmark-553361/

Because their architecture just sucked, especially when shaders were used. At the time, they rewrote shaders and forced them to be executed in 16-bit instead of 32-bit...

The DX9 spec demanded at least 24-bit precision, which nVidia didn't have; they only had 16-bit and 32-bit (and 16-bit was faster than 32-bit for whatever reason).

 

 

The reason for that was that the Radeon R3x0 was simply wider than the CineFX architecture, and its shaders were also independent of the texturing pipeline.

 

Back in the day, people were also claiming to buy nVidia because of "better AF", or the classic "better drivers".
I remember that Gamestar, in one of their print issues, did a massive test of older titles and, you guessed it, ATi won. Sadly I don't remember the issue and didn't save that magazine (although I should have!)...

 

PS: developers at the time claimed that PS1.4 was pretty close to 2.0 and should have been called 1.5 or something closer to 2.0...

"Hell is full of good meanings, but Heaven is full of good works"

Link to comment
Share on other sites

Link to post
Share on other sites

57 minutes ago, Trixanity said:

I think what hurts AMD the most is the lack of consistency in execution. It kills all confidence in the product stack and kills any narrative they try to establish.

No, it's the media, which bashes AMD for no reason but is very soft on nVidia. We saw something like that with Steve Burke and his teardown of the 2060 (IIRC), where he was softballing nVidia over the bullshit they did with the fans: you have to destroy the cooler to get the fans off the heatsink for "cleaning"...

 

But we all know that if they are "mean" to nVidia and criticize them for this stuff, they get blacklisted and don't get samples anymore.

AMD is not in a position to do that.

 

Also the "Mindshare" is a Problem as well...

"Hell is full of good meanings, but Heaven is full of good works"

Link to comment
Share on other sites

Link to post
Share on other sites

*Thread cleaned and unlocked*

If you need help with your forum account, please use the Forum Support form!


9 hours ago, cj09beira said:

AMD really needs AIBs from day 1; maybe they should set the GPU height and the screw holes much sooner, so that at least we get custom coolers at launch. Really, that's all AMD cards need; their PCBs are great.

If I ever saw Lisa, that would be the one thing I would roast her for, along with focusing on yields when choosing stock voltages.

If they don't have AIB models at launch, then the stock card had better be something really well built and elegant like the Radeon VII, with three fans, heatpipes, a vapour chamber, a backplate, etc.

 

If they give us only a blower cooler at launch, people will be pissed, and it will show that AMD has learned nothing.


2 hours ago, Humbug said:

If they don't have AIB models at launch, then the stock card had better be something really well built and elegant like the Radeon VII, with three fans, heatpipes, a vapour chamber, a backplate, etc.

 

If they give us only a blower cooler at launch, people will be pissed, and it will show that AMD has learned nothing.

Wonder if they'll keep the Radeon VII cooler for all models; that actually looks really nice.


Yea... I'm gonna be skeptical until I see it. Sure, it'd be wonderful if AMD offered some competition, but it's also rather unlikely, and they wouldn't try to undercut Nvidia by that much. If they've got similar performance, I'd expect the card to be priced pretty similarly to what Nvidia's currently offering, maybe 20-50 bucks less if they want to regain market share.


How does this thread already have a warning on it? I haven't even posted yet! Y'all need to calm down.

I basically don't care about Navi until I see a product. Unlike Jim Keller and Papermaster, I don't trust Raja and RTG. Wang couldn't help Navi very much, but hopefully the PlayStation 5 hype is a good sign.


1 hour ago, Beskamir said:

Yea... I'm gonna be skeptical until I see it. Sure, it'd be wonderful if AMD offered some competition, but it's also rather unlikely, and they wouldn't try to undercut Nvidia by that much. If they've got similar performance, I'd expect the card to be priced pretty similarly to what Nvidia's currently offering, maybe 20-50 bucks less if they want to regain market share.

AMD providing competition in the mid-range is not unlikely; it is very likely. They generally provide good price/perf in this range each time they launch a new product. This is a new microarchitecture combined with a new process node giving higher clocks. It's not like Nvidia is at some untouchable performance level in the mid and lower range.

 

Like you said, AMD may decide not to undercut Nvidia's pricing by large amounts, but they also will not release slow trash products like what Nvidia did with the GTX 1650.


6 minutes ago, Humbug said:

Like you said, AMD may decide not to undercut Nvidia's pricing by large amounts, but they also will not release slow trash products like what Nvidia did with the GTX 1650.

At least not at the price Nvidia does. 


34 minutes ago, Humbug said:

AMD providing competition in the mid-range is not unlikely; it is very likely. They generally provide good price/perf in this range each time they launch a new product. This is a new microarchitecture combined with a new process node giving higher clocks. It's not like Nvidia is at some untouchable performance level in the mid and lower range.

 

Like you said, AMD may decide not to undercut Nvidia's pricing by large amounts, but they also will not release slow trash products like what Nvidia did with the GTX 1650.

Except a 2070 isn't exactly mid-range... especially not if you want hardware-accelerated tensor computations or ray-triangle intersections. Should AMD somehow try to outperform Nvidia's hardware-accelerated cards in the tasks where Nvidia's cards are superior, but without using custom hardware like Nvidia did, then their cards would be overall much more powerful than what Nvidia has to offer and would be priced as such. Not to mention a feat like that would basically require AMD to have some kind of magic tech that Nvidia doesn't have yet.

 

The only reasonable explanation for the significantly lower price is that AMD will lack any novel hardware-accelerated computation and will just try to match the 2070 in rasterization and normal compute shaders, in which case it's basically competing with a 1660 instead of a 2070.
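For anyone wondering what "hardware-accelerated ray-triangle intersection" actually refers to, below is a minimal CUDA sketch of the Möller-Trumbore ray-triangle test: the per-ray, per-triangle work that an RT core performs in fixed-function hardware, and that a card without RT cores has to run on its general shader cores instead. This is purely illustrative, not Nvidia's or AMD's actual implementation.

```
#include <cstdio>
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

__host__ __device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__host__ __device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
__host__ __device__ Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Moller-Trumbore ray/triangle intersection: the per-ray, per-triangle test
// that RT cores run in fixed function and a GTX card runs on shader cores.
__device__ bool intersect(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2, float* t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return false;        // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(o, v0);
    float u = dot(s, p) * inv;                 // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;                 // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot(e2, q) * inv;                     // hit distance along the ray
    return *t > EPS;
}

__global__ void trace(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2, int* hit)
{
    float t;
    *hit = intersect(o, d, v0, v1, v2, &t);
}

int main()
{
    int* hit;
    cudaMallocManaged(&hit, sizeof(int));
    Vec3 o = {0, 0, -1}, d = {0, 0, 1};                       // ray along +Z
    Vec3 v0 = {-1, -1, 0}, v1 = {1, -1, 0}, v2 = {0, 1, 0};   // triangle at Z=0
    trace<<<1, 1>>>(o, d, v0, v1, v2, hit);
    cudaDeviceSynchronize();
    printf("hit = %d\n", *hit);  // expect 1: the ray passes through the triangle
    cudaFree(hit);
    return 0;
}
```

An RTX card does this test (plus BVH traversal) in dedicated units; running millions of these per frame on shader cores is exactly the cost a non-RT card would pay.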


39 minutes ago, Beskamir said:

Except a 2070 isn't exactly mid-range... especially not if you want hardware-accelerated tensor computations or ray-triangle intersections

It's not exactly high end either. From a gamer's perspective, the x50/x50 Ti cards were low end, x60 and x70 mid range, and x80/x80 Ti high end. For the x80 Ti, a lot of the time the performance uplift was enough to put it in its own performance class, but that's the general shakedown of things from a gaming perspective. Cards below x50 just can't really game, though the GeForce 10 series vastly improved those ultra-low-end cards, and the GeForce 16 & 20 series appear to be continuing that trend.

 

For the low-end cards it makes no difference that the Tensor cores and RT cores are not present; a card of that scale factor would not be able to handle those tasks anyway. They are still the Turing CUDA architecture. I don't see many people wanting RTX at 480p.

 

What's happened recently is that prices have increased to a point where people are starting to question the traditional product segmentation and are now trying to categorize products based on price; I neither agree nor disagree with this, it's just what's happening. I get the feeling Nvidia has started to notice this, along with the realization that fewer people than they expect or wish can afford these now-higher prices, which is why I think the 16 series exists. This throws all the traditional product performance categorization into a bit of chaos, because we now have two current-generation x60 products. I don't even think Nvidia really wanted the 16 series to exist, not based on their technology strategy anyway.


4 minutes ago, leadeater said:

It's not exactly high end either. From a gamer's perspective, the x50/x50 Ti cards were low end, x60 and x70 mid range, and x80/x80 Ti high end. For the x80 Ti, a lot of the time the performance uplift was enough to put it in its own performance class, but that's the general shakedown of things from a gaming perspective. Cards below x50 just can't really game, though the GeForce 10 series vastly improved those ultra-low-end cards, and the GeForce 16 & 20 series appear to be continuing that trend.

 

For the low-end cards it makes no difference that the Tensor cores and RT cores are not present; a card of that scale factor would not be able to handle those tasks anyway. They are still the Turing CUDA architecture. I don't see many people wanting RTX at 480p.

 

What's happened recently is that prices have increased to a point where people are starting to question the traditional product segmentation and are now trying to categorize products based on price; I neither agree nor disagree with this, it's just what's happening. I get the feeling Nvidia has started to notice this, along with the realization that fewer people than they expect or wish can afford these now-higher prices, which is why I think the 16 series exists. This throws all the traditional product performance categorization into a bit of chaos, because we now have two current-generation x60 products. I don't even think Nvidia really wanted the 16 series to exist, not based on their technology strategy anyway.

The 16 series looks like an afterthought to me, like they had a bunch of GPUs they didn't know what to do with.


1 minute ago, mr moose said:

The 16 series looks like an afterthought to me, like they had a bunch of GPUs they didn't know what to do with.

It pretty much can't be, because they are dedicated dies using an actually newer architecture.


1 minute ago, leadeater said:

It pretty much can't be, because they are dedicated dies using an actually newer architecture.

Are we certain Nvidia can't change the die name after manufacture?


1 minute ago, mr moose said:

Are we certain Nvidia can't change the die name after manufacture?

Yes, because the dies physically don't have Tensor or RT cores in them. It's not a case of deactivation.


18 minutes ago, leadeater said:

It's not exactly high end either. From a gamer's perspective, the x50/x50 Ti cards were low end, x60 and x70 mid range, and x80/x80 Ti high end. For the x80 Ti, a lot of the time the performance uplift was enough to put it in its own performance class, but that's the general shakedown of things from a gaming perspective. Cards below x50 just can't really game, though the GeForce 10 series vastly improved those ultra-low-end cards, and the GeForce 16 & 20 series appear to be continuing that trend.

 

For the low-end cards it makes no difference that the Tensor cores and RT cores are not present; a card of that scale factor would not be able to handle those tasks anyway. They are still the Turing CUDA architecture. I don't see many people wanting RTX at 480p.

 

What's happened recently is that prices have increased to a point where people are starting to question the traditional product segmentation and are now trying to categorize products based on price; I neither agree nor disagree with this, it's just what's happening. I get the feeling Nvidia has started to notice this, along with the realization that fewer people than they expect or wish can afford these now-higher prices, which is why I think the 16 series exists. This throws all the traditional product performance categorization into a bit of chaos, because we now have two current-generation x60 products. I don't even think Nvidia really wanted the 16 series to exist, not based on their technology strategy anyway.

Yeah, unfortunately the hardware acceleration found in Nvidia's 20 series doesn't really matter that much for the average consumer, especially when it adds so much to the cost of those cards. I worry that if AMD completely ignores hardware acceleration, we may be facing another set of Nvidia-only features (even though this time Nvidia seems okay with making RTX manufacturer-agnostic), and thus it will likely never be used to its fullest potential.


3 hours ago, leadeater said:

Yes, because the dies physically don't have Tensor or RT cores in them. It's not a case of deactivation.

It was more a case of them making a new core based on Turing, cutting out basically everything that makes Turing a Turing, and then having no clue what to actually do with it. Or how to name it. Fucking GTX 1600. WTF NVIDIA, were you drunk naming these? Like GTX vs RTX wasn't an obvious enough difference, just throw in a random series number, coz why not. LOL


6 minutes ago, RejZoR said:

It was more a case of them making a new core based on Turing, cutting out basically everything that makes Turing a Turing, and then having no clue what to actually do with it. Or how to name it. Fucking GTX 1600. WTF NVIDIA, were you drunk naming these? Like GTX vs RTX wasn't an obvious enough difference, just throw in a random series number, coz why not. LOL

Turing still has a ton of improvements on the CUDA side of things, much-needed ones for DX12 and Vulkan.

 

The naming issue was a bit of a hole they dug for themselves. If there is an RTX 2060 already, then even if you keep the 20-series naming, that's three products below it; are you and most other gamers going to rush out to buy an RTX 2010? Anything below x50 has a pretty big stigma of being low-end garbage that you don't want.


Just now, leadeater said:

Turing still has a ton of improvements on the CUDA side of things, much-needed ones for DX12 and Vulkan.

 

The naming issue was a bit of a hole they dug for themselves. If there is an RTX 2060 already, then even if you keep the 20-series naming, that's three products below it; are you and most other gamers going to rush out to buy an RTX 2010? Anything below x50 has a pretty big stigma of being low-end garbage that you don't want.

Minor architectural changes that hardly make much of a difference. It's newer and better, but still, it hardly makes a real difference.

 

As for naming, I more had in mind a GTX 2060 and an RTX 2060: same core Turing architecture, but one has no RT cores or Tensor cores and the other does, while otherwise they would generally be the same in rasterized performance. That's basically how it is between the GTX 1660 and RTX 2060, but the names make no sense. RTX clearly separates the ray-tracing cards from the GTX ones that don't have it, and that has been established with the entire RTX lineup. So I really don't see where the need was to make up a whole nonsensical 1600-generation naming scheme...


Just now, RejZoR said:

Minor architectural changes that hardly make much of a difference. It's newer and better, but still, it hardly makes a real difference.

It makes a big difference for anything that utilizes async compute and other GPU compute tasks within a game. Pascal couldn't do this properly; you can see the effects of the hardware improvements in all the reviews where Turing closes the gap with Vega greatly. It's certainly not nothing; it just doesn't do anything for most existing games.

 

Quote

New Streaming Multiprocessor (SM)
Turing introduces a new processor architecture, the Turing SM, that delivers a dramatic boost in shading efficiency, achieving 50% improvement in delivered performance per CUDA Core compared to the Pascal generation. These improvements are enabled by two key architectural changes. First, the Turing SM adds a new independent integer datapath that can execute instructions concurrently with the floating-point math datapath. In previous generations, executing these instructions would have blocked floating-point instructions from issuing. Second, the SM memory path has been redesigned to unify shared memory, texture caching, and memory load caching into one unit. This translates to 2x more bandwidth and more than 2x more capacity available for L1 cache for common workloads.

 

Mesh Shading
Mesh shading advances NVIDIA’s geometry processing architecture by offering a new shader model for the vertex, tessellation, and geometry shading stages of the graphics pipeline, supporting more flexible and efficient approaches for computation of geometry. This more flexible model makes it possible, for example, to support an order of magnitude more objects per scene, by moving the key performance bottleneck of object list processing off of the CPU and into highly parallel GPU mesh shading programs. Mesh shading also enables new algorithms for advanced geometric synthesis and object LOD management.

 

Variable Rate Shading (VRS)
VRS allows developers to control shading rate dynamically, shading as little as once per sixteen pixels or as often as eight times per pixel. The application specifies shading rate using a combination of a shading-rate surface and a per-primitive (triangle) value. VRS is a very powerful tool that allows developers to shade more efficiently, reducing work in regions of the screen where full resolution shading would not give any visible image quality benefit, and therefore improving frame rate. Several classes of VRS-based algorithms have already been identified, which can vary shading work based on content level of detail (Content Adaptive Shading), rate of content motion (Motion Adaptive Shading), and for VR applications, lens resolution and eye position (Foveated Rendering).

 

Texture-Space Shading
With texture-space shading, objects are shaded in a private coordinate space (a texture space) that is saved to memory, and pixel shaders sample from that space rather than evaluating results directly. With the ability to cache shading results in memory and reuse/resample them, developers can eliminate duplicate shading work or use different sampling approaches that improve quality.

 

Multi-View Rendering (MVR)
MVR powerfully extends Pascal's Single Pass Stereo (SPS). While SPS allowed rendering of two views that were common except for an X offset, MVR allows rendering of multiple views in a single pass even if the views are based on totally different origin positions or view directions. Access is via a simple programming model in which the compiler automatically factors out view-independent code, while identifying view-dependent attributes for optimal execution.

https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf

 

You get all these improvements with Turing that have nothing to do with the Tensor cores or RT cores.
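As a concrete illustration of the first point (the independent INT32 datapath), here is a minimal, hypothetical CUDA kernel, not taken from the whitepaper, that interleaves integer index arithmetic with floating-point FMA work. On Pascal, both instruction types compete for the same issue path; Turing's separate integer datapath lets the address math issue concurrently with the FP math.

```
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel (illustration only): the index math below is INT32
// work, the multiply-add is FP32 work. On Pascal these compete for the same
// issue slots; Turing's independent integer datapath lets the address
// arithmetic issue alongside the floating-point math.
__global__ void fma_with_int_addressing(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // INT32: thread index
    if (i >= n) return;
    int idx = (i * 31 + 17) % n;                    // INT32: gather address
    out[i] = in[idx] * 1.5f + 0.25f;                // FP32: fused multiply-add
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = float(i);

    fma_with_int_addressing<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("out[0] = %f\n", out[0]);  // in[17] * 1.5 + 0.25 = 25.75

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Real shaders are full of exactly this mix of address/index arithmetic and FP math, which is why Nvidia quotes the per-CUDA-core throughput gain even with RT and Tensor cores idle.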


Want to see the big Navi core though. 


On 5/9/2019 at 11:10 PM, mr moose said:

 The advantage Ryzen had (the reason it is a halo product) is clearly observable.  

I think you're conflating halo product with what is just a good product. 

 

A halo product is something that is absolutely top tier in its segment, but is generally unobtainable to the vast majority of the market due to its price. Even though very few people buy it, the fact that it is the best means that a lot of people assume that the lower end, more affordable products in the same segment will be better simply due to being made by the same company. 

 

Ryzen is almost the opposite, where they provided near performance parity in most workloads (and superior performance in a few) at a more affordable price. Even Threadripper is absurdly affordable for what you get, at least in comparison to Intel's offerings. 

 

The 1950X could have been argued to be a halo product when it launched, just due to it leapfrogging most of Intel's offerings, and the 2990WX probably still is, but I don't see any way that mainstream Ryzen is in any sort of raw performance metric. A fantastic product and a smart buy, but there's nothing in that line that is so insanely powerful that it pulls the entire product stack upwards just by existing.


2 hours ago, Waffles13 said:

I think you're conflating halo product with what is just a good product. 

 

A halo product is something that is absolutely top tier in its segment, but is generally unobtainable to the vast majority of the market due to its price. Even though very few people buy it, the fact that it is the best means that a lot of people assume that the lower end, more affordable products in the same segment will be better simply due to being made by the same company. 

 

Ryzen is almost the opposite, where they provided near performance parity in most workloads (and superior performance in a few) at a more affordable price. Even Threadripper is absurdly affordable for what you get, at least in comparison to Intel's offerings. 

 

The 1950X could have been argued to be a halo product when it launched, just due to it leapfrogging most of Intel's offerings, and the 2990WX probably still is, but I don't see any way that mainstream Ryzen is in any sort of raw performance metric. A fantastic product and a smart buy, but there's nothing in that line that is so insanely powerful that it pulls the entire product stack upwards just by existing.

 

TR is Ryzen and is a halo product. Just because people want to only think about the 3, 5, and 7 doesn't mean TR wasn't Ryzen. It certainly was a halo product and is certainly still part of the Ryzen lineup.

 

 

