
Toe to toe - 6800/6800XT reviews out

11 hours ago, AluminiumTech said:

I've used both Nvidia and AMD encoders in the past and both look really bad at anything below 20Mbps.

 

Both need ideally 50Mbps to look great.

Dude, what are you talking about? 

Nvidia's encoder is fantastic. It got a massive upgrade with the 20 series and often beats x264 and x265. 

 

https://unrealaussies.com/tech/nvenc-x264-quicksync-qsv-vp9-av1/

Here are the H.264 results. 

[Image: "Showdown" H.264 comparison chart from the linked article]

 

You have to run x264 at something slower than the slow preset to be competitive with Pascal's NVenc, and I doubt anyone is doing that for streaming. It requires way too much CPU power. 

21 hours ago, xAcid9 said:


 

In before power consumption doesn't matter. 
Because Nvidia lost in power consumption this time.

TechPowerUp's power draw numbers seem a bit strange to me. They claim the 6800 draws 165W on average and have the 6800XT at 210W. That's 80W less than what other reviewers measure for average gaming power draw. That's a huge discrepancy.

OK, so it turns out TechPowerUp uses Metro Last Light at 1080p for their power draw testing. A title from 2013... at 1080p.
They also use the 1080p power draw data together with the 4K average gaming performance data to calculate the 4K performance-per-watt figures. That seems very flawed to me.
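To see why mixing those two data sets skews the result, here is a minimal sketch of the performance-per-watt arithmetic. Only the 165W figure comes from TechPowerUp's own 1080p number; the framerate and the 4K wattage are made-up placeholders, so treat it purely as an illustration of the method, not as measured data:

```python
# Minimal sketch of the perf/W issue described above.
# Only the 165 W figure is TechPowerUp's published 1080p number for the 6800;
# the framerate and the 4K wattage are hypothetical placeholders.

def perf_per_watt(avg_fps: float, avg_power_w: float) -> float:
    """Performance per watt: average framerate divided by average board power."""
    return avg_fps / avg_power_w

fps_4k = 100.0       # hypothetical 4K average framerate
power_1080p = 165.0  # TPU's 1080p Metro Last Light power figure for the 6800
power_4k = 245.0     # hypothetical 4K gaming power (~80 W higher, per other reviews)

mixed = perf_per_watt(fps_4k, power_1080p)    # 4K fps paired with 1080p power
consistent = perf_per_watt(fps_4k, power_4k)  # 4K fps paired with 4K power

print(f"4K fps / 1080p power: {mixed:.3f} fps/W")
print(f"4K fps / 4K power:    {consistent:.3f} fps/W")
print(f"Efficiency overstated by about {(mixed / consistent - 1) * 100:.0f}%")
```

With those placeholder numbers, pairing the 4K framerate with the lighter 1080p power load inflates the efficiency figure by close to 50%, which is the kind of gap being discussed here.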

The 6000 series is still a very efficient GPU, don't get me wrong, but the 6800 is not 30%-40% more efficient on average. Only in CPU-bound scenarios, where RDNA2 seems to have pretty good power management.

 

TechPowerUp's own 1440p and 4K power draw data for the 6800XT, from additional tests after some questions on their forum:
https://www.techpowerup.com/forums/proxy.php?image=https%3A%2F%2Fimg.techpowerup.org%2F201118%2Fpdcvvx8dau.jpg&hash=57c297238fa23a32344e973696691d58



Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerates, as opposed to 4K averaging 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels for a single frame, but fewer pixels in total across all frames in the same timeframe.
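For a quick sanity check on the raw pixel throughput behind that claim, here is a back-of-the-envelope comparison (the framerates are just illustrative round numbers, not benchmarks):

```python
# Back-of-the-envelope pixel throughput (pixels shaded per second) for the
# "high fps at 1080p vs 60 fps at 4K" comparison above. Framerates are
# illustrative round numbers, not benchmarks.

resolutions = {"1080p": 1920 * 1080, "4K": 3840 * 2160}

scenarios = [
    ("1080p @ 144 fps", resolutions["1080p"], 144),
    ("1080p @ 240 fps", resolutions["1080p"], 240),
    ("4K    @  60 fps", resolutions["4K"], 60),
]

for label, pixels_per_frame, fps in scenarios:
    print(f"{label}: {pixels_per_frame * fps / 1e6:6.0f} Mpix/s")

# 1080p needs roughly 240 fps to match 4K @ 60 fps in raw pixels per second;
# beyond that, the 1080p run is pushing more total pixels through the GPU.
```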


35 minutes ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerates, as opposed to 4K averaging 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels for a single frame, but fewer pixels in total across all frames in the same timeframe.

Not in Metro Last Light. The 4K and 1440p consumption is considerably higher:
https://www.techpowerup.com/forums/proxy.php?image=https%3A%2F%2Fimg.techpowerup.org%2F201118%2Fpdcvvx8dau.jpg&hash=57c297238fa23a32344e973696691d58


15 hours ago, leadeater said:

You talking about CUDA and application support?

 

Well, they are both bad. I've only seen two games tested on the 6800XT for ray tracing (Hardware Unboxed); in one it was about 2080 Ti level, and in the other game it did much better. Either way, not a feature I'll be using for a while. As a feature it's just not impressive enough in games yet, so overall there's no point using it; maybe in 1-2 years from now.

Yeah specifically talking about CUDA app support.

 

Re: Raytracing - it's not like the AMD cards are bad, since RTX 2080S+ DXR performance is good; but you can get something better for around the same price. AMD's DLSS alternative should decrease the margin though.

 

I also stream, and NVIDIA's software suite and encoder are so much better. AMD apparently does very well with H.265, so if that actually gets streaming support then they have a real competitor.

13 hours ago, AluminiumTech said:

At 1440p it does at least lean strongly towards AMD.

Not even the Hardware Unboxed results you mentioned earlier showed that. It's a low single-digit % difference on average. It is not make or break, or likely to tip someone either way in itself.

 

13 hours ago, AluminiumTech said:

Okay but again a not well supported feature by game developers.

Features have to start somewhere. It is constantly being added to.

 

13 hours ago, AluminiumTech said:

There were some games where Nvidia has much better perf because they're Nvidia sponsored titles. Other titles tended to do better on AMD.

Both sides will provide some extra level of support to some developers. It's nothing new that, as a result, some games may work relatively better on one side than the other. That's in part why I'm more interested in the average as a more general representation of the gaming market, rather than focusing on a few titles that might do much better or worse.

 

13 hours ago, AluminiumTech said:

1080p is more up in the air because reviewers in quite a few cases were CPU bottlenecked.

And in that scenario, IMO it doesn't matter who is slightly ahead. You've dismissed 4K as a niche, and I'd put ultra-high fps in a similar niche. How many play with displays beyond 144 Hz, say? Probably not that many.

 

13 hours ago, AluminiumTech said:

AMD's been talking about low latency gaming for a while and has continually added features to improve this over time and has even improved it on existing hardware through new implementations of existing features.

I think at this point we need someone to do side-by-side testing of both Nvidia's and AMD's low-latency solutions and see what they do in the real world.

 

13 hours ago, AluminiumTech said:

Nvidia's compatibility with Freesync is hit or miss and it isn't the same.

That's just Freesync normally. The lack of quality standards in it resulted in many bad monitors, although I'm not aware of any significant problems in higher end models.

 

13 hours ago, AluminiumTech said:

it has 16GB VRAM, it has much better power consumption characteristics.

On the VRAM side, it still remains to be seen how many games make effective use of that going forwards. Devs will not neglect running decently on, say, 8GB-class GPUs, since those will still be the vast majority of the market. It is a question of how much difference it makes at the top end. It also remains to be seen whether devs implement the new I/O tech, which may help close the gap for lower-VRAM cards. That will be a requirement on the next-gen consoles, since their total shared RAM (system + VRAM) will be less than a high-end PC gaming system's.

 

13 hours ago, AluminiumTech said:

The only thing runaway about the 3080 is its power consumption which if left unrestricted would go well beyond 400W.

Compare like with like: the routine running power of Ampere is higher, no question there, but if you're going to unrestrict Ampere, what would Navi do under the same conditions? It'll likewise go up, even if not necessarily to the same level. For example, when looking at overclocking, which is when you'd exceed stock power limits, I'd like to compare under equivalent cooling to keep it fair.

 

13 hours ago, AluminiumTech said:

I see everything except ray tracing good enough or even great.

I agree with that, but I don't see the AMD win you seem to. 



 

2 hours ago, Medicate said:

TechPowerUp's power draw numbers seem a bit strange to me. They claim the 6800 draws 165W on average and have the 6800XT at 210W. That's 80W less than what other reviewers measure for average gaming power draw. That's a huge discrepancy.

Depends on what tools and scenario they used to measure power consumption.

 

2 hours ago, Medicate said:

OK, so it turns out TechPowerUp uses Metro Last Light at 1080p for their power draw testing. A title from 2013... at 1080p.

Yes, they've been doing this for years for consistency. You can check their Furmark result for maximum power consumption.

 

2 hours ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerates, as opposed to 4K averaging 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels for a single frame, but fewer pixels in total across all frames in the same timeframe.

Not really, you often hit a CPU/memory bottleneck in high-fps gaming.
CPU bottleneck = less work for the GPU


2 hours ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerates, as opposed to 4K averaging 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels for a single frame, but fewer pixels in total across all frames in the same timeframe.

Then use Radeon Chill and cap it at 60fps.

That's what I used to do with my 390.

 

Glad to see that it only sips power at 1080p; I'm finally going to be getting a silent setup (not that it was that bad with the 390, but if it can get quieter, I'll take it. I'll probably just have to replace some fans in my case afterwards).


6 hours ago, LAwLz said:

Snip

I'm talking about visual quality. Performance-wise it performs fine, but it looks like garbage to me.


3 hours ago, RejZoR said:

Actually, contrary to popular belief, 1080p in older games is often more taxing on power because of the stupidly high framerates, as opposed to 4K averaging 60fps... Quickly churning out 200+ frames per second demands more power from the GPU than 4K at 60fps, despite 4K requiring the GPU to compute more pixels for a single frame, but fewer pixels in total across all frames in the same timeframe.

More transistor work and fewer cache stalls. It depends a bit on the load and the architecture, though, but generally high FPS will put a heavier power draw on the card than higher resolution.

2 hours ago, ONOTech said:

Yeah specifically talking about CUDA app support.

 

Re: Raytracing - it's not like the AMD cards are bad, since RTX 2080S+ DXR performance is good; but you can get something better for around the same price. AMD's DLSS alternative should decrease the margin though.

 

I also stream, and NVIDIA's software suite and encoder are so much better. AMD apparently does very well with H.265, so if that actually gets streaming support then they have a real competitor.

Nvidia's one critical advantage over AMD in the GPU space is doing the front-facing, consumer stuff better. Shadowplay matters because it's good enough and you can just turn it on. ReLive has spent so much time being buggy that we won't hear if it's working well for probably a year after they get it fixed.

 

As for raytracing in general, the sudden return of the Microsoft-client Minecraft path tracing feels less like testing and far more like a long-running feud with AMD NA PR. Yes, we get it. They were run by jerks and pissed off everyone in the NA tech press for years. But pulling out something as obtuse as that for a test makes the reviewers look incompetent. Both NVIDIA & AMD clearly designed their RT approach this generation around baseline adoption rather than actual high performance. AMD's works fine and it'll get developed for due to the consoles. Nvidia only made path tracing better (being core-for-core unchanged otherwise), which will look better in the future as games head in that direction, even if it's never really capable of performing anything but tech demos well with it. It'll take a few weeks to see if AMD's approach avoids the pipeline stalls & stuttering that Nvidia's will tend to cause.

 

Raster performance, GPU driver stability and, these days, easy streaming are what matter to end users. Nvidia is in a bit of a panic mode because they messed up the first part when you look at the Pascal -> Turing -> Ampere progression. They're banking on 2 & 3 to keep them from losing sales for the next 4 months.

49 minutes ago, AluminiumTech said:

I'm talking about visual quality. Performance-wise it performs fine, but it looks like garbage to me.

What I posted was VMAF...

If you don't understand what that means, let me explain. The visual quality NVenc puts out is better than x264 on the slow preset when both are using the same bitrate.

 

If you encode a lossless video using Turing's H.264 encoder with an 8Mbps output, the perceived visual quality loss will be about 11%.

If you used x264 with the slow preset and output an 8Mbps file then the perceived visual quality loss will be about 13%.

Turing's encoder outperforms x264 unless you go into the really slow presets like veryslow or placebo.

Turing's encoder is fantastic. It is absolutely amazing. You basically have zero reason not to use it if you've got a 20 series GPU and want to encode video. For HEVC it matches x265 at the slow preset too.
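For anyone who wants to run this kind of comparison on their own clips, here is a rough sketch of the workflow. It assumes an ffmpeg build with h264_nvenc and libvmaf enabled; the file names and the 8 Mbps target are placeholders, and it is not the exact methodology of the linked article:

```python
# Rough sketch of an NVENC vs x264 quality comparison along these lines.
# Assumes ffmpeg is on PATH and was built with h264_nvenc and libvmaf;
# the file names and the 8 Mbps target are placeholders.
import subprocess

SOURCE = "source_lossless.mkv"  # placeholder reference clip
BITRATE = "8M"

encodes = {
    "nvenc_h264.mp4": ["-c:v", "h264_nvenc", "-preset", "slow", "-b:v", BITRATE],
    "x264_slow.mp4":  ["-c:v", "libx264", "-preset", "slow", "-b:v", BITRATE],
}

for out_file, codec_args in encodes.items():
    # Encode the same source at the same average bitrate with each encoder.
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codec_args, "-an", out_file],
                   check=True)

    # Score the encode against the original; libvmaf prints a line containing
    # "VMAF score" in ffmpeg's log output (stderr).
    result = subprocess.run(
        ["ffmpeg", "-i", out_file, "-i", SOURCE, "-lavfi", "libvmaf",
         "-f", "null", "-"],
        capture_output=True, text=True)
    score_lines = [l for l in result.stderr.splitlines() if "VMAF score" in l]
    print(out_file, score_lines[-1] if score_lines else "no VMAF score found")
```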

 

Or as the article puts it:

Quote

Essentially, Turing is in such a good place now with their RTX 2080TI, 2080, 2070 and 2060 cards. The NVENC encoders for both H.264 AVC and H.265 HEVC are simply excellent. It’s the best H.264 encoder around now. Basically because it’s the newest at the time of writing, and x264 has been optimised into the ground. x264 will not get much better as an encoder, and will only increase speed with CPU improvements. Turing NVENC is already beating it on Slow for this test video, there’s no reason not to use it.

 

My prediction is that Turing will only be toppled as the all-round king of encoders when more sites make the switch to VP9 and the race starts again. VP9 does have an interesting ability to enforce real-time encoding at a set CPU usage. This will greatly ease the transition for average streamers by removing the need to find their perfect preset. But that is a story for another day.

 

 

Edit: got Turing and Pascal mixed up in my writing. I previously said Pascal's encoder was fantastic but I meant Turing's encoder.


The encoder and G-Sync support for my monitor are the two reasons I'm heavily leaning towards Nvidia for a GPU upgrade rather than AMD. While I do still applaud AMD's progress!!

6 hours ago, ONOTech said:

AMD's DLSS alternative should decrease the margin though.

I'm inclined not to count on this; either it won't be visually as good or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet. AMD needs to try and win just the track and field sprint events rather than trying to win the pentathlon.

11 minutes ago, leadeater said:

I don't really think AMD has the software team to execute as well as Nvidia yet.

I wouldn't be surprised if Microsoft, Sony, and AMD worked on that together.

22 minutes ago, leadeater said:

I'm inclined not to count on this; either it won't be visually as good or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet. AMD needs to try and win just the track and field sprint events rather than trying to win the pentathlon.

I am impressed enough so far that they caught up as much as they did. Allow them some wiggle room; it's been a long time since they were even really competitive.

 

But I can't help thinking that we are being played. Lisa Su is apparently a (scratch that) to Jensen; it would not surprise me if they were in on something private together where AMD climbs and they collaborate to make each other look good whilst pretending they are separate, but that's just a conspiracy theory I made up in my head.

 

https://www.techtimes.com/articles/253736/20201030/fact-check-nvidia-ceo-uncle-amds-dr-lisa-su.htm#:~:text=Not technically "uncle" but still,they are very close relatives.


3 hours ago, leadeater said:

I'm inclined not to count on this; either it won't be visually as good or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet. AMD needs to try and win just the track and field sprint events rather than trying to win the pentathlon.

That's fair. I could be wrong here, but I'm assuming AMD will bank on Microsoft's software team to implement some sort of DX12 solution that AMD can use. Microsoft definitely has the resources to do so. I don't think it will be nearly as good as DLSS 2.0 though.

3 hours ago, leadeater said:

I'm inclined not to count on this; either it won't be visually as good or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet. AMD needs to try and win just the track and field sprint events rather than trying to win the pentathlon.

Yeah, Nvidia most definitely have more experience with regards to AI, not just in hardware terms but also in developing AI solutions themselves. Maybe if AMD brings Microsoft in they'll be able to compete there? But I wouldn't be surprised if that still wasn't enough.

 

On top of that, the AMD cards will have to run that AI on the general-purpose compute units, rather than on dedicated tensor cores like DLSS. While nowhere near as intensive as training, evaluating an AI model (i.e. making it do what it's meant to do) isn't computationally free - it will take up some GPU cycles that could otherwise be used for rendering. Even if you were to run the exact same DLSS 2.0 model on an AMD card, you wouldn't see the same performance improvements as you do on RTX cards because of this, so AMD's model will need to be better than Nvidia's just to equal its effectiveness in gaming.
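To put a rough number on that overhead, here is a toy frame-time budget calculation; the millisecond costs are invented placeholders, not measurements of DLSS or of whatever AMD ends up shipping:

```python
# Toy frame-time budget: how much an upscaling pass eats into the time
# available per frame. The millisecond figures are invented placeholders,
# not measurements of DLSS or of AMD's upcoming solution.

def effective_fps(render_ms: float, upscale_ms: float) -> float:
    """Frames per second once the upscaling pass is added to each frame."""
    return 1000.0 / (render_ms + upscale_ms)

render_ms = 6.0  # hypothetical time to render a frame at the lower resolution

for upscale_ms in (0.5, 1.0, 2.0):
    fps = effective_fps(render_ms, upscale_ms)
    print(f"upscale pass {upscale_ms:.1f} ms -> {fps:5.1f} fps "
          f"(vs {1000.0 / render_ms:.1f} fps if the upscaler were free)")

# Without separate matrix units, the upscale pass competes with rendering for
# the same shader resources, so its cost lands directly on the frame time as
# modelled here; dedicated units can hide part of that cost.
```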

 

I'm also worried that AMD's solution, because of this performance penalty, will end up being very much like DLSS 1.0. A common compromise in AI is between evaluation performance and model accuracy - better models are generally harder to evaluate - so I'm concerned that AMD will sacrifice too much in terms of accuracy (how good the frames look) in order to meet the fps offered by their competitor.

 

I'm genuinely interested in AMD's DLSS competitor - I think the technology is fascinating - but I am worried about how it will end up being implemented, and I'm especially worried regarding compatibility with DLSS 2.0 games. I really don't want there to be a situation where games are either accelerated by one model or the other, or where compatibility for each has to be implemented separately. That's not the kind of BS we need, and it could be damaging to the uptake of these algorithms in general. But getting them to agree on something will be a tough fight given Nvidia's solution has been out in the wild for so long.



Overall, pretty nice. Glad to see them returning with a flagship and eventually having a complete new series.



@tim0901

It doesn't matter. Rendering at 1080p and upscaling with sharpening to 4K is still less taxing than rendering at native 4K, resulting in a net gain in performance, even if it's still slower than plain 1080p. Not to mention, if the resizing and sharpening are done right, it could even negate the need for anti-aliasing, as the upscaling would sort of do that by itself.


6 hours ago, tim0901 said:

Yeah, Nvidia most definitely have more experience with regards to AI, not just in hardware terms but also in developing AI solutions themselves. Maybe if AMD brings Microsoft in they'll be able to compete there? But I wouldn't be surprised if that still wasn't enough.

 

On top of that, the AMD cards will have to run that AI on the general-purpose compute units, rather than on dedicated tensor cores like DLSS. While nowhere near as intensive as training, evaluating an AI model (i.e. making it do what it's meant to do) isn't computationally free - it will take up some GPU cycles that could otherwise be used for rendering. Even if you were to run the exact same DLSS 2.0 model on an AMD card, you wouldn't see the same performance improvements as you do on RTX cards because of this, so AMD's model will need to be better than Nvidia's just to equal its effectiveness in gaming.

 

I'm also worried that AMD's solution, because of this performance penalty, will end up being very much like DLSS 1.0. A common compromise in AI is between evaluation performance and model accuracy - better models are generally harder to evaluate - so I'm concerned that AMD will sacrifice too much in terms of accuracy (how good the frames look) in order to meet the fps offered by their competitor.

 

I'm genuinely interested in AMD's DLSS competitor - I think the technology is fascinating - but I am worried about how it will end up being implemented, and I'm especially worried regarding compatibility with DLSS 2.0 games. I really don't want there to be a situation where games are either accelerated by one model or the other, or where compatibility for each has to be implemented separately. That's not the kind of BS we need, and it could be damaging to the uptake of these algorithms in general. But getting them to agree on something will be a tough fight given Nvidia's solution has been out in the wild for so long.

Basically nothing runs on the actual Tensor Cores. They're for server rendering. The ray traversal units are packaged with the Tensor Cores, but they don't actually use the Tensor Cores. DLSS is a customized upscaler routine. It doesn't need any part of the expanded units. It's a lot like the "RTX Voice" that doesn't need any part of RTX to run. It's just marketing.

 

Also, even DLSS 2.0 can be mostly mimicked by running at about a 70% render resolution and a sharpening filter. Nvidia had to badly over-engineer a solution to a problem they created for themselves.
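For reference, the pixel arithmetic behind a ~70% render scale looks like this (it says nothing about how the reconstructed image actually compares to DLSS, only how much rendering work is saved):

```python
# Pixel counts at a 70% render scale versus native, to show why rendering at
# ~70% resolution and upscaling is so much cheaper than native. Arithmetic
# only; it says nothing about how the upscaled image actually looks.

def scaled_pixels(width: int, height: int, scale: float) -> int:
    return int(width * scale) * int(height * scale)

targets = {"1440p": (2560, 1440), "4K": (3840, 2160)}
SCALE = 0.70

for name, (w, h) in targets.items():
    native = w * h
    scaled = scaled_pixels(w, h, SCALE)
    print(f"{name}: {scaled / native:.0%} of native pixels "
          f"({scaled:,} vs {native:,})")

# A 70% render scale shades roughly half the pixels of native resolution,
# which is where most of the performance gain comes from regardless of how
# the image is reconstructed afterwards.
```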

14 minutes ago, Taf the Ghost said:

Also, even DLSS 2.0 can be mostly mimicked by running at about a 70% render resolution and a sharpening filter. Nvidia had to badly over-engineer a solution to a problem they created for themselves.

Mmm these sour grapes sure are tasty. 

12 hours ago, leadeater said:

I'm inclined not to count on this; either it won't be visually as good or the performance gain won't be as good. I don't really think AMD has the software team to execute as well as Nvidia yet. AMD needs to try and win just the track and field sprint events rather than trying to win the pentathlon.

Idk, look what they've done for the consoles. I assume not all of it was them, but they should have most of that work.


2 hours ago, GDRRiley said:

Idk, look what they've done for the consoles. I assume not all of it was them, but they should have most of that work.

Not really, that's all pretty much Sony and Microsoft doing the software legwork. AMD already has a hard enough time getting their software features adopted in games and overall has worse partnerships than Nvidia. For something so new, and not done by them before, AMD just doesn't have the execution track record to believe it will be anything more than AMD releasing it and it then lacking the required polish for a very long time. It'll work, but that can be said about all of the features and software they release; working and being something you actually want to use are often too far apart.

 

Outside of graphics effects features there is zero AMD software etc. that I use. I don't really want any more anyway; my AMD GPUs are used strictly for their basic functionality.

