
David Wang From AMD Confirms That There Will Eventually Be an Answer to DirectX Raytracing

4 hours ago, mr moose said:

So basically it will become mainstream; they just don't have anything to offer.

 

 

Well, actually they do. They have hybrid rendering support for pro cards, and that's what they'll probably focus on anyway.

Besides, I wouldn't be surprised if no one has actually used an RTX card with ray tracing yet, since there are still no games out with it via normal release or patches (if I remember right).

It's not really mature enough / requires a huge overhaul of designs. What Nvidia has done is okay, but it's not enough, since performance is still supposedly murdered by ray tracing.

Plus, I have to admit, I won't be impressed until someone actually implements an efficient path-guiding algorithm in hardware. You could add an AI denoiser on top of that to sweeten the deal, but still, without path guiding I don't see path tracing as anything but a gimmick at 15 fps.


2 hours ago, laminutederire said:

Well, actually they do. They have hybrid rendering support for pro cards, and that's what they'll probably focus on anyway.

Besides, I wouldn't be surprised if no one has actually used an RTX card with ray tracing yet, since there are still no games out with it via normal release or patches (if I remember right).

It's not really mature enough / requires a huge overhaul of designs. What Nvidia has done is okay, but it's not enough, since performance is still supposedly murdered by ray tracing.

Plus, I have to admit, I won't be impressed until someone actually implements an efficient path-guiding algorithm in hardware. You could add an AI denoiser on top of that to sweeten the deal, but still, without path guiding I don't see path tracing as anything but a gimmick at 15 fps.

It is already in use in RenderMan, and there are about 10 or 11 upcoming titles with RT support. I don't think that is quite the mountain people keep saying it is.

 

https://www.digitaltrends.com/computing/games-support-nvidia-ray-tracing/

 

With all these titles slated for this year or next, it doesn't look like there is any delay in its uptake.



Maybe the next-generation graphics cores will be fast enough that they can ray trace without needing dedicated ray-tracing cores, much like FreeSync.



3 hours ago, laminutederire said:

Besides, I wouldn't be surprised if no one has actually used an RTX card with ray tracing yet, since there are still no games out with it via normal release or patches (if I remember right).

ArsTechnica did, albeit with UE's tech demo: https://arstechnica.com/gadgets/2018/09/nvidia-rtx-2080-and-2080-ti-review-a-tale-of-two-very-expensive-graphics-cards/

 

Also, older DirectX versions didn't have games that really used them for months after launch.

 

3 hours ago, laminutederire said:

It's not really mature enough / requires a huge overhaul of designs. What Nvidia has done is okay, but it's not enough, since performance is still supposedly murdered by ray tracing.

I would argue neither were a lot of other new GPU technologies when they first launched.

 

3 hours ago, laminutederire said:

Plus, I have to admit, I won't be impressed until someone actually implements an efficient path-guiding algorithm in hardware. You could add an AI denoiser on top of that to sweeten the deal, but still, without path guiding I don't see path tracing as anything but a gimmick at 15 fps.

What is this efficient path guiding algorithm?

 

1 hour ago, williamcll said:

Maybe the next-generation graphics cores will be fast enough that they can ray trace without needing dedicated ray-tracing cores, much like FreeSync.

They won't be. Given the performance gap between Pascal and Turing, it would need at least three generations of improvements if GPUs continue to progress the way they have been.
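Back-of-the-envelope, the reasoning looks something like the sketch below (C++). The ~4x gap and the per-generation speedups are illustrative assumptions on my part, not measured figures; swap in whatever numbers you believe and the generation count falls out of the same formula.

```cpp
// Back-of-the-envelope: how many GPU generations does it take to close a given
// raw-performance gap, assuming a fixed speedup per generation?
// Both the gap and the speedups below are illustrative assumptions, not data.
#include <cmath>
#include <cstdio>

int generationsNeeded(double perfGap, double perGenSpeedup) {
    // Smallest n such that perGenSpeedup^n >= perfGap.
    return static_cast<int>(std::ceil(std::log(perfGap) / std::log(perGenSpeedup)));
}

int main() {
    const double assumedGap = 4.0;  // hypothetical: shader-only RT needs ~4x more throughput
    for (double speedup : {1.25, 1.35, 1.50}) {
        std::printf("%.0f%% per generation -> %d generations to cover a %.1fx gap\n",
                    (speedup - 1.0) * 100.0, generationsNeeded(assumedGap, speedup), assumedGap);
    }
    return 0;
}
```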


In other news: Intel has confirmed that it will indeed be making a 10th-gen CPU line of products.



Well, of course they will. They'll have to, as ray tracing is a natural progression of graphical improvements. There is also the fact that NO game actually supports it. It has been months since the RTX launch and not a single game with actual ray tracing is even available. Even the games that bragged about it during launch were released without RTX. Booooring. Not to mention it's mostly gimmicky effects that already look super convincing even with classic rasterization (but they intentionally dumbed it down to make the difference look more dramatic). So, there's that as well.

 

Wang is right on the ray tracing support, though. It has to be done across the board, otherwise game devs won't really see a reason to spend a lot of time on it when only a small percentage of players can experience it. Given how long it's taking for RTX, which is part of GameWorks and all the fancy stuff NVIDIA has in its package for easily sticking things into games, it's still taking ages. So it's clearly not that simple and quick to implement.


You know someone has hit an all-time low when you read posts saying they really hope that AMD doesn't compete with a new technology and that it dies because they don't want Nvidia to have it. And that's not even to mention the haste with which people are ignoring how long it takes for ANY new tech to gain a footing and pretending that RTX is somehow different.

 

RTX and ray tracing are being taken up by game devs and render software as fast as, if not faster than, any new GPU tech in history. Anyone who claims otherwise has their head in the sand.



1 hour ago, mr moose said:

RTX and ray tracing are being taken up by game devs and render software as fast as, if not faster than, any new GPU tech in history. Anyone who claims otherwise has their head in the sand.

Faster than DX12, faster than DX10, and quicker than the time it took games to stop shipping DX9 as their highest supported mode.


13 hours ago, DrMacintosh said:

Maybe by the time AMD has Ray Tracing there will be a single game that supports it xD 

They have software for handling ray tracing (see the open-source Radeon Rays). The real question is when they want to release native hardware support for it.



9 hours ago, mr moose said:

Given the abominable landslide that is GCN, it wouldn't surprise me if AMD have been working on it for a long time but had squat worth talking about, let alone worth showing. I imagine getting Turing performance out of GCN would be like getting Threadripper performance out of Tegra.

I think you have your assumptions reversed when it comes to GPU tech. AMD is always the one sticking tech into their devices too early. The problem with the designs around GCN is that it was already one-third a compute device. As a few have noted, Nvidia adding Tensor and then RT cores has actually turned their base design into a significant clone of GCN. Also note: how big are the Turing parts? How expensive? That's what happens when you dedicate a quarter of your GPU die to something not useful in games.

 

Nvidia is actually the slow, conservative company when it comes to GPU tech, which is why they spend so much money on software development. Nvidia pays top dollar for a huge GPU driver & software staff. That's the actual difference between the companies at a technical level.

 

What Nvidia did this time around, given the node timing, was bring in tech they'd been studying for years once they found a professional use case for it. It's not as if ray tracing is new, or hasn't been the "tech of the future" for 20+ years. But like EUV, self-driving cars or carbon nanotubes, there are a lot of supporting systems needed before it becomes more than a gimmick. (VR is in the same space. It needs about 3 more years of screen development, then another 2-3 of cost decreases, before it becomes economically viable for the mass market.)

 

In the case of RenderMan, Pixar's core technology for decades, those RT cores were probably put in there in large part just for Pixar and related video production work. Those companies are big customers for Nvidia, so it's worth it to them to dedicate customized space within the GPU dies to very specific types of workloads. Obviously more than just RenderMan can leverage the RT cores, but that's the actual direction Nvidia is going, because that's what their customers need. Nvidia can really only support 2 non-gaming GPU dies at a time at 16nm. At 7nm, that probably drops to 1 die until pretty late into the node. (Volta's replacement on 7nm drops mid-2019.)


6 hours ago, mr moose said:

It is already in use in RenderMan, and there are about 10 or 11 upcoming titles with RT support. I don't think that is quite the mountain people keep saying it is.

 

https://www.digitaltrends.com/computing/games-support-nvidia-ray-tracing/

 

With all these titles slated for this year or next, it doesn't look like there is any delay in its uptake.

Well, we should already have it in Shadow of the Tomb Raider or Battlefield V, but we all know that hasn't come along yet. For the other games, we don't really know if they'll have it at launch or later. If so, how much later?

That's the first thing.

The second: Nvidia's current implementation is somewhat irrelevant anyway; it's ultra taxing but does not provide many effects. Hence why I don't think it'll catch on as is.

4 hours ago, M.Yurizaki said:

ArsTechnica did, albeit with UE's tech demo: https://arstechnica.com/gadgets/2018/09/nvidia-rtx-2080-and-2080-ti-review-a-tale-of-two-very-expensive-graphics-cards/

 

Also, older DirectX versions didn't have games that really used them for months after launch.

 

I would argue neither were a lot of other new GPU technologies when they first launched.

 

What is this efficient path guiding algorithm?

 

They won't be. Given the performance gap between Pascal and Turing, it would need at least three generations of improvements if GPUs continue to progress the way they have been.

Basically, path-guiding algorithms are algorithms that make the bouncing less random in order to capture where most of the radiance actually comes from. The way it works is that, while you render, the renderer teaches itself to find the rays that matter most in the scene. This allows drastically faster convergence at an equivalent compute budget. There has been some early work on it that I think should be implemented and refined in tandem with an architecture made for it, so that we can actually expect interesting effects that are impossible otherwise, such as good reflections and refractions with multiple bounces, caustics, and much faster shadows. I can send you references to publications on the matter if you want.
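If it helps, here is a deliberately tiny sketch of that learning loop (C++). It collapses the spatial-directional structures real path guiders use (e.g. the SD-trees in "Practical Path Guiding") down to a single histogram over a fake 1D "direction", and the scene and radiance values are invented, so treat it purely as an illustration of "reinforce the directions that returned radiance, then sample proportionally to what you've learned":

```cpp
// Toy sketch of the path-guiding idea: learn, while rendering, a sampling
// distribution over outgoing directions so later rays are sent where radiance
// actually comes from. Real systems use spatial-directional trees; this
// collapses everything to one histogram over a fake 1D "direction" in [0,1).
// All scene numbers are invented for illustration.
#include <cstdio>
#include <random>
#include <vector>

// Hypothetical scene: most radiance arrives from a narrow bright strip.
double incidentRadiance(double dir) {
    return (dir > 0.70 && dir < 0.80) ? 10.0 : 0.1;
}

int main() {
    const int    bins    = 32;
    const int    samples = 20000;
    const double explore = 0.25;   // defensive mixture: keep sampling dim regions too
    std::vector<double> weight(bins, 1.0);             // learned guiding histogram
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    double estimate = 0.0;
    for (int i = 0; i < samples; ++i) {
        double total = 0.0;
        for (double w : weight) total += w;

        // Sample a bin: uniformly with probability `explore`, otherwise
        // proportionally to the weights learned so far.
        int bin = bins - 1;
        if (uni(rng) < explore) {
            bin = static_cast<int>(uni(rng) * bins);
        } else {
            double r = uni(rng) * total, acc = 0.0;
            for (int b = 0; b < bins; ++b) { acc += weight[b]; if (r <= acc) { bin = b; break; } }
        }
        double dir = (bin + uni(rng)) / bins;          // jitter within the bin

        // Density of the mixture we actually sampled from, evaluated at `dir`.
        double pdf = explore * 1.0 + (1.0 - explore) * (weight[bin] * bins / total);
        double L   = incidentRadiance(dir);
        estimate  += L / pdf;                          // importance-sampled MC estimate
        weight[bin] += L;                              // reinforce bright directions
    }
    std::printf("MC estimate of integrated radiance: %.3f (analytic value: 1.090)\n",
                estimate / samples);
    std::printf("learned weight, bright bin vs dim bin: %.1f vs %.1f\n", weight[24], weight[4]);
    return 0;
}
```

Run it and the learned weight of the bright bins ends up orders of magnitude above the dim ones, which is exactly the behaviour that buys the faster convergence.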


26 minutes ago, AluminiumTech said:

They have software for handling ray tracing (see the open-source Radeon Rays). The real question is when they want to release native hardware support for it.

The dirty detail with ray tracing is that it's not really going to improve visual quality in gaming until you get to the higher resolutions, which means it isn't 1080p gaming but 4K gaming we're talking about. Given how the 2080 Ti is handling 1080p ray tracing, there probably needs to be a roughly 10x performance increase in the technology before it's really viable for 4K gaming.
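For a rough sanity check on that 10x figure, the pixel arithmetic alone gets you most of the way there; the rays-per-pixel and frame-rate targets in this little sketch are assumptions for illustration, not measurements:

```cpp
// Rough pixel/ray-budget arithmetic behind the "roughly 10x" claim.
// Rays per pixel and frame-rate targets are assumptions for illustration only.
#include <cstdio>

int main() {
    const long long px1080 = 1920LL * 1080;  // 2,073,600 pixels
    const long long px4k   = 3840LL * 2160;  // 8,294,400 pixels (4x as many)
    const int raysPerPixel = 2;              // hypothetical hybrid-RT budget
    const int fps          = 60;

    const double rays1080 = static_cast<double>(px1080) * raysPerPixel * fps;
    const double rays4k   = static_cast<double>(px4k)   * raysPerPixel * fps;
    std::printf("1080p60: %.2f Grays/s   4K60: %.2f Grays/s   ratio: %.1fx\n",
                rays1080 / 1e9, rays4k / 1e9, rays4k / rays1080);
    // Resolution alone is a 4x jump; add headroom for more rays per pixel or
    // higher frame rates and a ~10x target stops looking unreasonable.
    return 0;
}
```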

 

There's a reason ray tracing was expected to be tech for 2027 and beyond, given the improvement trends in GPU capability. Though, given how far into MCM AMD has gotten, shouldn't the assumption be that they'll just add a specialized chiplet when it becomes important?


2 minutes ago, Taf the Ghost said:

The dirty detail with ray tracing is that it's not really going to improve visual quality in gaming until you get to the higher resolutions, which means it isn't 1080p gaming but 4K gaming we're talking about. Given how the 2080 Ti is handling 1080p ray tracing, there probably needs to be a roughly 10x performance increase in the technology before it's really viable for 4K gaming.

 

There's a reason ray tracing was expected to be tech for 2027 and beyond, given the improvement trends in GPU capability. Though, given how far into MCM AMD has gotten, shouldn't the assumption be that they'll just add a specialized chiplet when it becomes important?

This was something I was thinking about: once AMD gets to the point of having super modular GPUs, they can kind of pick and choose compute, RT, and precision. If they need to maximise for graphics workloads, they can; need a bunch of high-precision compute, sure; need lots and lots of low precision for inference workloads, done! We live in exciting times.


2 minutes ago, Taf the Ghost said:

 

It can improve image quality even at small resolutions, allowing for refractions, reflections, soft shadows, caustics, natural light bleeding, more natural volumetric effects, and so on. The issue is that the way it's done now is inherently wrong and focuses only on small effects that aren't that impressive.


14 minutes ago, laminutederire said:

It can improve image quality even at small resolutions, allowing for refractions, reflections, soft shadows, caustics, natural light bleeding, more natural volumetric effects, and so on. The issue is that the way it's done now is inherently wrong and focuses only on small effects that aren't that impressive.

Correct. Visual quality in the interim is about lighting engines & texture quality. That's why a lot of the big visual bumps have happened recently and why slightly older cards seem to be crippled in newer games, as those textures just require extremely high memory bandwidth.


54 minutes ago, Ben Quigley said:

This was something I was thinking about: once AMD gets to the point of having super modular GPUs, they can kind of pick and choose compute, RT, and precision. If they need to maximise for graphics workloads, they can; need a bunch of high-precision compute, sure; need lots and lots of low precision for inference workloads, done! We live in exciting times.

AMD's GPU division is functionally the industry's semi-custom GPU solution at this point, which is why, when Epyc was confirmed to be an MCM design, it was pretty obvious where the GPU division was going to head. However, full chiplet/controller dies showed up a lot earlier than everyone was expecting. The same approach will work for GPUs as it does on CPUs. (Frankly, it should be easier on GPUs, given their structure.)

 

Though, now with chiplets established on the CPU side, it does explain the rumblings that AMD's GPUs are going back to VLIW instructions. It's actually "GCN Forever" at AMD, as they're going to be running hybrid devices forever; it's just that rather than everything being on the same GPU die, specialized hardware will sit on chiplets for high-end compute customers. Though it's probably 2022/2023 before anything like that will be ready.

 

Still, given what they announced with the new Radeon Instinct MI60 parts, they could already put two of those on one board and wire them together. Between the high-bandwidth bridge, IF over PCIe and auto-configuration into a "GPU Ring", the only thing missing is a reason to do it. (Power & cooling would be the drawbacks.) Zen 1 was a stepping stone from monolithic to first-level MCM. Zen 2 is the next step, into on-package chiplets. Zen 4, it seems, will be on-substrate chiplets. That means Vega, with the first IF links, was the first baby step towards chiplets for AMD's GPU division. Vega 20 was the next small step, laying the entire technology base at the production level. Navi isn't a compute design, so it's unlikely to have anything very interesting changing. The next step should come with AMD's next true architecture.

 

It'll be interesting to see if Nvidia rushes there first, as they have the money to throw at that, which could make the early 2020s a fascinating time for GPUs.


It does make sense to wait until there's proper performance for it, because extra fidelity at a much lower framerate, and even resolution, is not attractive. Especially when that's on a flagship, too.



4 hours ago, Taf the Ghost said:

(VR is in the same space. It needs about 3 more years of screen development, then another 2-3 of cost decreases, before it becomes economically viable for the mass market.)

Even then, until they get past the usage limit most people have before they get nauseous, it will continue to be limited.


4 minutes ago, ravenshrike said:

Even then, until they get past the usage limit most people have before they get nauseous, it will continue to be limited.

That's what the screen development is needed for.


By the time AMD has ray tracing on all of their cards, from low end to high end, Nvidia will be so far ahead that there won't be any competition from AMD. AMD should probably give up on their GPUs and just focus on their CPUs, and let Intel compete with Nvidia. Can't wait to see what Intel has up its sleeve with their GPU project, Arctic Sound.


The position of AMD on implementing hardware RT support is quite logical. Since the D3D12 Raytracing Fallback Layer will be deprecated soon, the device driver will handle exposing the RT capabilities, whether through compute emulation or dedicated hardware (RTX). Both options will qualify supported GPUs as DXR-capable in the future. The Fallback Layer in the API was there simply to aid initial development.

If the cost-benefit analysis permits, AMD might include some HW acceleration in future designs.
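To make the "the driver decides what counts as DXR-capable" point concrete, this is roughly the capability check an application would run (C++, Windows 10 SDK 1809 or newer); whether the reported tier is backed by RT cores or by a compute-based path is invisible at this level. A minimal sketch, with error handling mostly omitted:

```cpp
// Sketch: ask the D3D12 driver whether DXR is exposed on this device. Whether
// the reported tier is backed by RT cores or by a compute path is the driver's
// business, not the application's. Windows-only; error handling trimmed.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5)))) {
        const bool dxr = options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
        std::printf("DXR exposed by this device/driver: %s\n", dxr ? "yes" : "no");
    } else {
        std::printf("OPTIONS5 query not supported (older runtime/SDK).\n");
    }
    return 0;
}
```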


6 hours ago, Taf the Ghost said:

Nvidia is actually the slow, conservative company when it comes to GPU tech, which is why they spend so much money on software development. Nvidia pays top dollar for a huge GPU driver & software staff. That's the actual difference between the companies at a technical level.

Eh? NVIDIA has pushed a lot to get GPU technology out there first, to consumers and professionals alike. The only reason they would also spend a lot of money on software is that that's how you make your hardware useful in the first place.

 

EDIT: Basically, if an application developer spends too much time in system software land to figure out how to make the hardware do things, the hardware company failed to do its job.


11 minutes ago, M.Yurizaki said:

Eh? NVIDIA has pushed a lot to get GPU technology out there first, to consumers and professionals alike. The only reason they would also spend a lot of money on software is that that's how you make your hardware useful in the first place.

 

EDIT: Basically, if an application developer spends too much time in system software land to figure out how to make the hardware do things, the hardware company failed to do its job.

On the hardware side, Nvidia has normally been second to a node shrink, their tech has generally not adopted new standards quickly, and they've generally kept the lower tier of the market lagging in technology. On the software side of things, they've spent extremely large amounts of money creating an ecosystem that finally started paying off in 2016. There's a reason I differentiated the two aspects of the company.

 

The GPU hardware side of things works in the juggernaut Silicon Valley approach of taking regular steps and using marketing to patch up the problems, while also doing underhanded or highly anti-consumer things to keep their position; i.e., it's the part of the company that is very much like Intel. The software side of Nvidia is something like the "Home for Wayward Software Developers", given just how many custom systems they roll out to act as middleware for customers. You can also get an idea of which chunks of middleware are clearly directed by the other side of the company from how badly they generally work. It really doesn't look like GameWorks gets a lot of love from the development staff.


9 hours ago, Taf the Ghost said:

I think you have your assumptions reversed when it comes to GPU tech. AMD is always the one sticking tech into their devices too early. The problem with the designs around GCN is that it was already one-third a compute device. As a few have noted, Nvidia adding Tensor and then RT cores has actually turned their base design into a significant clone of GCN. Also note: how big are the Turing parts? How expensive? That's what happens when you dedicate a quarter of your GPU die to something not useful in games.

And yet here is AMD, essentially claiming they have nothing to compete with Nvidia while admitting RT will be a thing. My point is not an assumption; it's that GCN is dead and AMD have had almost a decade to prove otherwise. Instead, where are they up to? A rehash of the 580?

9 hours ago, Taf the Ghost said:

Nvidia is actually the slow, conservative company when it comes to GPU tech, which is why they spend so much money on software development. Nvidia pays top dollar for a huge GPU driver & software staff. That's the actual difference between the companies at a technical level.

 

What Nvidia did this time around, given the node timing, was bring in tech they'd been studying for years once they found a professional use case for it. It's not as if ray tracing is new, or hasn't been the "tech of the future" for 20+ years. But like EUV, self-driving cars or carbon nanotubes, there are a lot of supporting systems needed before it becomes more than a gimmick. (VR is in the same space. It needs about 3 more years of screen development, then another 2-3 of cost decreases, before it becomes economically viable for the mass market.)

 

In the case of RenderMan, Pixar's core technology for decades, those RT cores were probably put in there in large part just for Pixar and related video production work. Those companies are big customers for Nvidia, so it's worth it to them to dedicate customized space within the GPU dies to very specific types of workloads. Obviously more than just RenderMan can leverage the RT cores, but that's the actual direction Nvidia is going, because that's what their customers need. Nvidia can really only support 2 non-gaming GPU dies at a time at 16nm. At 7nm, that probably drops to 1 die until pretty late into the node. (Volta's replacement on 7nm drops mid-2019.)

Regardless of why you think Nvidia are where they are, the fact is it's an observation of the past, not an assumption: Nvidia are here, now, with a tech that is being taken up very fast and that is getting the industry excited.

 

8 hours ago, laminutederire said:

Well, we should already have it in Shadow of the Tomb Raider or Battlefield V, but we all know that hasn't come along yet. For the other games, we don't really know if they'll have it at launch or later. If so, how much later?

Even if they don't come out till next year, it's still faster uptake than any other new tech. What's your point supposed to be?

8 hours ago, laminutederire said:

That's the first thing.

The second: Nvidia's current implementation is somewhat irrelevant anyway; it's ultra taxing but does not provide many effects. Hence why I don't think it'll catch on as is.

Calling it taxing and irrelevant doesn't actually make it so. Gotta ask: why is it so important to you that Nvidia not be successful, or that RTX not be what it is? You seem to be arguing against the evidence.

 

 



3 hours ago, Taf the Ghost said:

On the hardware side, Nvidia has normally been second to a node shrink, their tech has generally not adopted new standards quickly, and they've generally kept the lower tier of the market lagging in technology.

Node size doesn't mean anything if the product delivers the results. I'll give you the standards part, given that NVIDIA likes to be proprietary, but CUDA is still really big in the HPC market and they've been pushing their RT API for both DirectX and Vulkan.

 

As far as the lower tier goes, AMD/ATI is also guilty of this. I'd argue AMD is lagging even further in bringing current-gen tech to lower tiers, because the desktop GeForce 10 series is the first time in a long while that any discrete GPU manufacturer has used a current architecture across the board. AMD is still peddling first-gen GCN chips for bottom-tier parts. Even in NVIDIA's mobile space, you have to go down to the "why is there even a dGPU on this thing?" tier before hitting a last-gen part.

 

But otherwise, NVIDIA has done a lot of innovative things in the past, and a lot of those succeeded, just not in the market we care about.

Quote

The GPU hardware side of things works in the juggernaut Silicon Valley approach of taking regular steps and using marketing to patch up the problems, while also doing underhanded or highly anti-consumer things to keep their position; i.e., it's the part of the company that is very much like Intel.

And at the same time I ask: what does AMD have to offer? GameWorks doesn't just bring a suite of software libraries; it brings NVIDIA's direct help along with test systems. The most I gathered was that AMD was offering help optimizing code at a general level rather than specific optimizations for their hardware. I mean, it's great to get help across the board, but as a developer I would've preferred to learn how to work with the hardware specifically.

 

AMD can complain all they want, but if they don't have something competitive they can offer in return (they sort of do, I guess?), then I can't take their complaints seriously. It's like the recent kerfuffle with Mercedes using a new wheel design in F1 and the other teams crying foul because they see it as an unfair competitive edge.

 

Quote

The software side of Nvidia is something like the "Home for Wayward Software Developers", given just how many custom systems they roll out to act as middleware for customers. You can also get an idea of which chunks of middleware are clearly directed by the other side of the company from how badly they generally work. It really doesn't look like GameWorks gets a lot of love from the development staff.

If only because NVIDIA is trying to keep a tight ship about it. Though I've heard they opened up after receiving enough complaints.

