AMD announces latest step to help mature HSA; partners with Microsoft

nicehat

And yet they're stupid enough to give their edge away (giving Mantle to Khronos).

This is something good for the industry, which we should be celebrating. Do you want proprietary wars between Nvidia and AMD?

 

Mantle can continue to exist as an experimental API where they can try out new features quickly, like a proof of concept.

It already proved that CPUs in games can be used much more efficiently and that CPU bottlenecks can be reduced; it achieved better multithreading. And now OpenGL and DirectX are following that lead. I am not claiming that Microsoft started making DirectX 12 as a response to Mantle; I'm sure they would have made it anyway. But it's obvious that many design choices were heavily influenced by Mantle in a positive way. It was an important step, considering GPUs are advancing much faster than CPUs. So everyone benefits: consumers, Intel, Nvidia, and AMD.
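The CPU-overhead point can be illustrated with a toy model: a fixed per-API-call driver cost dominates when thousands of small draw calls are submitted one at a time, and batching them into command buffers (which thin APIs like Mantle encourage) amortizes that cost. This is a sketch, not real driver behavior; all numbers are invented for illustration.

```python
# Toy model of CPU-side submission cost. Both constants are made-up
# microsecond figures, not measurements of any real driver.
DRIVER_OVERHEAD_US = 10   # hypothetical fixed CPU cost per API call
PREP_COST_US = 2          # hypothetical CPU cost to prepare one draw

def frame_cpu_time_us(draws: int, draws_per_batch: int) -> int:
    """CPU time to submit one frame, in microseconds."""
    batches = -(-draws // draws_per_batch)  # ceiling division
    return batches * DRIVER_OVERHEAD_US + draws * PREP_COST_US

naive   = frame_cpu_time_us(10_000, 1)    # one API call per draw
batched = frame_cpu_time_us(10_000, 100)  # 100 draws per command buffer

print(naive, batched)  # 120000 21000
```

In this toy model, batching cuts the frame's CPU submission time by more than 5x, which is the flavor of gain the "less CPU-bound" arguments in this thread are about.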

To quote Gaben himself: "it's not a zero-sum game". :)

Boring, just like Mantle, FreeSync, TressFX, and other wastes of R&D (maybe not). If it's not open and used by Intel/Nvidia and other vendors, it doesn't become a standard and is therefore useless.

People shouldn't be buying APUs for compute in the first place. I know HSA can be done on dedicated CPU/GPU systems, but it's not like AMD/Intel/Nvidia are ever going to cooperate to achieve cross-hardware compatibility.

Also, you first have to go big in order to go small. I mean, this should first happen on mainstream/high-end desktop and server dedicated hardware, and then, if the benefits of HSA are real, be put into mobile devices and other products. All AMD has been doing with Mantle and now HSA is for their APUs; they still believe "The Future is Fusion" (aka the APU). It's not, except for mobile SoCs.

Hence boring.

There will never be a dedicated "experimental API".

Generally you want as few APIs as possible (although the opposite is happening), since then you can have the experimental features combined with already existing features.

If they used a different API, they could not be 100% sure the feature would be compatible with the existing API.

What happened in reality is that everyone is making a low-level API (even Apple joined). This will be a major issue until the weaker APIs have been sorted away.

Keep as few standard APIs as possible, so you can stay compatible with as much as possible.

Mantle is more or less dead; however, AMD is trying to regain control by offering support to DX12 and OpenGL.

In the end OpenGL will win anyway; it is far superior to the other APIs. It offers almost everything that has been asked for to this day.

It is not a high-level API, it is great for GPGPU, it works with almost all hardware, and it is portable across multiple OSes.

Good. Now let's hope that devs are actually going to use it.

LTT's unofficial Windows activation expert.
 

Also, you first have to go big in order to go small. I mean, this should first happen on mainstream/high-end desktop and server dedicated hardware, and then, if the benefits of HSA are real, be put into mobile devices and other products. All AMD has been doing with Mantle and now HSA is for their APUs; they still believe "The Future is Fusion" (aka the APU). It's not, except for mobile SoCs.

Hence boring.

The dedicated GPU market is shrinking. The more integrated GPUs advance, the less need there is for a dedicated GPU.

It is all about integration.

AMD, Intel, and Nvidia all know this. This is one of the reasons why Intel is not entering the dedicated GPU market.

Nvidia is trying to force itself into other markets, which really hasn't been very successful; however, they will keep trying.

This is not something that will happen within 10 years; however, it will happen. In the future, there will not be a dedicated gaming GPU brand.

impress

Current Build + Setup

AMD Ryzen 7 5700X | GIGABYTE B550 Aorus Pro v2 | CORSAIR Dominator Platinum 16gb 3600Mhz | GIGABYTE RTX 3070 AORUS MASTER OC 8 GB | NZXT H510 Elite | 2TB Seagate Barracuda 7200RPM | ADATA XPG GAMMIX S7 512GB M.2-2280 NVME | Corsair RM850 80+ Gold Modular PSU | NZXT Kraken X63 | Harman Kardon Soundstick 4 | Koorui 27E1Q

The dedicated GPU market is shrinking. The more integrated GPUs advance, the less need there is for a dedicated GPU.

It is all about integration.

AMD, Intel, and Nvidia all know this. This is one of the reasons why Intel is not entering the dedicated GPU market.

Nvidia is trying to force itself into other markets, which really hasn't been very successful; however, they will keep trying.

This is not something that will happen within 10 years; however, it will happen. In the future, there will not be a dedicated gaming GPU brand.

A future where the CPU and GPU require the same cooling solution sounds like a bad one.

Anyone who has a sister hates the fact that his sister isn't Kasugano Sora.
Anyone who does not have a sister hates the fact that Kasugano Sora isn't his sister.
I'm not insulting anyone; I'm just being condescending. There is a difference, you see...

A future where the CPU and GPU require the same cooling solution sounds like a bad one.

I do believe we will have invented a better cooling solution by that point.
I doubt they'd be pricks enough to ever do that, as I'm sure that would be illegal under anti-competition law, and AMD can see everything they do to the source code. Plus, what was the other option for Mantle: stay closed and never get adopted?

 

I'm sure AMD does get something out of it, namely having their CPUs sell more: the new API leads to programs being less CPU-bound, meaning people won't need really strong CPUs, and since AMD is pretty strong in the lower-end CPU market, more people will find AMD CPUs attractive.

I don't think Khronos implementing Mantle-like features into OpenGL and OpenCL would be anti-competitive. I mean, the code was given to them to do whatever they want with it, and can non-profit groups that release free and open-source products really be anti-competitive?

 

I think you are correct on the last point, though. AMD will probably benefit from programs becoming less CPU-bound. All the positive PR they have gotten is also good for them.

Mantle is more or less dead; however, AMD is trying to regain control by offering support to DX12 and OpenGL.

AMD cannot control DX12 or OpenGL. The fact that they are showing their code does not mean that Khronos and Microsoft will copy-paste chunks of it.

What they may do is take certain design principles and lessons learned... but it will be their own implementation.

 

 

I think you are correct on the last point, though. AMD will probably benefit from programs becoming less CPU-bound. All the positive PR they have gotten is also good for them.

They deserve it. Mantle has pushed the industry forward, just like Nvidia's G-Sync. Even though neither of them will become an industry standard, both have been good innovations that have spurred other developments.

A future where the CPU and GPU require the same cooling solution sounds like a bad one.

Not really. Considering that CPU cores can become less complex (and thus run cooler) while GPU cores take over some tasks, everything gets sped up while becoming EASIER to cool.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

AMD cannot control DX12 or OpenGL. The fact that they are showing their code does not mean that Khronos and Microsoft will copy-paste chunks of it.

What they may do is take certain design principles and lessons learned... but it will be their own implementation.

That is what I meant by "support".
I really hope AMD manages to stay afloat with their innovation in HSA.  It's truly driving technology forward, IMHO.

QUOTE ME OR I PROBABLY WON'T SEE YOUR RESPONSE 

My Setup:

 

Desktop

Spoiler

CPU: Ryzen 9 3900X  CPU Cooler: Noctua NH-D15  Motherboard: Asus Prime X370-PRO  RAM: 32GB Corsair Vengeance LPX DDR4 @3200MHz  GPU: EVGA RTX 2080 FTW3 ULTRA (+50 core +400 memory)  Storage: 1050GB Crucial MX300, 1TB Crucial MX500  PSU: EVGA Supernova 750 P2  Chassis: NZXT Noctis 450 White/Blue OS: Windows 10 Professional  Displays: Asus MG279Q FreeSync OC, LG 27GL850-B

 

Main Laptop:

Spoiler

Laptop: Sager NP 8678-S  CPU: Intel Core i7 6820HK @ 2.7GHz  RAM: 32GB DDR4 @ 2133MHz  GPU: GTX 980m 8GB  Storage: 250GB Samsung 850 EVO M.2 + 1TB Samsung 850 Pro + 1TB 7200RPM HGST HDD  OS: Windows 10 Pro  Chassis: Clevo P670RG  Audio: HyperX Cloud II Gunmetal, Audio Technica ATH-M50s, JBL Creature II

 

Thinkpad T420:

Spoiler

CPU: i5 2520M  RAM: 8GB DDR3  Storage: 275GB Crucial MX30

 

You do realize how closely MS works with Nvidia and Intel on GPU solutions, right? HSA will be in Nvidia's war chest soon enough. Guess who works with Khronos? Nvidia, Intel, and pretty much every heavy hitter in the industry. Khronos will let this loose soon enough, and EVERY SINGLE MEMBER will get a great look at it.

AMD isn't getting a jump on anyone. Intel has been moving toward this for ages, and AMD can barely make TDP-competitive chips anymore because they blew so much overpaying for ATI. AMD made all the wrong decisions when they held the lead, and Intel isn't going to give it back to them.

MS is as close to NVIDIA and Intel as it is to AMD. The same goes for Khronos.

But let me give you this perspective: instead of seeing this as the final dumb move from AMD that nailed Mantle, why not think of it as AMD managing to spread adoption of Mantle and Mantle-based APIs? It is yet to be seen how big a chunk of Mantle Khronos takes, and (in my opinion, and I'm not going to start this argument again) Direct3D has chunks of Mantle, judging from what was presented to developers and claimed by AMD (bold claims, I might add). Both Intel and NVIDIA asked for access, yet they were denied for AMD's own reasons. Most likely it was because Mantle is still in beta, and maybe AMD didn't want to give access directly to IHVs but rather through an organization such as Khronos (I don't know the legal benefits of that, but it does seem to be the right thing to do).

So if AMD brings a new hardware feature to market and some developers want to try it out, there won't be the issue of "Oh, but I have to optimize the engine all over again for the Mantle API, so I need money / have to sacrifice time / etc." They have a common base, and that is Mantle, so they'll do it with the minimum amount of implementation effort. That's one of the reasons cited for the traction Mantle is getting: many developers are getting their engines/games ready for the upcoming APIs by using Mantle.

AMD made a tool that will keep them ahead of the curve, and they spread it the right way, IMO. If they had kept it closed, it wouldn't have 75+ developers supporting it.

Now, are they going to take advantage of this? That is yet to be seen. If they don't, then you can say they are dumb.

AMD needs this maturation to happen to sell its APUs. Now, what's the best way to mature this tech? Keep it gated in, or make it a standard? Nvidia has nothing to do with HSA. They will be locked out of this market while AMD gets the jump on Intel, just like they did with the 64-bit standard. That's AMD's Hail Mary. If it doesn't work, they are screwed.

What do you mean, AMD gets a jump on Intel? Intel has already caught up to Kaveri with Skylake, putting them exactly one HSA generation behind Carrizo at the time both platforms launch in mid-2015. Intel wins, as usual.

You do realize how closely MS works with Nvidia and Intel on GPU solutions, right? HSA will be in Nvidia's war chest soon enough. Guess who works with Khronos? Nvidia, Intel, and pretty much every heavy hitter in the industry. Khronos will let this loose soon enough, and EVERY SINGLE MEMBER will get a great look at it.

AMD isn't getting a jump on anyone. Intel has been moving toward this for ages, and AMD can barely make TDP-competitive chips anymore because they blew so much overpaying for ATI. AMD made all the wrong decisions when they held the lead, and Intel isn't going to give it back to them.

 

Nvidia doesn't have anything to gain from HSA. They want to milk CUDA for everything it's worth, for as long as possible, since they invested so much into it. Do you think Nvidia is going to jump at the chance to drop everything for HSA? What about CUDA? What do they tell their customers? Eventually they may join the HSA Foundation, but it's the least desirable outcome for them, because it means relying on Intel or AMD components to realize heterogeneity (more variables outside their direct control). HSA also relies less on the raw horsepower of dedicated GPUs and takes a more balanced approach, which is not in Nvidia's interest, especially since HSA competes with certain Nvidia IP like dedicated PhysX cards. If anyone wants to harness the power of GPU data offloading, it's going to be Intel. Not only because their iGPU solutions are absolute shyte compared to AMD's APU offerings, but also because it fits the larger industry narrative of an eventual migration to SoCs, which is where everything is heading.

 

Again... paying too much for ATI... I thought you looked familiar smh

AMD FX-8350 @ 4.7Ghz when gaming | MSI 990FXA-GD80 v2 | Swiftech H220 | Sapphire Radeon HD 7950  +  XFX Radeon 7950 | 8 Gigs of Crucial Ballistix Tracers | 140 GB Raptor X | 1 TB WD Blue | 250 GB Samsung Pro SSD | 120 GB Samsung SSD | 750 Watt Antec HCG PSU | Corsair C70 Mil Green

Nvidia doesn't have anything to gain from HSA. They want to milk CUDA for everything it's worth, for as long as possible, since they invested so much into it. Do you think Nvidia is going to jump at the chance to drop everything for HSA? What about CUDA? What do they tell their customers? Eventually they may join the HSA Foundation, but it's the least desirable outcome for them, because it means relying on Intel or AMD components to realize heterogeneity (more variables outside their direct control). If anyone wants to harness the power of GPU data offloading, it's going to be Intel. Not only because their iGPU solutions are absolute shyte compared to AMD's APU offerings, but also because it fits the larger industry narrative of an eventual migration to SoCs, which is where everything is heading.

Again... paying too much for ATI... I thought you looked familiar smh

AMD did overpay for ATI, especially given that they first wanted to merge with Nvidia, but that aside, this is why the CUDA 6 standard includes unified memory, which will be made possible via onboard ARM or x86 cores in their GPUs. Nvidia will be the first of the great triad to die off, which is sad in a way, because it was the only GPU maker pushing power efficiency while still staying on par with or ahead of AMD.

Imagine if we had CUDA-based APUs :D CUDA would become the new industry standard; it has always been easier to program in than OpenCL.

Also, Intel's iGPUs are more powerful than AMD's for computation: Iris Pro 5200 is as strong as Kaveri at GPGPU compute, thanks to Intel's industry-leading floating-point units. Iris Pro 6200 brings a 2-teraflop iGPU to the field, well above Carrizo's 1.2, and at about half the performance of a top-end Tesla or FirePro compute card (~4.14 TFLOPS). With unified memory on Skylake, Intel can drop the eDRAM except from Xeons, which would benefit from the extra bandwidth in server environments, making its chips cost-competitive with Carrizo and more powerful.
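The unified-memory idea being argued about here can be made concrete with a toy analogy: with separate address spaces, the host must copy data to the device before a kernel runs and copy the result back afterward; with unified (shared) memory, both sides reference the same allocation. The sketch below is pure Python with no real GPU involved; the function names and the copy-counting are invented purely for illustration.

```python
# Toy analogy for discrete vs. unified memory. No actual GPU here;
# "device" is just another Python list, and we count element copies.

def compute_with_copies(host_data: list[int]) -> tuple[list[int], int]:
    """Discrete-GPU model: explicit host->device and device->host copies."""
    copied = 0
    device = list(host_data)                 # host -> device transfer
    copied += len(host_data)
    device = [x * 2 for x in device]         # the "kernel" runs on the device
    result = list(device)                    # device -> host transfer
    copied += len(device)
    return result, copied

def compute_unified(shared: list[int]) -> tuple[list[int], int]:
    """Unified-memory model: CPU and GPU touch the same buffer in place."""
    shared[:] = [x * 2 for x in shared]      # no explicit transfers needed
    return shared, 0

data = [1, 2, 3, 4]
out1, copies1 = compute_with_copies(data)
out2, copies2 = compute_unified(list(data))
print(out1 == out2, copies1, copies2)  # True 8 0
```

Both paths produce the same result; the difference the HSA/CUDA-6 discussion is about is the transfer traffic (8 element copies vs. 0 in this toy), which on real hardware also costs latency and programmer effort.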

AMD did overpay for ATI, especially given that they first wanted to merge with Nvidia, but that aside, this is why the CUDA 6 standard includes unified memory, which will be made possible via onboard ARM or x86 cores in their GPUs. Nvidia will be the first of the great triad to die off, which is sad in a way, because it was the only GPU maker pushing power efficiency while still staying on par with or ahead of AMD.

Imagine if we had CUDA-based APUs :D CUDA would become the new industry standard; it has always been easier to program in than OpenCL.

We kind of do have CUDA-based APUs: the Tegra K1.
AMD did overpay for ATI, especially given that they first wanted to merge with Nvidia, but that aside, this is why the CUDA 6 standard includes unified memory, which will be made possible via onboard ARM or x86 cores in their GPUs. Nvidia will be the first of the great triad to die off, which is sad in a way, because it was the only GPU maker pushing power efficiency while still staying on par with or ahead of AMD.

Imagine if we had CUDA-based APUs :D CUDA would become the new industry standard; it has always been easier to program in than OpenCL.

From what is said around the industry, NVIDIA had plans for x86 not so long ago (the Tegra K1, if I recall correctly), but the "almighty" Jen-Hsun Huang and his big mouth blew it. That man is going to burn NVIDIA to the ground. Their fortune is the great engineers they have, but that's not enough when you have that kind of CEO.

We kind of do have CUDA-based APUs: the Tegra K1.

Yeah, but those are ARM chips. What if we had them backed by AMD's x86 cores? AMD would be able to compete with Intel on TDP, where Nvidia has had the advantage for years and, with Maxwell, gains an even bigger lead over GCN.

From what is said around the industry, NVIDIA had plans for x86 not so long ago (the Tegra K1, if I recall correctly), but the "almighty" Jen-Hsun Huang and his big mouth blew it. That man is going to burn NVIDIA to the ground. Their fortune is the great engineers they have, but that's not enough when you have that kind of CEO.

It's arguable who's more incompetent: Ruiz of AMD or Huang of Nvidia. Ruiz could have merged with Nvidia and gained the war chest needed to go toe-to-toe with Intel, but his pride wouldn't let Huang be CEO. Instead he went to second-tier ATI and paid shareholders sticker price for a company worth about two-thirds of that. Both have made unbelievably stupid moves.

It's arguable who's more incompetent: Ruiz of AMD or Huang of Nvidia. Ruiz could have merged with Nvidia and gained the war chest needed to go toe-to-toe with Intel, but his pride wouldn't let Huang be CEO. Instead he went to second-tier ATI and paid shareholders sticker price for a company worth about two-thirds of that. Both have made unbelievably stupid moves.

I said Huang blew NVIDIA's chance to get near x86. I didn't say Ruiz is better or worse than Huang; I agree with you that both made really stupid moves.

I said Huang blew NVIDIA's chance to get near x86. I didn't say Ruiz is better or worse than Huang; I agree with you that both made really stupid moves.

Oh well, the decisions made by both AMD and Nvidia have doomed them to the dustbin, leaving only Intel within 40 years, unless ARM can become truly ubiquitous.

From what is said around the industry, NVIDIA had plans for x86 not so long ago (the Tegra K1, if I recall correctly), but the "almighty" Jen-Hsun Huang and his big mouth blew it. That man is going to burn NVIDIA to the ground. Their fortune is the great engineers they have, but that's not enough when you have that kind of CEO.

This is correct; however, Intel refused to give them a license.

The Tegra K1 was meant to be x86; they simply changed the decoders.

However, its performance will still be behind other x86 processors in most tasks, due to it being an in-order architecture that simulates out-of-order execution.
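The "in-order core simulating out-of-order execution" idea can be sketched with a toy: a software layer looks ahead in the instruction stream and issues any instruction whose inputs are ready, so independent work slips past a long-latency stall even though the hardware issues one instruction per cycle. This is only an illustrative model, not the actual translation/optimization scheme Nvidia's core uses; the instruction format and latencies are invented.

```python
# Toy software scheduler: one issue per cycle, a result becomes usable
# `latency` cycles after issue. Issue order can differ from program order.

def schedule(program):
    """program: list of (name, dest, srcs, latency). Returns issue order."""
    ready_at = {}                    # register -> cycle its value is ready
    pending = list(program)
    order, cycle = [], 0
    while pending:
        for i, (name, dest, srcs, lat) in enumerate(pending):
            # Issue the first pending instruction whose inputs are ready.
            if all(ready_at.get(s, float("inf")) <= cycle for s in srcs):
                ready_at[dest] = cycle + lat
                order.append(name)
                del pending[i]
                break
        # If nothing was ready, this cycle is simply a stall.
        cycle += 1
    return order

prog = [
    ("load",  "r1", [],     3),  # long-latency memory load
    ("use",   "r2", ["r1"], 1),  # consumer: must wait for the load
    ("indep", "r3", [],     1),  # independent arithmetic
]
print(schedule(prog))  # ['load', 'indep', 'use']
```

Strict in-order issue would run load, use, indep and stall on the dependency; here the independent instruction is hoisted into the load's shadow, which is the kind of win such schemes chase in software rather than with out-of-order hardware.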
