
CUDA reverse engineered to work on Intel and AMD.

zacRupnow
Just now, Majestic said:

lol, I'd not rule out Intel that easily. They've been gaining quite a bit of momentum with their iGPUs. I suspect those Iris Pro 580s will be quite competitive if placed in affordable CPU packages.

They won't be in any "affordable" package anytime soon. Intel has no interest in competing with AMD on the APU front. They know that they cannot beat AMD at price to performance (the "Crystal Well" eDRAM and the Iris Pro graphics themselves actually cost a fair bit to make).

 

I would be pleasantly shocked to see a Skylake + GT4e product below $225 USD.

 

To Intel, Iris Pro is a product aimed purely at those who NEED the best graphics possible in an SoC solution.

 

That being said... GT3e (14nm Broadwell Iris Pro) was about 20% faster than Kaveri (28nm custom GCN 1.1 R7 graphics)... but it cost literally over twice as much... So I wonder how Intel will behave, or if they will release a GT4e chip at all, considering that AMD can now have 14nm APUs with FinFET technology, allowing them to cram in A LOT more power whilst retaining reasonable TDP levels.


7 minutes ago, Prysin said:

I would be pleasantly shocked to see a Skylake + GT4e product below $225 USD.

Why $225? 

 

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i5-6500 3.2GHz Quad-Core Processor  ($189.99 @ SuperBiiz) 
Video Card: Asus GeForce GTX 750 Ti 2GB STRIX Video Card  ($122.98 @ Newegg) 
Total: $312.97
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-12 10:19 EST-0500

 

This is how much it would cost to get a 4C/4T Skylake CPU with a GPU as fast as the GT4e. Even if it's priced at $400, the fact that it won't require a case that can house a dGPU opens up quite a lot of possibilities and might be worth the money.

 

Of course, it's not proven to be on par with the 750 Ti; I'm just speculating.


8 minutes ago, Majestic said:

Why $225? 

 

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i5-6500 3.2GHz Quad-Core Processor  ($189.99 @ SuperBiiz) 
Video Card: Asus GeForce GTX 750 Ti 2GB STRIX Video Card  ($122.98 @ Newegg) 
Total: $312.97
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-12 10:19 EST-0500

 

This is how much it would cost to get a 4C/4T Skylake CPU with a GPU as fast as the GT4e. Even if it's priced at $400, the fact that it won't require a case that can house a dGPU opens up quite a lot of possibilities and might be worth the money.

 

Of course, it's not proven to be on par with the 750 Ti; I'm just speculating.

Honestly dude, look at the last time Intel released an Iris Pro solution... look at the price history in particular...

The reason I said $225 USD is that at that price it would be possible to fit within that $400-450 USD "budget" sweet spot while offering good performance. Above $225 USD, however, it would cost too much and be outperformed in both price to performance and raw performance by a CPU + dGPU setup. Even if said CPU is an Athlon 860K, if you grab a GTX 950 you would have greater performance at about the same price. You might even fit a boot SSD into said dGPU setup.

http://pcpartpicker.com/part/intel-cpu-bx80658i55675c

http://pcpartpicker.com/part/intel-cpu-bx80658i75775c


 


5 hours ago, AlwaysFSX said:

Yeah, CUDA requires a license for a reason; anyone using this in their product is immediately going to eat shit in court. AMD and Intel could get licenses for CUDA, but refuse to. Their problem, not Nvidia's.

Source?

3 hours ago, AlexGoesHigh said:

Intel has a CUDA license, but I think they only use it for Xeon Phi; not sure if you can get official CUDA support onto an iGPU. AMD, meanwhile, has actively refused to get a CUDA license. From what I read on the forum whenever CUDA comes up, there was a point where Nvidia offered them the license for free or ultra cheap or something like that, but AMD for whatever reason always refused. Recently, though, they announced a way for CUDA code to work on their GPUs. It's not exactly official CUDA; it's more like a converter of sorts that turns it into C++ or another non-proprietary language. Either way, CUDA in some form should work on AMD at some point.

I keep hearing this all over the forums, over and over again. Yet I've never seen a source for this.

 

Can either of you post a source that confirms that NVIDIA offered AMD a CUDA license, and AMD refused it?

2 hours ago, Megahurt said:

If I'm not mistaken, AMD got a CUDA license not too long ago, and I think the reason they didn't have support for it is that it would have been patent infringement, rather than them not being able to figure out how it works; the same applies to the rest of the proprietary Nvidia stuff.

This is incorrect. AMD did not get a CUDA license. AMD created translation software that takes raw (uncompiled) CUDA code and turns it into OpenCL code, which still needs to be compiled. This does not require a license, since the owners of said code (the programmers) are the ones using the software, and the code in question is uncompiled.
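To picture what that translation step does, here is a toy sketch in Python of a source-to-source rewriter. The name mapping is entirely made up for illustration ("portableMalloc" and friends are hypothetical stand-ins); AMD's actual converter is far more involved than a string substitution:

```python
import re

# Illustrative-only mapping from CUDA API names to hypothetical
# portable equivalents. Not AMD's actual tool or its real output.
API_MAP = {
    "cudaMalloc": "portableMalloc",
    "cudaMemcpy": "portableMemcpy",
    "cudaFree": "portableFree",
}

# One regex that matches any of the CUDA names above.
_PATTERN = re.compile("|".join(re.escape(name) for name in API_MAP))

def translate(cuda_source: str) -> str:
    """Rewrite CUDA API calls in raw (uncompiled) source text."""
    return _PATTERN.sub(lambda m: API_MAP[m.group(0)], cuda_source)

print(translate("cudaMalloc(&d_x, n * sizeof(float));"))
# portableMalloc(&d_x, n * sizeof(float));
```

The point being made above holds in this sketch too: the licensed, compiled CUDA runtime is never touched; only the programmer's own source text is rewritten before it is compiled.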

1 hour ago, DXMember said:

kewl I guess...

AMD announced CUDA support to come a while ago

but can't we just use ACE?

See above. AMD announced a program that would translate uncompiled code from one language to another. This is NOT CUDA support.


 


1 minute ago, dalekphalm said:

Source?

I keep hearing this all over the forums, over and over again. Yet I've never seen a source for this.

 

Can either of you post a source that confirms that NVIDIA offered AMD a CUDA license, and AMD refused it?

This is incorrect. AMD did not get a CUDA license. AMD created translation software that takes raw (uncompiled) CUDA code and turns it into OpenCL code, which still needs to be compiled. This does not require a license, since the owners of said code (the programmers) are the ones using the software, and the code in question is uncompiled.

See above. AMD announced a program that would translate uncompiled code from one language to another. This is NOT CUDA support.

Yeah, I too keep hearing that AMD was offered a CUDA license. Yet no real evidence of said offer has ever been posted. I've seen links to rumors about it, but nowhere have I heard that there were serious negotiations to allow such a thing.

 

Also, Nvidia likes to "lock things down" to their own platform. It doesn't seem like something Nvidia would do, to be honest.


25 minutes ago, Majestic said:

Why $225? 

 

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i5-6500 3.2GHz Quad-Core Processor  ($189.99 @ SuperBiiz) 
Video Card: Asus GeForce GTX 750 Ti 2GB STRIX Video Card  ($122.98 @ Newegg) 
Total: $312.97
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-03-12 10:19 EST-0500

 

This is how much it would cost to get a 4C/4T Skylake CPU with a GPU as fast as the GT4e. Even if it's priced at $400, the fact that it won't require a case that can house a dGPU opens up quite a lot of possibilities and might be worth the money.

 

Of course, it's not proven to be on par with the 750 Ti; I'm just speculating.

If my math adds up, the Skylake GT4e should be between 750 and 750 Ti levels of performance, maybe slightly faster than a reference 750 Ti (not counting what they can be overclocked to). Patrick's math claims it will be just as fast as a GTX 950 or slightly faster, which would be a huge difference from my math, given the 950's existing 20-30% lead over the 750 Ti (depending on which flavor of card).

 

8 minutes ago, Prysin said:

Honestly dude, look at the last time Intel released an Iris Pro solution... look at the price history in particular...

The reason I said $225 USD is that at that price it would be possible to fit within that $400-450 USD "budget" sweet spot while offering good performance. Above $225 USD, however, it would cost too much and be outperformed in both price to performance and raw performance by a CPU + dGPU setup. Even if said CPU is an Athlon 860K, if you grab a GTX 950 you would have greater performance at about the same price. You might even fit a boot SSD into said dGPU setup.

http://pcpartpicker.com/part/intel-cpu-bx80658i55675c

http://pcpartpicker.com/part/intel-cpu-bx80658i75775c


 

That price was for a couple of reasons. Bad yields were the first official rumor, but it was mostly supply and demand. They did not put any effort into trying to sell Broadwell to desktop consumers. They didn't even come out to brag about its release; they just silently rolled it out, and it ended up costing way over the MSRP listed on their Ark page. For the pricing to work, we need to see the SKUs launch with a ton of supply, but it seems Intel does not have much faith in these iGPUs for the desktop consumer crowd. Hence the lackluster supply/hype.

 

If you look at Iris Pro in laptops, it's a different story entirely: some of them are even cheaper than laptops with dGPUs while matching their performance. That is because Iris Pro is a better idea in mobile solutions (where space and power consumption mean a ton). The only people I see caring about Iris Pro in desktop scenarios are ITX enthusiasts and extreme budget gamers. Sadly, budget gamers are not going to buy i7s just to use an iGPU, and the i5s, even with lackluster iGPUs, cost more than an i3 + dGPU. It's a hard sell to budget gamers, and I am pretty sure Intel knows this. If they would just let Iris Pro land on the i3s, I am willing to bet we would see a new price-to-performance CPU king, and it would sell like hotcakes. They could price it between a normal i3 and i5 and give people an amazing budget option. That's just my take on it.



10 minutes ago, MageTank said:

If my math adds up, the Skylake GT4e should be between 750 and 750 Ti levels of performance, maybe slightly faster than a reference 750 Ti (not counting what they can be overclocked to). Patrick's math claims it will be just as fast as a GTX 950 or slightly faster, which would be a huge difference from my math, given the 950's existing 20-30% lead over the 750 Ti (depending on which flavor of card).

 

That price was for a couple of reasons. Bad yields were the first official rumor, but it was mostly supply and demand. They did not put any effort into trying to sell Broadwell to desktop consumers. They didn't even come out to brag about its release; they just silently rolled it out, and it ended up costing way over the MSRP listed on their Ark page. For the pricing to work, we need to see the SKUs launch with a ton of supply, but it seems Intel does not have much faith in these iGPUs for the desktop consumer crowd. Hence the lackluster supply/hype.

 

If you look at Iris Pro in laptops, it's a different story entirely: some of them are even cheaper than laptops with dGPUs while matching their performance. That is because Iris Pro is a better idea in mobile solutions (where space and power consumption mean a ton). The only people I see caring about Iris Pro in desktop scenarios are ITX enthusiasts and extreme budget gamers. Sadly, budget gamers are not going to buy i7s just to use an iGPU, and the i5s, even with lackluster iGPUs, cost more than an i3 + dGPU. It's a hard sell to budget gamers, and I am pretty sure Intel knows this. If they would just let Iris Pro land on the i3s, I am willing to bet we would see a new price-to-performance CPU king, and it would sell like hotcakes. They could price it between a normal i3 and i5 and give people an amazing budget option. That's just my take on it.

An i3 with Iris Pro would be great in any budget setting, given that DX12 can allow you to CF/SLI any dGPU with the iGPU. As a "use the iGPU until I can afford a dGPU" product, it makes sense.

 

However, I doubt we will ever see such a product, for the same reason we won't see an i3 K SKU...

 

And I still call bullshit on Patrick's 950-ish performance. 750 Ti, sure... current-gen Broadwell is around 10-15% slower than a 750, so expecting 30% more performance is a "best case" scenario, but one I am willing to believe is feasible. A +50% performance jump from Gen3 to Gen4 Intel HD Graphics, though? I do not see that happening anytime soon. Intel may have the money, but in terms of experience and knowledge about graphics, I seriously doubt their ability to push that much performance.

 

AMD, I could see it happening, given their 5 years of increasing performance without a node shrink, which in and of itself is an admirable feat. But I would not expect even AMD's engineers to increase raw performance by more than 35-40%, even if the stars align and they get a node shrink...

 

Either way, both AMD and Intel are kneecapped by memory bandwidth. Hyper-Threading and eDRAM do help alleviate memory bandwidth issues, but they don't remove them.

Even the fastest DDR4 (G.Skill's 4266 kits) in dual channel won't come close to a 750 Ti or 950 in bandwidth... Now, AMD has been teasing quad-channel memory support, which would be interesting, but I heavily doubt they would ever put such a beastly IMC and feature set on any budget-oriented APU setup.
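The bandwidth gap is easy to put numbers on. A quick sketch of the theoretical peaks (the memory speeds used below are approximate reference figures, so treat the exact values as assumptions):

```python
def peak_bandwidth_gbs(transfers_mts, bus_width_bits, channels=1):
    """Peak theoretical bandwidth in GB/s:
    MT/s * bytes per transfer * channels."""
    return transfers_mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

# Dual-channel DDR4 at roughly 4266 MT/s, 64-bit per channel
ddr4_dual = peak_bandwidth_gbs(4266, 64, channels=2)  # ~68 GB/s
# GTX 750 Ti: 128-bit GDDR5 at ~5400 MT/s effective   -> ~86 GB/s
gtx_750ti = peak_bandwidth_gbs(5400, 128)
# GTX 950: 128-bit GDDR5 at ~6600 MT/s effective      -> ~106 GB/s
gtx_950 = peak_bandwidth_gbs(6600, 128)
print(f"{ddr4_dual:.0f} vs {gtx_750ti:.0f} vs {gtx_950:.0f} GB/s")
```

Even the fastest dual-channel kit lands well short of both cards on paper, which is exactly the point being made.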


21 minutes ago, dalekphalm said:

Source?

I keep hearing this all over the forums, over and over again. Yet I've never seen a source for this.

 

Can either of you post a source that confirms that NVIDIA offered AMD a CUDA license, and AMD refused it?

I believe these two articles from 2008 are where people get the assumption that AMD 'refused' CUDA:

 

http://www.techpowerup.com/64787/radeon-physx-creator-nvidia-offered-to-help-us-expected-more-from-amd.html

http://www.extremetech.com/computing/82264-why-wont-ati-support-cuda-and-physx



 

For those who don't know, CUDA is about using the C++ programming language to execute complex mathematical equations on the GPU instead of the CPU. So now you bring it back to the CPU? Congrats?! Now it's shit performance again, as if you (as a dev) had done it on the CPU in the first place.
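For anyone unfamiliar with the model being described, here is a minimal Python sketch (illustration only, not real CUDA) of the idea: a kernel is written as the work of a single thread, and the GPU launches one instance per element in parallel. Running the same launch loop serially, as below, is exactly the slow CPU fallback being complained about:

```python
# One "thread's" work: compute a single element of a*x + y (SAXPY).
def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]

# On a GPU these n kernel instances would run in parallel across
# thousands of cores; here they run one after another on the CPU.
def launch(kernel, n, *args):
    for i in range(n):
        kernel(i, *args)

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [6.0, 9.0, 12.0]
```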


29 minutes ago, dalekphalm said:

Source?

https://developer.nvidia.com/cuda-toolkit

http://docs.nvidia.com/cuda/eula/#nvidia-cuda-toolkit-license-agreement

 

CUDA uses C/C++, so it is not architecture specific. C/C++ can run on AMD GPUs.

 

:| I see nothing that prevents one company from being allowed a license to software from another company.



5 minutes ago, GoodBytes said:


 

For those who don't know, CUDA is about using the C programming language to execute complex mathematical equations on the GPU instead of the CPU. So now you bring it back to the CPU? Congrats?!

I think they mean that Intel's HD Graphics can use it, not the CPU itself...


4 minutes ago, Prysin said:

I think they mean that Intel's HD Graphics can use it, not the CPU itself...

What is the point? Intel's GPU has very limited parallelism, and parallelism is the whole point of picking CUDA. As for AMD and Intel, they have OpenCL, which both support fully, and they have no interest in a CUDA-like solution, because neither focuses on making GPUs for research applications.


16 minutes ago, Pintend said:

Link 1: AMD refused to sponsor the devs who broke the CUDA code because they DID NOT WANT A LEGAL BATTLE. NVIDIA SAID THEY WOULD BE INTERESTED IN HELPING THE DEVELOPERS WHO BROKE THE CUDA CODE. Nowhere does it say AMD was formally offered a license.

This was simply a third party trying to modify CUDA drivers illegally, and they tried to get FREE SAMPLES from AMD/ATI... Free samples = sponsoring. Sponsoring an illegal action WILL LEAD TO LAWSUITS.

 

LINK #1 IS BULLSHIT. AMD SIMPLY REFUSED TO DO SOMETHING THAT WOULD GET THEM SUED!

 

Link 2:

Quote

Nvidia, he says, has not shown that they would be an open and truly collaborative partner when it comes to PhysX. The same goes for CUDA, for that matter.

Though he admits and agrees that they haven’t called up Nvidia on the phone to talk about supporting PhysX and CUDA, he says there are lots of opportunities for the companies to interact in this industry and Nvidia hasn’t exactly been very welcoming

Meaning, Nvidia would love for AMD to make a CUDA driver. But they cannot release it publicly. Why?

Because CUDA is owned by Nvidia, it is Nvidia's sole right to monetize it. If AMD were to make a CUDA driver, they would be unable to offer it to the public without getting a license from Nvidia that allows them to monetize that driver.

 

Of course Nvidia would be thrilled about milking AMD/ATI for money; Nvidia wants to milk ANYONE for money. However, at no point does it say that Nvidia was offering them a license that would allow them to monetize CUDA support, which is the whole issue. CUDA is open for everyone to write code for. However, software =/= a driver. Nvidia allows anyone to write software for CUDA, but modification of the CUDA drivers, owned and made by Nvidia Corp, is forbidden in the EULA/ToS. You can create your own custom driver, but you may not redistribute or sell said driver for money or other valuable objects or services.

 

LINK #2 IS BULLSHIT. IT SIMPLY SAYS THAT NVIDIA WOULD LOVE TO TALK ABOUT SUPPORT FOR AMD. IN NO WAY DOES IT SAY AMD WOULD GET IT FOR FREE, OR EVEN AT ALL. TALKING DOES NOT MEAN YOU CONDONE OR ALLOW SOMETHING. IT SIMPLY MEANS TALKING.

HOWEVER IN THE BUSINESS WORLD, THERE IS ONLY ONE LANGUAGE: MONEY.


What I take from this: AMD needs to come up with their own solution for executing C/C++ code on their GPUs, in a similar fashion to CUDA.



28 minutes ago, AlwaysFSX said:

https://developer.nvidia.com/cuda-toolkit

http://docs.nvidia.com/cuda/eula/#nvidia-cuda-toolkit-license-agreement

 

CUDA uses C/C++, so it is not architecture specific. C/C++ can run on AMD GPUs.

 

:| I see nothing that prevents one company from being allowed a license to software from another company.

See @Prysin's response below. I read through the links, and I agree with his conclusions. It's "hinted" at, but I think those statements are nothing more than PR speak from NVIDIA. No solid proof that AMD refused a license.

14 minutes ago, Prysin said:

Link 1: AMD refused to sponsor the devs who broke the CUDA code because they DID NOT WANT A LEGAL BATTLE. NVIDIA SAID THEY WOULD BE INTERESTED IN HELPING THE DEVELOPERS WHO BROKE THE CUDA CODE. Nowhere does it say AMD was formally offered a license.

This was simply a third party trying to modify CUDA drivers illegally, and they tried to get FREE SAMPLES from AMD/ATI... Free samples = sponsoring. Sponsoring an illegal action WILL LEAD TO LAWSUITS.

 

LINK #1 IS BULLSHIT. AMD SIMPLY REFUSED TO DO SOMETHING THAT WOULD GET THEM SUED!

 

Link 2:

Meaning, Nvidia would love for AMD to make a CUDA driver. But they cannot release it publicly. Why?

Because CUDA is owned by Nvidia, it is Nvidia's sole right to monetize it. If AMD were to make a CUDA driver, they would be unable to offer it to the public without getting a license from Nvidia that allows them to monetize that driver.

 

Of course Nvidia would be thrilled about milking AMD/ATI for money; Nvidia wants to milk ANYONE for money. However, at no point does it say that Nvidia was offering them a license that would allow them to monetize CUDA support, which is the whole issue. CUDA is open for everyone to write code for. However, software =/= a driver. Nvidia allows anyone to write software for CUDA, but modification of the CUDA drivers, owned and made by Nvidia Corp, is forbidden in the EULA/ToS. You can create your own custom driver, but you may not redistribute or sell said driver for money or other valuable objects or services.

 

LINK #2 IS BULLSHIT. IT SIMPLY SAYS THAT NVIDIA WOULD LOVE TO TALK ABOUT SUPPORT FOR AMD. IN NO WAY DOES IT SAY AMD WOULD GET IT FOR FREE, OR EVEN AT ALL. TALKING DOES NOT MEAN YOU CONDONE OR ALLOW SOMETHING. IT SIMPLY MEANS TALKING.

HOWEVER IN THE BUSINESS WORLD, THERE IS ONLY ONE LANGUAGE: MONEY.

 

6 minutes ago, Coaxialgamer said:

What i take from this : AMD needs to come up with their own solution to execute C/C++ code on their gpus , in a similar fashion to CUDA.

They do: it's called OpenCL. It's just that game devs typically don't use OpenCL for GPU compute; they typically use GameWorks because it's easy to implement and comes with NVIDIA engineers.


As far as I'm aware, OpenCL also uses C/C++.


 


1 minute ago, dalekphalm said:

 

 

They do: it's called OpenCL. It's just that game devs typically don't use OpenCL for GPU compute; they typically use GameWorks because it's easy to implement and comes with NVIDIA engineers.


As far as I'm aware, OpenCL also uses C/C++.

OpenCL kernels are mainly written in OpenCL C, a C99-based dialect (a C++ kernel language only arrived with OpenCL 2.1).

 


44 minutes ago, Prysin said:

An i3 with Iris Pro would be great in any budget setting, given that DX12 can allow you to CF/SLI any dGPU with the iGPU. As a "use the iGPU until I can afford a dGPU" product, it makes sense.

 

However, I doubt we will ever see such a product, for the same reason we won't see an i3 K SKU...

 

And I still call bullshit on Patrick's 950-ish performance. 750 Ti, sure... current-gen Broadwell is around 10-15% slower than a 750, so expecting 30% more performance is a "best case" scenario, but one I am willing to believe is feasible. A +50% performance jump from Gen3 to Gen4 Intel HD Graphics, though? I do not see that happening anytime soon. Intel may have the money, but in terms of experience and knowledge about graphics, I seriously doubt their ability to push that much performance.

 

AMD, I could see it happening, given their 5 years of increasing performance without a node shrink, which in and of itself is an admirable feat. But I would not expect even AMD's engineers to increase raw performance by more than 35-40%, even if the stars align and they get a node shrink...

 

Either way, both AMD and Intel are kneecapped by memory bandwidth. Hyper-Threading and eDRAM do help alleviate memory bandwidth issues, but they don't remove them.

Even the fastest DDR4 (G.Skill's 4266 kits) in dual channel won't come close to a 750 Ti or 950 in bandwidth... Now, AMD has been teasing quad-channel memory support, which would be interesting, but I heavily doubt they would ever put such a beastly IMC and feature set on any budget-oriented APU setup.

I thought AMD mentioned something about HBM APUs, didn't they? If that's the case, their memory bandwidth would most definitely be enough, and no longer the bottleneck. Intel, with their close ties to Micron, might be able to implement something similar, but I doubt they will want to. Again, they don't seem to care much about Iris Pro in the general desktop market. However, 720p netbooks with Iris Pro graphics: that is something I've always wanted to see.

 

In the desktop SoC race, I am certain AMD will come out ahead, simply because they actually put effort into their APUs. I think desktop Broadwell was just there to show people that Intel's iGPU solution works, and that it should be taken seriously in mobile solutions.



Wait a minute, didn't nVidia just announce CUDA could now be licensed?



6 minutes ago, MageTank said:

I thought AMD mentioned something about HBM APUs, didn't they? If that's the case, their memory bandwidth would most definitely be enough, and no longer the bottleneck. Intel, with their close ties to Micron, might be able to implement something similar, but I doubt they will want to. Again, they don't seem to care much about Iris Pro in the general desktop market. However, 720p netbooks with Iris Pro graphics: that is something I've always wanted to see.

 

In the desktop SoC race, I am certain AMD will come out ahead, simply because they actually put effort into their APUs. I think desktop Broadwell was just there to show people that Intel's iGPU solution works, and that it should be taken seriously in mobile solutions.

HBM won't work for APUs unless it is purely for the iGPU.

The access latency of GDDR/HBM and other "GPU" memory types is far too high for CPUs... Remember that AMD is banking on HSA-compliant APUs, meaning the CPU, the iGPU, and ultimately the dGPU as well can access and use the same resources.

The issue with this is how GPUs work. They don't care much about access time and latency; rather, they care about bandwidth. CPUs, on the other hand, are the complete opposite: they care almost exclusively about latency and access time, while bandwidth is less of an issue.
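That difference can be made concrete with rough numbers. The latencies and bandwidths below are ballpark assumptions for illustration, not measured figures:

```python
def transfer_ns(num_bytes, latency_ns, bandwidth_gbs):
    """Total fetch time: fixed access latency plus streaming time.
    bytes divided by GB/s conveniently comes out in nanoseconds."""
    return latency_ns + num_bytes / bandwidth_gbs

# CPU-style access: one 64-byte cache line from DDR4
# (assumed ~60 ns latency, ~68 GB/s) -> latency is ~98% of the cost
cache_line = transfer_ns(64, 60, 68)
# GPU-style access: stream 1 MiB from GDDR5
# (assumed ~250 ns latency, ~86 GB/s) -> latency is ~2% of the cost
stream = transfer_ns(1 << 20, 250, 86)
print(f"cache line: {cache_line:.1f} ns, 1 MiB stream: {stream:.0f} ns")
```

With small scattered reads the fixed latency dominates completely, while with large streaming reads it all but vanishes, which is why CPUs and GPUs want such different memory.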


Just now, Prysin said:

HBM won't work for APUs unless it is purely for the iGPU.

The access latency of GDDR/HBM and other "GPU" memory types is far too high for CPUs... Remember that AMD is banking on HSA-compliant APUs, meaning the CPU, the iGPU, and ultimately the dGPU as well can access and use the same resources.

The issue with this is how GPUs work. They don't care much about access time and latency; rather, they care about bandwidth. CPUs, on the other hand, are the complete opposite: they care almost exclusively about latency and access time, while bandwidth is less of an issue.

My personal tests on memory bandwidth show that SLI/Crossfire care deeply about memory latency, as far as dips in minimum framerates go (I will be posting a guide on that eventually, once my work schedule gets less hectic), but I see your point. I just remember them mentioning it before, and I was wondering if they were ever going to attempt it.



8 minutes ago, Sauron said:

Wait a minute, didn't nVidia just announce CUDA could now be licensed?

Source?


 


30 minutes ago, dalekphalm said:

See @Prysin's response below. I read through the links, and I agree with his conclusions. It's "hinted" at, but I think those statements are nothing more than PR speak from NVIDIA. No solid proof that AMD refused a license.

The fact that AMD doesn't have one means they don't want it; otherwise they'd have gotten licensed by now. AMD refuses to get a license, and they could totally afford one if they wanted to.



10 hours ago, zacRupnow said:

I don't know why the images are only showing as links.

Are you using the "Insert Other Media" option at the bottom right of the edit box?


1 hour ago, Prysin said:

Honestly dude, look at the last time Intel released an Iris Pro solution... look at the price history in particular...

The reason I said $225 USD is that at that price it would be possible to fit within that $400-450 USD "budget" sweet spot while offering good performance.

But you're comparing it to a dGPU build, which is multiple times the size. This could be stuffed into a NUC or a shuttle PC. The fact that it doesn't replace a niche doesn't mean it won't create its own.

 

And I'm not sure why a $350 CPU couldn't fit into a $450 build. It only requires a tiny PSU, a small inexpensive case, and an SSD.

Some shuttle PCs come with a power supply, which would suffice if it's over 90-100W.

