AMD's Claim, Nvidia's Rebuttal, and Intel's Intelligence

Kuzma isn't the alpha and omega.

 

Not all floating point calculations can take advantage of the parallelism of a GPU (hundreds if not thousands of processors).

 

And until AMD actually shows me something that works, I will not believe anything Kuzma makes up about the 'APU is the future' crap. You can define it as you like, but I don't see the 7 billion transistors found on a Titan being put onto a CPU+GPU die.

*facepalm*

... The point is not to put a Titan-level GPU onto a CPU to handle the FP operations.

The point is to put enough GPU cores on a CPU to handle the FP operations without bogging down the CPU side.

The "future" he is talking about isn't where "you only need an APU". It's where "you have an APU, because they are more powerful than a CPU on its own, as well as a dedicated GPU."

The purpose of having GPU cores on a CPU is to handle the FP Operations. This frees up the cores on the CPU for the Integer Operations.

The point of HSA is to allow FP Operations to be passed between the APU and the dedicated GPU for efficiency, since some calculations, depending on scale, are better done by the GPU, which can then hand them off to the APU directly.
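To make the offloading idea concrete, here is a minimal sketch of what handing a floating point workload to whatever GPU is available looks like from Python, using OpenCL (this assumes the pyopencl package and a working OpenCL runtime; the kernel, names and sizes are made up purely for illustration):

import numpy as np
import pyopencl as cl

# Build a context on whichever OpenCL device is available: the iGPU on an
# APU, a dedicated card, or even the CPU itself.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void scale(__global const float *a, __global float *out, const float k)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] * k;   // a trivial floating point operation, done in parallel
}
"""
program = cl.Program(ctx, kernel_src).build()

# A large batch of floating point work the CPU cores would otherwise chew on.
a = np.random.rand(1_000_000).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Hand the FP work to the GPU side, then copy the result back.
program.scale(queue, a.shape, None, a_buf, out_buf, np.float32(2.0))
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)

On an HSA-style APU the same dispatch can skip the buffer copies, since the CPU and GPU share the same memory; with a dedicated card, the copies are exactly where the PCIe overhead mentioned later in the thread comes in.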

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


There are a few flaws with the OP. For example, console games won't be directly ported, and they don't use Mantle. That's what I interpreted from this.

Also, G-Sync, like someone mentioned already, does NOT require a decent GPU, since the whole idea of it is to get a smooth picture even at LOW framerates! And in no way, shape or form does it hinder performance! Come on man, get your facts straight! I don't want to hate on you, but that text does look a bit AMD-biased to me :/

 

Technically, G-Sync hasn't been released yet, so we don't actually know whether it has any effect on performance (Nvidia PR and independent testing are two very different things). Based on what they've said I'd expect it to have little to no impact, but we won't know for sure until we've actually seen G-Sync tested.


*facepalm*

... The point is not to put a Titan-level GPU onto a CPU to handle the FP operations.

The point is to put enough GPU cores on a CPU to handle the FP operations without bogging down the CPU side.

The "future" he is talking about isn't where "you only need an APU". It's where "you have an APU, because they are more powerful than a CPU on its own, as well as a dedicated GPU."

The purpose of having GPU cores on a CPU is to handle the FP Operations. This frees up the cores on the CPU for the Integer Operations.

The point of HSA is to allow FP Operations to be passed between the APU and the dedicated GPU for efficiency, since some calculations, depending on scale, are better done by the GPU, which can then hand them off to the APU directly.

 

But then you have some floating point operations which CANNOT be offloaded to the GPU (as in, they won't be as efficient on the GPU), at which point making a CPU die with HALF the FPUs (2 ALU / 1 FPU Bulldozer modules) is a TERRIBLE idea.

][ CPU: Phenom II x6 1045t @3,7GHz ][ GPU: GTX 660 2GB ][ Motherboard: Gigabyte GA-MA770T-UD3P ][ RAM: 8GB @1450Mhz CL9 DDR3 ][ PSU: Chieftec 500AB A ][ Case: SilentiumPC Regnum L50 ][ CPU Cooler: CoolerMaster Hyper 212 Evo & Arctic MX4 ][


But then you have some floating point operations which CANNOT be offloaded to the GPU (as in, they won't be as efficient on the GPU), at which point making a CPU die with HALF the FPUs (2 ALU / 1 FPU Bulldozer modules) is a TERRIBLE idea.

... I don't understand why you use capitals so much. 

Yes, of course there are some that should not be offloaded, as there is no reason for it, but what I don't think you understand is that the area on the CPU is already taken up by integrated GPUs. The 3570k has the Intel HD 4000 in it.

Effectively, an APU is simply a CPU with a beefier GPU side. I really don't see your point in that statement. These things already exist and they are already pretty powerful for what they can do. You can take the space the FP units use and replace it with an integrated GPU (since they aren't the same thing, AFAIK).

Nothing has changed.

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


Kuzma isn't the alpha and omega.

 

Not all floating point calculations can take advantage of the parallelism of a GPU (hundreds if not thousands of processors).

 

And until AMD actually shows me something that works, I will not believe anything Kuzma makes up about the 'APU is the future' crap. You can define it as you like, but I don't see the 7 billion transistors found on a Titan being put onto a CPU+GPU die.

 

I smell a fanboy.

Haswell is also an APU.

Kaveri comes out in Q1 2014 and it's said to have 30% better IPC, as well as more cores than AMD's current APUs have.

 

Earlier today Linus said that APUs were the future. Scroll down a bit to find his post.

I'll quote him as well.

APU is the future. Period. OpenCL adoption will drive this.

 

Both AMD and Intel have made that abundantly clear in the last 2-3 years.

 

Remember that AMD defines "APU" as any CPU that has a graphics core on it. When you consider that, most of Intel's CPUs are actually "APUs" as well.

 

 

Get your head out of your butt.

My Rig: AMD FX-8350 @ 4.5 Ghz, Corsair H100i, Gigabyte gtx 770 4gb, 8 gb Patriot Viper 2133 mhz, Corsair C70 (Black), EVGA Supernova 750g Modular PSU, Gigabyte GA-990FXA-UD3 motherboard, Asus next gen wifi card.

 

 


... I don't understand why you use capitals so much. 

Yes, of course there are some that should not be offloaded, as there is no reason for it, but what I don't think you understand is that the area on the CPU is already taken up by integrated GPUs. The 3570k has the Intel HD 4000 in it.

Effectively, an APU is simply a CPU with a beefier GPU side. I really don't see your point in that statement. These things already exist and they are already pretty powerful for what they can do. You can take the space the FP units use and replace it with an integrated GPU (since they aren't the same thing, AFAIK).

Nothing has changed.

 

No, you cannot take away the FPUs inside the processor die and replace them with an iGPU; that's what I'm trying to tell you. There are calculations that cannot be handled by the iGPU inside an APU; only the FPU can handle them.

][ CPU: Phenom II x6 1045t @3,7GHz ][ GPU: GTX 660 2GB ][ Motherboard: Gigabyte GA-MA770T-UD3P ][ RAM: 8GB @1450Mhz CL9 DDR3 ][ PSU: Chieftec 500AB A ][ Case: SilentiumPC Regnum L50 ][ CPU Cooler: CoolerMaster Hyper 212 Evo & Arctic MX4 ][


I smell a fanboy.

 

Fanboy of what? AMD? Nvidia? Intel? My grandma?

][ CPU: Phenom II x6 1045t @3,7GHz ][ GPU: GTX 660 2GB ][ Motherboard: Gigabyte GA-MA770T-UD3P ][ RAM: 8GB @1450Mhz CL9 DDR3 ][ PSU: Chieftec 500AB A ][ Case: SilentiumPC Regnum L50 ][ CPU Cooler: CoolerMaster Hyper 212 Evo & Arctic MX4 ][


No, you cannot take away the FPUs inside the processor die and replace them with an iGPU; that's what I'm trying to tell you. There are calculations that cannot be handled by the iGPU inside an APU; only the FPU can handle them.

I doubt it personally, as it seems illogical, and I lack the level of understanding of this subject to go on anything else.

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


 "G-sync is a good thing but it hinders performance so quality doesn't suffer thats all it does."

This is false. G-Sync doesn't limit performance; all it does is adjust the refresh rate of your monitor to match the fps your card is rendering. Think about it: imagine you have a monitor that is 120 Hz capable.
If your fps output from the GPU is, let's say, 75 fps, it will set the frequency of your monitor to 75 Hz; then it drops to 45 fps, and it will adjust the frequency of your monitor to 45 Hz, and so on.
It's like overclocking and downclocking your monitor. To be precise, it's just an automatic downclock of your monitor, since it can't clock higher than the monitor's spec; otherwise it could compromise the lifetime of the monitor itself.

So there is nothing capping the performance of your GPU: once a frame is rendered, it's displayed to you.
In some cases it even improves the experience, for example when the GPU finishes rendering the frame before the monitor's next refresh, so there is no waiting between when the frame was rendered and when the monitor refreshes; that's when you see it. That's how it eliminates lag and stuttering.
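To put rough numbers on that, here is a toy sketch (my own illustration with made-up timings, not anything from Nvidia) of when a finished frame actually gets shown on a fixed 60 Hz monitor with V-Sync versus a 120 Hz-capable variable refresh monitor:

import math

FIXED_HZ = 60                      # ordinary monitor: scans out on a fixed schedule
SCAN_PERIOD = 1.0 / FIXED_HZ

def fixed_refresh_delay(render_time):
    # With plain V-Sync the finished frame waits in the buffer until the next
    # scheduled refresh comes around.
    return math.ceil(render_time / SCAN_PERIOD) * SCAN_PERIOD

def variable_refresh_delay(render_time, max_hz=120):
    # With G-Sync-style variable refresh the monitor refreshes the moment the
    # frame is done, but never faster than its maximum rated refresh rate.
    return max(render_time, 1.0 / max_hz)

for fps in (45, 75, 110):
    t = 1.0 / fps
    print(f"{fps:>3} fps: fixed 60 Hz shows it after {fixed_refresh_delay(t) * 1000:5.1f} ms, "
          f"variable refresh after {variable_refresh_delay(t) * 1000:5.1f} ms")

The GPU renders at whatever rate it can either way; the only difference is how long a finished frame sits around before the monitor shows it.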

Well, 

AMD is aiming for 100% synergy between the CPU and GPU (thus, APU), and since they produce both, they are in the perfect position to make this happen.

Intel and Nvidia would have to work together to achieve anything similar. Not sure of the chances of that happening, but... eh.

I think AMD is going to become a powerhouse in the coming years due to this synergy that they are aiming for. Synergy means more performance all around. And more performance is good. :D

Ehhh... Intel has been making APUs for much longer than AMD, and Nvidia has been making SoCs for quite a long time as well. On x86, Intel could very well achieve "100% synergy between CPU and GPU", and on mobile Nvidia is already doing it. I wouldn't really say AMD is in a better position than Nvidia and Intel for that goal.

 

 

There are 2 types of operations computers use. 

Trust me, there are far more than 2 types of operations a computer does. NOR, XOR, AND, NOT and NAND to mention a few.

I agree that APUs are a great idea, but Wolfur has a point as well. It is not as easy as just adding a GPU, then starting to use OpenCL, and everything will magically be faster and the CPU won't have to do any FP calculations. Some programs are even struggling to support more than 2 CPU cores because of things like thread locking and dependencies, so adding 100 more cores won't really solve such issues.
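As a toy illustration of that last point (my own sketch, not tied to any particular game or engine): work whose steps are independent spreads across extra cores easily, while work whose steps depend on each other is stuck on one core no matter how many CPU cores or GPU units you add.

from concurrent.futures import ProcessPoolExecutor

def independent(x):
    # Each element stands on its own, so this maps nicely onto many cores
    # (or onto a GPU through something like OpenCL).
    return x * x

def dependent_chain(values):
    # Each step needs the result of the previous one, so this runs on a
    # single core regardless of how much parallel hardware is available.
    acc = 0.0
    for v in values:
        acc = 0.5 * acc + v
    return acc

if __name__ == "__main__":
    values = list(range(100_000))

    with ProcessPoolExecutor() as pool:
        squares = list(pool.map(independent, values, chunksize=5_000))

    total = dependent_chain(values)
    print(len(squares), total)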


To me AMD is dead unless Mantle succeeds; at this point they are so far behind in both the CPU and GPU departments... I don't think they can catch up.


To me AMD is dead unless Mantle succeeds; at this point they are so far behind in both the CPU and GPU departments... I don't think they can catch up.

I disagree.

In the lower end market AMD usually beats Intel in terms of price-to-performance (the 8320 is really good for ~150 dollars, and their APUs are perfect for general purpose computers), and their new GPUs are very competitively priced. If Nvidia hadn't lowered the price on the 780, then I wouldn't really have seen the point in recommending it to anyone, unless you really needed ShadowPlay and/or GameStream for some reason.


I think you have a few misconceptions:

"...PC market crushes console market." That is extremely far from true as far as profits or market share are concerned.
As for AMD's chips being used in consoles, several people have stated that it will not matter as much as expected for the PC market, but we'll have to wait and see. Also, just to clarify, none of the consoles will be using Mantle; they will actually be using DirectX 11.2. It doesn't really matter in a system where the hardware is already locked down.

There has been no statement as to whether or not G-Sync will be exclusive to Nvidia GPUs in the future, and the monitors will probably not be locked down; they will simply not make use of G-Sync technology if a non-G-Sync-compatible card is detected. This ties in with Jen-Hsun's statement that G-Sync should start making it into regular monitors with a very small price difference, and there is the possibility of having a G-Sync device work with any monitor (I believe this was mentioned on the Nvidia website, although admittedly it does require quite a bit of knowledge to mod the display, so I guess that's not very valid). Also, G-Sync does not require a more powerful graphics card than you would otherwise need; the performance difference with or without it should be zero in theory.

Also, with some of the most notable people in the gaming industry saying they'd rather have G-Sync than Mantle, I think it's more than just a "last-minute gimmick". I also don't get how the article you linked to proves Nvidia wasn't ready; in fact it just proves that Nvidia was raking in all the profits while it could until AMD made its move, at which point it lowered all the prices, making its cards more compelling (at least on the high end). The 780 Ti probably didn't require much R&D, considering it's pretty much a Titan with fewer disabled parts than a 780, so they probably had that up their sleeve just in case AMD could actually provide something that could compete with their 780.

 

As for my opinions, I will hold back from talking about how Mantle may or may not change anything until we actually start seeing results; I have a feeling it's one of those awesome concepts which will never actually make a difference, but we'll see. As for G-Sync, I think it's also a great concept, but because it does not force devs to change anything about the way they develop their games and it just works, I think it could be more successful if Nvidia can convince monitor manufacturers to drop the prices to normal levels, as Jen-Hsun stated. I really hope Nvidia decides to license this technology to AMD; I won't hold my breath until it happens, but I wouldn't be that surprised, considering Nvidia's efforts to only keep itself slightly ahead in order to keep the industry alive.

 

As for Intel, I think Intel has proven time and time again that they no longer care about the enthusiast unless they pay for the Extreme CPUs; the future of CPUs is mostly in mobile, so Intel is focusing on that. And why would they care about spending R&D on a market where they are doing so little and yet still dominate in terms of both performance and market share? It doesn't seem like that will change unless AMD can increase their market share considerably, which I doubt is going to happen any time soon, unfortunately.

Also, you should probably be posting this in General or Graphics Cards, since it's neither news nor reviews.


What you said is wrong: THE NEXT GEN CONSOLES WILL NOT SUPPORT MANTLE, THEY ALREADY HAVE A LOW-LEVEL API!!!! Microsoft will be using their own console variation of DirectX, and they will be fixing PC DirectX to be more like the console DirectX, giving us PC guys a universal low-level API, so Nvidia guys won't suffer from Mantle.


What you said is wrong: THE NEXT GEN CONSOLES WILL NOT SUPPORT MANTLE, THEY ALREADY HAVE A LOW-LEVEL API!!!! Microsoft will be using their own console variation of DirectX, and they will be fixing PC DirectX to be more like the console DirectX, giving us PC guys a universal low-level API, so Nvidia guys won't suffer from Mantle.

Had AMD not been threatening DirectX with Mantle, Microsoft would never have done that, since PC gaming is a direct competitor to the Xbone.

Although I haven't seen this DirectX modification anywhere, do you have a source?


I think you have a few misconceptions:

"...PC market crushes console market." That is extremely far from true as far as profits or market share are concerned.

As for AMD's chips being used in consoles, several people have stated that it will not matter as much as expected for the PC market, but we'll have to wait and see. Also, just to clarify, none of the consoles will be using Mantle; they will actually be using DirectX 11.2. It doesn't really matter in a system where the hardware is already locked down.

 

Mantle is the implementation of low-level GPU access, as used in the Xbox One and the PS4, on the PC. Mantle is a PC API; of course Mantle won't be in consoles.

Intel i7 5820K (4.5 GHz) | MSI X99A MPower | 32 GB Kingston HyperX Fury 2666MHz | Asus RoG STRIX GTX 1080ti OC | Samsung 951 m.2 nVME 512GB | Crucial MX200 1000GB | Western Digital Caviar Black 2000GB | Noctua NH-D15 | Fractal Define R5 | Seasonic 860 Platinum | Logitech G910 | Sennheiser 599 | Blue Yeti | Logitech G502

 

Nikon D500 | Nikon 300mm f/4 PF  | Nikon 200-500 f/5.6 | Nikon 50mm f/1.8 | Tamron 70-210 f/4 VCII | Sigma 10-20 f/3.5 | Nikon 17-55 f/2.8 | Tamron 90mm F2.8 SP Di VC USD Macro | Neewer 750II


Had AMD not been threatening DirectX with Mantle, Microsoft would never have done that, since PC gaming is a direct competitor to the Xbone.

Although I haven't seen this DirectX modification anywhere, do you have a source?

yep

 

http://linustechtips.com/main/topic/65917-amd-responds-to-xbox-will-not-use-mantle-microsoft-improving-direct3d-to-compete/


I've said this time and time again: Nvidia knows that from now on it's going to be extremely difficult to compete with AMD on performance per mm².
The great majority of game engines will favor AMD's CPU and GPU architectures moving forward because of AMD's console wins.

Nvidia is smart; it knows that no matter what it does, AMD GPUs will perform better per mm² simply because the game code mandates it. So what does Nvidia do? It develops a strategy to overcome the performance disadvantage by competing in different areas, such as features.

This is simply why, in a very short while, we're seeing Nvidia come out with things like ShadowPlay, G-Sync and GeForce Experience.

I don't think we're going to see AMD really capitalize on its console wins until next year, with its next-gen 20 nm class GPUs code-named Pirate Islands.

For Nvidia it would have been fine to compete on features rather than performance; they've historically done software better than AMD, while AMD excelled more at hardware. But Mantle caught Nvidia by surprise; it did not expect AMD to be this aggressive in leveraging its console wins.
So how Nvidia is going to handle it remains to be seen. All I know is that it's going to be very interesting sitting back and watching it unfold.


:|

There are 2 types of operations computers use. 

Floating Point Operations & Integer Operations.

CPUs, or specifically the cores in the CPU that you have multiple of (a 3570k has 4, an 8350 technically has 8), are only good at doing Integer Operations. They can do FP Operations (IIRC), but are much slower at it.

Integer Operations are what they sound like. Operations that only involve integers. Whole numbers. 

Floating Point Operations are what they sound like as well. Operations that involve decimals and fractions. 

A CPU needs a special unit in it called an "FP Unit" (FP for Floating Point) so that it can actually do the FP operations quickly. Otherwise, it can, but it's absurdly slow as the normal cores don't do them well.

Video games, particularly physics- and tessellation-related things, use TONS of Floating Point operations, for various reasons.

And I'll let Kuzma handle it from here if he feels like it, since I don't know much beyond that. Also, Glenwing, I think I got it right. Not certain though.

Anyway, you get the idea.
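(As a throwaway sketch of the distinction the quoted post is making: game logic leans on integer operations, while physics leans on floating point, which is why the latter lives or dies by the FPU or the GPU. The numbers below are made up for illustration.)

# Integer operations: counters, indices, game state. Whole numbers only.
score = 0
enemies_left = 12
score += 100
enemies_left -= 1

# Floating point operations: positions, velocities, timesteps. Fractions
# everywhere, which is what the FPU (or the GPU) is there to chew through.
position_y = 1.25              # metres
velocity_y = 0.0
gravity = -9.81                # m/s^2
frame_time = 1.0 / 60.0        # seconds per frame

for _ in range(60):            # simulate one second of falling
    velocity_y += gravity * frame_time
    position_y += velocity_y * frame_time

print(round(position_y, 3))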

Then why don't all the integrated GPU/CPUs perform as well as non-integrated ones? LGA 2011 CPUs don't have integrated GPUs and they perform much better than those with graphics. Please explain, I am interested.

 (\__/)

 (='.'=)

(")_(")  GTX 1070 5820K 500GB Samsung EVO SSD 1TB WD Green 16GB of RAM Corsair 540 Air Black EVGA Supernova 750W Gold  Logitech G502 Fiio E10 Wharfedale Diamond 220 Yamaha A-S501 Lian Li Fan Controller NHD-15 KBTalking Keyboard


Then why don't all the integrated GPU/CPUs perform as well as non-integrated ones? LGA 2011 CPUs don't have integrated GPUs and they perform much better than those with graphics. Please explain, I am interested.

In terms of CPU performance LGA 2011 is better simply because those chips have more die area dedicated to the CPU, which allows them to fit more cores, bigger caches, etc. If Intel didn't put a big GPU in their LGA 1150 chips, the i7-4770K might have been a hexa-core, for example (I doubt it, but you get the point).

The reason why a dedicated GPU and dedicated CPU will perform better is simply because they are much much bigger, and can therefore fit more stuff in them, and therefore perform better.

 

Your post got me thinking though. Everyone has to have a GPU in their system somewhere, and if you want high GPU performance then it doesn't really make sense to have an APU, because it will never be as powerful as a much larger graphics card. If you're going to have a huge graphics card in your computer, then wasting space in the CPU on a small GPU doesn't make any sense either. So for high performance builds, APUs don't make sense. You can't really say APUs are the future for things like office PCs either, because they already are: Intel has been making APUs for ages now and basically every single laptop has an APU, so saying that they are "the future for office PCs" is wrong. That already happened several years ago, and you would be predicting the past, not the future.

 

So yeah I've changed my mind. APUs are not the future. They are the past and present for low power/low performance machines, and they are not the future for high performance machines.


I see. Still, I think Mantle will be better than that version of DirectX, simply because even though Mantle is presented for their GPUs, it is directed in many ways at their CPUs, so the FX architecture can benefit more from multithread support in games.


In terms of CPU performance LGA 2011 is better simply because those chips have more die area dedicated to the CPU, which allows them to fit more cores, bigger caches, etc. If Intel didn't put a big GPU in their LGA 1150 chips, the i7-4770K might have been a hexa-core, for example (I doubt it, but you get the point).

The reason why a dedicated GPU and dedicated CPU will perform better is simply because they are much much bigger, and can therefore fit more stuff in them, and therefore perform better.

 

Your post got me thinking though. Everyone has to have a GPU in their system somewhere, and if you want high GPU performance then it doesn't really make sense to have an APU, because it will never be as powerful as a much larger graphics card. If you're going to have a huge graphics card in your computer, then wasting space in the CPU on a small GPU doesn't make any sense either. So for high performance builds, APUs don't make sense. You can't really say APUs are the future for things like office PCs either, because they already are: Intel has been making APUs for ages now and basically every single laptop has an APU, so saying that they are "the future for office PCs" is wrong. That already happened several years ago, and you would be predicting the past, not the future.

 

So yeah I've changed my mind. APUs are not the future. They are the past and present for low power/low performance machines, and they are not the future for high performance machines.

Yeah, but at the end of the day that GPU space could be better used for more CPU cores. Having a GPU taking up space where a few extra CPU cores could be is really dumb, especially when you have a dedicated GPU. One reason why people go for 2011 is that they know there is no GPU on it. My next CPU will also be a 2011, simply because I know I will be using all the cores and will have no use for an integrated GPU. No one does, unless you are on a really tight budget, but even then you would be better off buying a cheap CPU and a cheap GPU.

 (\__/)

 (='.'=)

(")_(")  GTX 1070 5820K 500GB Samsung EVO SSD 1TB WD Green 16GB of RAM Corsair 540 Air Black EVGA Supernova 750W Gold  Logitech G502 Fiio E10 Wharfedale Diamond 220 Yamaha A-S501 Lian Li Fan Controller NHD-15 KBTalking Keyboard


Your post got me thinking though. Everyone has to have a GPU in their system somewhere, and if you want high GPU performance then it doesn't really make sense to have an APU, because it will never be as powerful as a much larger graphics card. If you're going to have a huge graphics card in your computer, then wasting space in the CPU on a small GPU doesn't make any sense either. So for high performance builds, APUs don't make sense. You can't really say APUs are the future for things like office PCs either, because they already are: Intel has been making APUs for ages now and basically every single laptop has an APU, so saying that they are "the future for office PCs" is wrong. That already happened several years ago, and you would be predicting the past, not the future.

 

The reason why it would be better for an APU's GPU to perform some of the operations normally performed by the CPU is the overhead of pushing data across the PCIe bus.
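A back-of-envelope sketch of that overhead (the bandwidth and latency figures are rough assumptions for a PCIe 3.0 x16 link, not measurements):

# Rough, assumed figures for a PCIe 3.0 x16 link. Ballpark only.
PCIE_BANDWIDTH = 15.75e9       # bytes per second of usable bandwidth
PCIE_LATENCY = 1e-6            # seconds of one-way latency

def transfer_time(num_bytes):
    return PCIE_LATENCY + num_bytes / PCIE_BANDWIDTH

# Shipping 4 MB of floats to a discrete GPU and copying the result back:
payload = 4 * 1024 * 1024
round_trip = 2 * transfer_time(payload)
print(f"PCIe round trip for 4 MB: {round_trip * 1e6:.0f} microseconds")

# On an APU with shared memory the GPU can read the buffer the CPU just wrote,
# so for small or frequent jobs that round trip largely disappears.

For one big batch of work the copy is easily worth it; for lots of small, latency-sensitive jobs it is not, which is the case the APU is meant to cover.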

| CPU: 2600K @ 4.5 GHz 1.325V | MB: ASUS P8Z68-V pro | GPU: EVGA GTX 480 clk 865/mem 2100 | RAM: Corsair Vengeance 1600 MHz CL9 | HDD: Muskin Chronos Deluxe 240GB(win8) && ADATA SX900 120 GB(ubuntu 12.04) | PSU: Seasonic x760 |


The reason why it would be better for an APU's GPU to perform some of the operations normally performed by the CPU is the overhead of pushing data across the PCIe bus.

AKA, latency. 

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 

