
[UPDATE 3 - Sapphire's Reference Costs $649 USD] AMD Reveals R9 Nano Benchmarks Ahead of Launch

HKZeroFive

Consider the R&D costs... roughly 30 million consoles with that AMD chipset have been sold in total so far. Take 30M x $15 = $450M... Not a whole lot when you consider that the cost of developing a new CPU alone can be over $300M...

Considering that both Sony and Microsoft are contributing to the R&D cost.

The sweet thing about semi-custom is that you reuse a lot of existing IP, cutting down on R&D.
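To make the back-of-the-envelope math above explicit, here is a minimal sketch. The $15-per-chip cut and the ~$300M new-CPU development cost are the figures assumed in the posts above, not confirmed numbers:

```python
# Rough semi-custom console revenue estimate, using the figures assumed above
# (not confirmed AMD numbers).
consoles_sold = 30_000_000            # approx. combined console units with the AMD chip
revenue_per_chip_usd = 15             # assumed AMD cut per console APU
new_cpu_rnd_cost_usd = 300_000_000    # assumed cost of developing a new CPU

chip_revenue = consoles_sold * revenue_per_chip_usd
print(f"Estimated semi-custom revenue: ${chip_revenue / 1e6:.0f}M")            # ~$450M
print(f"Roughly {chip_revenue / new_cpu_rnd_cost_usd:.1f}x a new-CPU R&D budget")
```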

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


If only it had HDMI 2.0a and HEVC support, it would be perfect for a media/gaming living-room PC.

(HEVC not really needed, but would've been nice)

Ryzen 7 5800X     Corsair H115i Platinum     ASUS ROG Crosshair VIII Hero (Wi-Fi)     G.Skill Trident Z 3600CL16 (@3800MHzCL16 and other tweaked timings)     

MSI RTX 3080 Gaming X Trio    Corsair HX850     WD Black SN850 1TB     Samsung 970 EVO Plus 1TB     Samsung 840 EVO 500GB     Acer XB271HU 27" 1440p 165hz G-Sync     ASUS ProArt PA278QV     LG C8 55"     Phanteks Enthoo Evolv X Glass     Logitech G915      Logitech MX Vertical      Steelseries Arctis 7 Wireless 2019      Windows 10 Pro x64


change your title

 

Professional AMD Hater

 

PFFFFFTTTTTT  :lol:  :lol:  :lol:  :lol:  :lol:  :lol:  :lol:  :lol:

 


 

Though I ain't involved in this convo, that was seriously funny  :lol:

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


Update 2:

A bunch of performance claims from AMD, stating that the Nano has a 75°C target operating temperature and a 42 dBA noise level. It's best to just wait a little longer and see whether these claims hold up.

Images are in the OP.

They were accurate about all of the temps and noise with the Fury X.

Hello This is my "signature". DO YOU LIKE BORIS????? http://strawpoll.me/4669614


On the graphics architecture note: unlike AMD, Intel has had to design their architecture from the ground up, and each generation has seen significant improvements, to the point that some low-end dGPUs in OEM builds and laptops might actually start to disappear. Intel's newer CPUs offer low power consumption and better overall performance than any of AMD's APUs, which actually makes them worth the extra cost over an APU. Some of those APUs are used in computers that are actually quite expensive, while performing worse than a Phenom II paired with an ATI mobile dGPU from 2009/2010, a combination that doesn't consume much more power than a single APU.

(I also noticed that the memory controller in the AMD Phenom II P920 is better than the one in the A8-4555: the memory transfer rates (read and write) for 4GB of dual-channel DDR3-1066 on the P920 are higher than those of the A8-4555 with 4GB of dual-channel DDR3-1600.)
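As a side note, the theoretical peak bandwidth of each configuration above is easy to work out; a minimal sketch (theoretical peaks only; the comparison above is about measured read/write rates, which can land well below these, especially with a weak memory controller):

```python
# Theoretical peak DDR3 bandwidth: transfer rate * 8 bytes per 64-bit channel * channels.
def peak_bandwidth_gbs(mt_per_s: float, channels: int = 2, bus_bytes: int = 8) -> float:
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

print(f"Phenom II P920, dual-channel DDR3-1066: {peak_bandwidth_gbs(1066):.1f} GB/s")  # ~17.1
print(f"A8-4555,        dual-channel DDR3-1600: {peak_bandwidth_gbs(1600):.1f} GB/s")  # ~25.6
```

On paper the A8-4555 configuration has roughly 50% more bandwidth, which is what makes the measured result described above notable.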

 

I do note it, and I do not dismiss their progress. I never said that, and that's not the point.

 

I just can't accept the preacher's claim that anything AMD can make, Intel can make too.

 

And then he has the audacity to claim they have no value.


On the graphics architecture note: unlike AMD, Intel has had to design their architecture from the ground up, and each generation has seen significant improvements, to the point that some low-end dGPUs in OEM builds and laptops might actually start to disappear. Intel's newer CPUs offer low power consumption and better overall performance than any of AMD's APUs, which actually makes them worth the extra cost over an APU. Some of those APUs are used in computers that are actually quite expensive, while performing worse than a Phenom II paired with an ATI mobile dGPU from 2009/2010, a combination that doesn't consume much more power than a single APU.

(I also noticed that the memory controller in the AMD Phenom II P920 is better than the one in the A8-4555: the memory transfer rates (read and write) for 4GB of dual-channel DDR3-1066 on the P920 are higher than those of the A8-4555 with 4GB of dual-channel DDR3-1600.)

ATI is part of AMD, and while many of ATI's designers and engineers have quit, there are probably plenty of them still at AMD...

If anything, every Radeon product is still ATI, just released under different branding now...

However impressive Intel's offerings are, they are still 3-4 generations behind AMD's and Nvidia's top flagship models, and will continue to be.

Remember, what will ultimately stop Intel is not computational power, it's thermal and size restrictions... You can only cram so much into a CPU die before you end up with another FX-9590...


ATI is part of AMD, and while many of ATI's designers and engineers have quit, there are probably plenty of them still at AMD...

If anything, every Radeon product is still ATI, just released under different branding now...

However impressive Intel's offerings are, they are still 3-4 generations behind AMD's and Nvidia's top flagship models, and will continue to be.

Remember, what will ultimately stop Intel is not computational power, it's thermal and size restrictions... You can only cram so much into a CPU die before you end up with another FX-9590...

Lol, Intel has already been through their FX-9590/CMT phase, and that was the Pentium 4 and NetBurst era (roughly 2000/2001 to 2008/2009). And never forget that there was a rather large gap between Intel's dGPU from the 90s and their first iGPU, which means they effectively started years behind ATI/AMD and Nvidia. And while Intel will eventually run out of space (also, Intel would be competing with Nvidia and AMD in dGPUs right now, but Nvidia panicked, withdrew crucial IP and kept Intel out of the graphics card market), compare the die size of their older CPUs, and even Haswell-E, to what is currently their mainstream CPU, and try to prove to me that Intel, with their high-efficiency CPU architecture, doesn't have a lot of room for performance increases.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Lol, Intel has already been through their FX-9590/CMT phase, and that was the Pentium 4 and NetBurst era (roughly 2000/2001 to 2008/2009). And never forget that there was a rather large gap between Intel's dGPU from the 90s and their first iGPU, which means they effectively started years behind ATI/AMD and Nvidia. And while Intel will eventually run out of space (also, Intel would be competing with Nvidia and AMD in dGPUs right now, but Nvidia panicked, withdrew crucial IP and kept Intel out of the graphics card market), compare the die size of their older CPUs, and even Haswell-E, to what is currently their mainstream CPU, and try to prove to me that Intel, with their high-efficiency CPU architecture, doesn't have a lot of room for performance increases.

You must have believed Patrick's preaching. The simple truth is, Larrabee failed because of Intel. They got greedy, tried underhanded tactics with certain OEMs to get Nvidia out of their way, and it backfired tremendously.

 

 

The only problem is, Intel sparked a war with nVidia without even having working silicon [ok, silicon capable of displaying a picture]. And that was a big mistake. The moment Jen-Hsun saw the comments made by Intel engineers and later statements by Intel execs at IDF Spring 2008 in Shanghai, Jen-Hsun "opened a can of whoop-ass" on Intel. Luckily for Intel, Jen-Hsun didn’t have the GT300 silicon either, but GT200 was at the gates.

The strained relationship between the two got into a state of war when Intel started talking to OEMs and claiming that nVidia does not have the right to create chipsets for Nehalem [QPI – Quick Path Interface] and Lynnfield [DMI – Digital Multimedia Interface]. Upon request, we were shown a cross-license deal between Intel and nVidia. I am not going to disclose which side showed it to me, since technically – the source did something it wasn’t supposed to do.

The wording in the original document, as far as my laic understanding, does not bar nVidia from making chipsets for Intel even after Front Side Bus is dead, because both QPI and DMI qualify as a "processor interconnect", regardless of what either party is saying.

Intel filed a suit against nVidia in Delaware court [naturally, since both companies are incorporated in the "Venture Capital of the World" state], claiming that nVidia doesn’t hold the license for CPUs that have integrated memory controller. nVidia didn’t stand back, but pulled a counter-suit, but this time around, nVidia wanted the cross-license deal annulled and to stop Intel from shipping products that use nVidia patents.

If you wonder why this cross-license agreement is of key importance for Larrabee, the reason is simple: without nVidia patents, there is no Larrabee. There are no integrated chipsets either, since they would infringe nVidia’s patents as well. Yes, you’ve read that correctly. The Larrabee architecture uses some patents from both ATI and nVidia, just like every graphics chip in the industry. You cannot invent a chip without infringing on patents set by other companies, thus everything is handled in a civil matter – with agreements. We heard a figure of around several dozen patents, touching Larrabee from the way how frame buffer is created to the "deep dive" called memory controller. If you end up in court, that means you pulled a very wrong move, or the pursuing company is out to get you. If a judge would side with nVidia, Larrabee could not come to market and well can you say – Houston, we have a problem?

Intel had the luck of AMD snatching ATI – Intel and AMD have a cross-license agreement that allows for technology to transfer both ways – Intel had no problems getting a license for Yamhill i.e. AMD64, 64-bit extensions for their CPU architecture and equally should have no issues of using ATI patent portfolio [ATI and Intel already had an agreement]. My personal two cents would be going on Intel giving an x86 license to nVidia in exchange for cross-license patent, but only time will tell how the situation will develop. However, there is a reason why Bruce Sewell "retired" from arguably the best or second best legal post in the industry [IBM or Intel, we’ll leave you to pick] and then show up at Apple two days after that "retiring" e-mail.

All that this unnecessary war created was additional pressure on engineers, who had to check and re-check their every move with Larrabee, causing further delays to the program. We completely understand these people – these chips are their babies. But the additional legal pressure caused some people to leave. This is nothing weird – with projects of this size, people come and go.

http://www.vrworld.com/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed/

 

Of course, there was also the issue of software rasterization never EVER being able to compete with any real hardware. One of Intel's engineers goes into that in great detail in this super long article. http://www.drdobbs.com/parallel/rasterization-on-larrabee/217200602

 

Then there was this. 

 

 

 

Although Intel and NVIDIA have never been “close” in a business sense, the modern sabre-rattling between the two doesn’t start until around 2008. At the time NVIDIA was moving forward with CUDA and G80 in order to gain a foothold in the high margin HPC market, while at the same time Intel was moving forward with their similarly parallel x86-based Larrabee project. In the FTC case we saw the fallout of this, as the FTC charged Intel with misrepresenting Larrabee and for lack of better words badmouthing NVIDIA’s GPGPU products at the same time.

http://www.anandtech.com/show/4122/intel-settles-with-nvidia-more-money-fewer-problems-no-x86

 

Patrick can try to paint Intel as the hero and Nvidia as the villain, but Intel was using dirty tactics back then. This was not uncommon either, as Intel also got in trouble for fabricating benchmarks back in the Pentium 4 days. http://www.extremetech.com/computing/193480-intel-finally-agrees-to-pay-15-to-pentium-4-owners-over-amd-athlon-benchmarking-shenanigans

 

The point is, Intel is not as noble a cause as Patrick's Crusade would have you believe. Both Intel and Nvidia dragged each other through the mud after both sides failed to get what they wanted. Intel attacked Nvidia's reputation, Nvidia attacked Intel's chance at competing with them. 

 

@patrickjp93 Don't worry babe, at least I still love you.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Reading the Tom's Hardware description, my first reaction was: "What a fantastic piece of engineering, but what a stupid price."

 

Then I remembered that AMD is having supply issues with HBM. They are going to easily sell out all the Fiji cards which they can produce. So in that environment they may as well keep the price high and make as much money as possible.


@MageTank

Not to mention that, to actually retract the agreement, Nvidia would first have to sue Intel, go through the courts, and then get Intel to stop using their IP.

That is something that can take multiple years. The Larrabee project was already delayed a lot, IIRC, and would have had disappointing performance compared to what was on the market.

 

Larrabee was a stillborn project for graphics.



Lol, Intel has already been through their FX-9590/CMT phase, and that was the Pentium 4 and NetBurst era (roughly 2000/2001 to 2008/2009). And never forget that there was a rather large gap between Intel's dGPU from the 90s and their first iGPU, which means they effectively started years behind ATI/AMD and Nvidia. And while Intel will eventually run out of space (also, Intel would be competing with Nvidia and AMD in dGPUs right now, but Nvidia panicked, withdrew crucial IP and kept Intel out of the graphics card market), compare the die size of their older CPUs, and even Haswell-E, to what is currently their mainstream CPU, and try to prove to me that Intel, with their high-efficiency CPU architecture, doesn't have a lot of room for performance increases.

I was referring to the TDP of the 9590, not the huge failure that CMT is... but that too, I suppose.

 

An efficient architecture they have indeed, I won't argue.

However, there ARE signs already... documented signs.

 

Haswell Refresh (4690K) vs Broadwell (5675C) vs Skylake (6600K)...

88W TDP -> 65W TDP -> 91W TDP

 

Broadwell is stronger than Haswell in both the iGPU and the CPU. Its iGPU is also much stronger than Skylake's... yet Skylake ships with the weaker iGPU. I think Intel learned their lesson after cheaping out on the TIM in the original Haswell, so I bet that is far from what is causing the 3W higher TDP...

Yes, I know, it's 3W, big deal... but they shrunk the node, and generally we see higher efficiency and a lower TDP from that: Broadwell went from 84W to 65W TDP. Yes, Broadwell is clocked lower than the Haswell and Skylake CPUs, but it has a beefier iGPU running at higher frequencies...

 

And this is the interesting part...

Skylake features GT2... it has 4 more EUs than Haswell, but they are clocked at a lower frequency... the CPU is clocked pretty similarly (aside from a higher boost)... BUT Skylake does NOT have the FIVR inside the chip, which is part of what "helped" increase Haswell's TDP. That means that to gain 4 EUs, even at lower clocks, and to gain higher IPC, Intel must have had to increase their TDP notably... by how much? I do not know; I do not know what the TDP of a "non-FIVR" Haswell would be... But it means their new 14nm architecture, despite having removed a notable heat source from the die, STILL gets hotter at almost identical frequencies...
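To put the "more EUs at a lower clock" trade-off in rough numbers, here is a small sketch. It assumes 16 FP32 FLOPs per EU per clock (the commonly quoted figure for Intel Gen graphics EUs) and approximate maximum iGPU clocks for the 4690K's HD 4600 and the 6600K's HD 530; treat the outputs as ballpark theoretical peaks, not measured performance:

```python
# Rough theoretical FP32 throughput for an Intel Gen iGPU:
# EUs * FLOPs per EU per clock * clock (GHz) = GFLOPS.
def igpu_gflops(eus: int, max_clock_ghz: float, flops_per_eu_clk: int = 16) -> float:
    return eus * flops_per_eu_clk * max_clock_ghz

print(f"HD 4600 (Haswell GT2, 20 EUs @ ~1.20 GHz): {igpu_gflops(20, 1.20):.0f} GFLOPS")
print(f"HD 530  (Skylake GT2, 24 EUs @ ~1.15 GHz): {igpu_gflops(24, 1.15):.0f} GFLOPS")
```

So on paper the extra EUs more than offset the lower clock, and that added width has to show up somewhere in the power budget.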

 

 

So what is there to learn?

Either Intel made a mistake somewhere in the design, too late in the process to fix in this iteration of Skylake, causing extra heat buildup, or the extra EUs and the changes made to increase IPC have genuinely added more heat.

 

They are still at 91W for an i5... it's not a big deal. Kaveri is rated at 95W (realistically a stock Kaveri is more like 86-89W under load)... FX is rated between 125W and 220W...

Intel has still not reached any sort of "tipping point", and they will not reach it for a while, I bet...

 

 

The other sign of WHY Intel cannot beat AMD or Nvidia with their iGPUs comes from Carrizo.

I do not know how much YOU know, but I've read almost every review of it there is; I got excited and hoped it would come to desktop. It will not.

The reason it will not is that, to gain all the features and performance, they had to apply a super-high-density design to the iGPU and parts of the CPU. The result is that Carrizo sees negative scaling versus Kaveri once you go beyond 35W... at that point, leakage and heat stop it.

 

Now, if AMD, with their extensive knowledge of how to build CPUs and GPUs (their knowledge of marketing and economics aside), cannot overcome certain issues, then apart from a node shrink there is little likelihood that Intel will be any better suited to face these ordeals. There is NO question that, between Nvidia, AMD and Intel, AMD and Nvidia are far more knowledgeable about high-performance GPU design than Intel. After all, they have been at it for decades, while Intel spent a decade just adding an iGPU so people had a backup solution...

 

Intel will hit a wall; when, I do not know. But they will not beat or compete with AMD or Nvidia outside of very low-end graphics... I would be surprised if Intel, even within the next 10 years, could reach "current-gen GTX 950" performance with any of their iGPUs without that SKU demanding an AIO or a full custom loop to cool it...

 

Even then, the price of such an SKU, from a motherboard that can handle the TDP for extended periods of time to a chip that can handle it, would be phenomenal... It would certainly be in the range of what an FX-9590 setup costs.


I was referring to the TDP of the 9590, not the huge failure that CMT is... but that too, I suppose.

A 4GHz Pentium 4 with a 220W TDP: Intel decided that it consumed too much power and put out too much heat to run properly in most computers, so they scrapped it, and NetBurst with it.

 

Also, the rated TDP does not mean they put out that amount of heat (and the same goes for power). As for the apparently higher levels of heat, Intel has gone back to their old shitty TIM since it's cheaper than the stuff used with Devil's Canyon, and they can't solder the IHS on without destroying transistors.

 

I'll re-read the rest tomorrow, but a lot of that seems to be rambling and contradicting yourself.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Comparing it to the mini-ITX 970, LMFAO, that's good marketing!!!!

They know the 970 has issues (bad minimums/averages) at 4K and beyond 1440p, so yeah, let's use that, we'll look epic.

I see it as a 290X with less power required to get there, which IS great.

Maximums - Asus Z97-K /w i5 4690 Bclk @106.9Mhz * x39 = 4.17Ghz, 8GB of 2600Mhz DDR3,.. Gigabyte GTX970 G1-Gaming @ 1550Mhz

 


 

Intel will hit a wall; when, I do not know.

 

That time has already come. They promised that going from Broadwell to Skylake would be like going from Prescott to Conroe. http://wccftech.com/intels-broadwell-skylake-uarch-transition-big-prescott-conroe/

 

Clearly that did not happen, as going from Broadwell to Skylake was a step backwards in gaming performance, and barely a step above Haswell in everything else. The only thing Skylake has going for it are the Z170 features. Its platform is amazing, but the CPU is nothing new. Granted, they could have been talking about the mobile SKUs or even the Xeons; we don't really know yet. All I do know is that, with all the delays we have seen (Broadwell was delayed for a long time, and now Cannonlake is being delayed), I think it is safe to say Intel has run out of unicorn dust. The magic is slowly disappearing. This is why AMD is primed to catch up with Zen.

 

If Zen matches the IPC of Haswell, then it is going to be legit. Not only will it compete with the Z97 platform, it will also compete with the X99 platform at the same time. It will offer quad-channel memory support, the same storage solutions, the ability to run quad SLI without a PEX 8747 bridge (which means cheaper boards), and 8 cores/16 threads for people who want to game AND work at the same time. If what Keller promises is true, then Intel is in for a world of hurt, and Patrick's crusade won't be able to do anything to stop it, much to his dismay.



That time has already come. They promised that going from Broadwell to Skylake would be like going from Prescott to Conroe. http://wccftech.com/intels-broadwell-skylake-uarch-transition-big-prescott-conroe/

 

Clearly that did not happen, as going from Broadwell to Skylake was a step backwards in gaming performance, and barely a step above Haswell in everything else. The only thing Skylake has going for it are the Z170 features. Its platform is amazing, but the CPU is nothing new. Granted, they could have been talking about the mobile SKUs or even the Xeons; we don't really know yet. All I do know is that, with all the delays we have seen (Broadwell was delayed for a long time, and now Cannonlake is being delayed), I think it is safe to say Intel has run out of unicorn dust. The magic is slowly disappearing. This is why AMD is primed to catch up with Zen.

 

If Zen matches the IPC of Haswell, then it is going to be legit. Not only will it compete with the Z97 platform, it will also compete with the X99 platform at the same time. It will offer quad-channel memory support, the same storage solutions, the ability to run quad SLI without a PEX 8747 bridge (which means cheaper boards), and 8 cores/16 threads for people who want to game AND work at the same time. If what Keller promises is true, then Intel is in for a world of hurt, and Patrick's crusade won't be able to do anything to stop it, much to his dismay.

I'll put it this way: I'd only consider Zen if my Sabertooth MKII (new) died, taking my i7 4790K with it (also new), because I could afford to save up by simply switching back to my (POS when it comes to features) H87M-Pro and i5 4440. Granted, some of the features are rather compelling. However, the way things stand, AMD cannot afford to screw up any part of it, from the architecture right down to marketing (which they suck at).

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


marketing (which they suck at).

Saw this today; it's old, but people may not have seen this gold, so I had to add it here, because your post reminded me of it (marketing).

 

Years later, AMD released their reference 290/290X line... lol



I'll put it this way: I'd only consider Zen if my Sabertooth MKII (new) died, taking my i7 4790K with it (also new), because I could afford to save up by simply switching back to my (POS when it comes to features) H87M-Pro and i5 4440. Granted, some of the features are rather compelling. However, the way things stand, AMD cannot afford to screw up any part of it, from the architecture right down to marketing (which they suck at).

You are 100% correct. This is AMD's last all-or-nothing shot. If they fail to deliver, then it's the end for them. However, seeing the block diagram, I know it will be an improvement over the FX lineup, because it's impossible not to be. I only hope it can deliver Haswell-level IPC. If it does, they can finally compete again. In an ideal world, AMD would sell ATI to Intel and compete only in CPUs. Trying to take on Nvidia and Intel at the same time is very taxing on their engineers.



Saw this today; it's old, but people may not have seen this gold, so I had to add it here, because your post reminded me of it (marketing).

 

Years later, AMD released their reference 290/290X line... lol

Yeah... it's easy to imagine which GPU was being referenced there (GTX 480 Fahrenheit, anyone?). Though it definitely suits the 290X.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


You are 100% correct. This is AMD's last all-or-nothing shot. If they fail to deliver, then it's the end for them. However, seeing the block diagram, I know it will be an improvement over the FX lineup, because it's impossible not to be. I only hope it can deliver Haswell-level IPC. If it does, they can finally compete again. In an ideal world, AMD would sell ATI to Intel and compete only in CPUs. Trying to take on Nvidia and Intel at the same time is very taxing on their engineers.

Except AMD generally does a better job of competing with NVIDIA. They have the performance to keep NVIDIA honest and influence pricing etc.

They haven't competed with the Intel juggernaut for a long time.


You are 100% correct. This is AMD's last all-or-nothing shot. If they fail to deliver, then it's the end for them. However, seeing the block diagram, I know it will be an improvement over the FX lineup, because it's impossible not to be. I only hope it can deliver Haswell-level IPC. If it does, they can finally compete again. In an ideal world, AMD would sell ATI to Intel and compete only in CPUs. Trying to take on Nvidia and Intel at the same time is very taxing on their engineers.

Having to do too much at once wouldn't be a problem if AMD hadn't acted like idiots and bought ATI while they were in debt. The financial side of things is where AMD has totally lost the plot. (CMT was the cheapest and easiest way to make a CPU that, on the surface, appeared to be an upgrade over its predecessors.)

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Except AMD generally does a better job of competing with NVIDIA. They have the performance to keep NVIDIA honest and influence pricing etc.

They haven't competed with the Intel juggernaut for a long time.

AMD offers superior graphics hardware, but their market share is abysmal. Their software team is also not on par with Nvidia's. CPU-wise, AMD still has a foothold in the console and mobile markets. If they can get back into the consumer CPU market, they might make a big enough splash to get people's attention again. They are just not making large enough strides to pick up GPU market share. Granted, I am very impressed with AMD cards lately, given the idea of HBM, and now a very tiny, very powerful R9 Nano. Clearly AMD is thinking outside the box, which is a good thing.

I just do not see them being able to fight on both fronts. Picking one specific area to focus on, and doing it better than everyone else, would be ideal, rather than being second best at two things.



Having to do too much at once wouldn't be a problem if AMD hadn't acted like idiots and bought ATI while they were in debt. The financial side of things is where AMD has totally lost the plot. (CMT was the cheapest and easiest way to make a CPU that, on the surface, appeared to be an upgrade over its predecessors.)

The idiocy was paying twice ATI's value to acquire it... if they had settled at only $4.5 billion, a billion less, then their payment plan would have been much easier to manage, and they would be in WAY better shape now...


AMD offers superior graphics hardware, but their market share is abysmal.

They don't have the market share, true. But considering where they stand in terms of IP, technology, performance, track record, etc., there is a relatively higher probability of them having a future competing with NVIDIA than there is of them competing with Intel. How often in the last 20 years have they put out products that kept Intel honest?

It's probably a moot point anyway, because the future is the CPU and graphics integrated. They need both.


They don't have the market share, true. But considering where they stand in terms of IP, technology, performance, track record, etc., there is a relatively higher probability of them having a future competing with NVIDIA than there is of them competing with Intel. How often in the last 20 years have they put out products that kept Intel honest?

It's probably a moot point anyway, because the future is the CPU and graphics integrated. They need both.

I guess I had not thought of that: them losing their APU rights, which would also affect their mobile and console markets. Good point.



Having to do too much at once wouldn't be a problem if AMD hadn't acted like idiots and bought ATI while they were in debt. The financial side of things is where AMD has totally lost the plot. (CMT was the cheapest and easiest way to make a CPU that, on the surface, appeared to be an upgrade over its predecessors.)

CMT could have done it too if they hadn't boned themselves by cutting corners. Make every facet of CMT the best you can, and it would have been competitive with Intel from the get-go. But they let too many parts of their new arch underperform, or were willing to trade performance away for cost, in the design phase.

