AMD reports best quarterly profit in 7 years thanks to new products

ItsMitch

And I am pretty sure that Hawaii didn't consume 300W; that's more like what a Fiji GPU did.


1 minute ago, cj09beira said:

And I am pretty sure that Hawaii didn't consume 300W; that's more like what a Fiji GPU did.

True, and since Hawaii has hard TDP limits, power consumption depends on the manufacturer...

Some cards, like the HIS 290, are rather efficient.

 

Core temperature also has a huge impact on power consumption. I think the breaking point was somewhere between 70 and 80°C, where running 5-10°C cooler makes something like a 20W difference or so...
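(Illustrative toy model only, not measured Hawaii data: the ~2% per °C leakage growth and the 60 W leakage share at 80 °C below are assumptions.)

```python
# Toy sketch: static (leakage) power tends to grow roughly exponentially with die
# temperature; the coefficient and baseline here are assumed, not measured.
def leakage_power_w(p_leak_at_ref_w, ref_temp_c, temp_c, growth_per_deg=0.02):
    """Scale a reference leakage figure by an assumed ~2% per degree C."""
    return p_leak_at_ref_w * (1.0 + growth_per_deg) ** (temp_c - ref_temp_c)

# Assume ~60 W of the card's board power is leakage at 80 degrees C:
for t in (70, 75, 80):
    print(f"{t} C -> ~{leakage_power_w(60.0, 80.0, t):.0f} W leakage")
# ~49 W at 70 C, ~54 W at 75 C, ~60 W at 80 C; real cards can swing more than this.
```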

 

 

And again, Hawaii had a wider memory interface and more RAM: 512-bit (vs 384-bit) and 4, later 8, GiB of VRAM.
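(For scale, a minimal sketch of how bus width feeds theoretical bandwidth; the 5 Gbps and 6 Gbps data rates are assumptions for typical reference cards of that era.)

```python
# Theoretical GDDR5 bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
def gddr5_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(gddr5_bandwidth_gb_s(512, 5.0))  # 512-bit @ 5 Gbps -> 320.0 GB/s (Hawaii-style)
print(gddr5_bandwidth_gb_s(384, 6.0))  # 384-bit @ 6 Gbps -> 288.0 GB/s
# The wider bus (and the extra memory chips) buys bandwidth and capacity,
# but it also costs board power, which is part of the comparison above.
```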

"Hell is full of good meanings, but Heaven is full of good works"


For those of you calling on AMD to make a serious attempt at beating Nvidia:

 

The Radeon HD 5870 launched on September 23rd, 2009 at $379.

The GeForce GTX 480 launched on March 26th, 2010 at $499.

 

That's not a month, or two months; there is a SIX MONTH GAP between the two cards.

 

5870

+ Runs cooler: 77°C vs the 480's 95°C

+ Consumes less power: 188W vs the 480's 254W

+ Is cheaper: launch prices differed by about $100-120

Sourced from Guru3d review articles from 2009/2010. Both cards were tested in stock configuration.

 

The 480 was 6 months late, cost $100 more, performed worse, ran hotter, and consumed more power. Still, Nvidia's market share in the first half of 2010 was about 54%, which somehow improved to nearly 59% by the end of the year. Even when Nvidia releases a clearly atrocious product, far behind schedule and more expensive than the competition, they grab share. The 480 was even a bigger chip than the 5870, at 529mm² for Nvidia versus 334mm² for Radeon.

 

Fanboys have been seeking a monopoly for well over a decade now, and you finally succeeded. You finally did it: Nvidia has a monopoly on enthusiast graphics cards. AMD can't be bothered even attempting to break into a market that has shown a persistent wish for a monopoly. And now the results are coming in: you're not getting better graphics cards, you're not getting cheaper graphics cards; what you're getting is stagnation. The 1000 series has been by far the longest-running Nvidia series since the 200 series.

 

If you had gotten your wish back in the GeForce 8000 series era, at the pace we are moving forward now we would only be in the middle of the 600 series.

Motherboard: Asus X570-E
CPU: 3900x 4.3GHZ

Memory: G.skill Trident GTZR 3200mhz cl14

GPU: AMD RX 570

SSD1: Corsair MP510 1TB

SSD2: Samsung MX500 500GB

PSU: Corsair AX860i Platinum


24 minutes ago, MMKing said:

-Snip.-

Strange, considering:

The only real kick in the pants I saw in that generation was that within six months, NVIDIA released the GTX 580 for $500. What was AMD's counter? The HD 6970, which was at best as good as the GTX 570.

 

Looking at the history of GPU releases, the two have gone back and forth, leapfrogging each other.


2 hours ago, M.Yurizaki said:

Looking at the history of GPU releases, the two have gone back and forth, leapfrogging each other.

From memory, the times when ATI/AMD were significantly ahead were the X800/X1000 generation and the 7970 (less so than the X800/X1000, but great nonetheless). Both have released turds more than once; consumers really only have short-term memories though. I mean, how many people care about the X800, the 7970, or even the R9 200? Old and irrelevant now, and nothing revolutionary enough for people to remember.


15 minutes ago, leadeater said:

From memory, the times when ATI/AMD were significantly ahead were the X800/X1000 generation and the 7970 (less so than the X800/X1000, but great nonetheless). Both have released turds more than once; consumers really only have short-term memories though. I mean, how many people care about the X800, the 7970, or even the R9 200? Old and irrelevant now, and nothing revolutionary enough for people to remember.

I recall the R400 series starting off strong but falling behind as DX9.0c was favored over 9.0b; then both companies were pretty equal for a while.

 

But yeah, you know what they say: you're only as good as your last performance.


2 hours ago, M.Yurizaki said:
  • Most of the time the GTX 480 was faster than the HD 5870 and a little bit slower than the HD 5970

Does that justify +100W at idle? And not much less of a gap under load?

"Hell is full of good meanings, but Heaven is full of good works"


Just now, M.Yurizaki said:

Do we fucking care anymore?

Only if AMD consumes more, it seems...

If it is the other way around, nobody seems to care...

 

Because:

https://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/19

 

The GTX 400 series didn't even have idle clock rates. AMD introduced that with the HD 3000 series AFAIR, though it didn't work or wasn't implemented on the GDDR4 (or was it 5?) cards; IIRC the HD 3850 and 4850 had something like that...

 

"Hell is full of good meanings, but Heaven is full of good works"


8 hours ago, Stefan Payne said:

Like Skylake-X, yet nobody cares about that with Skylake-X.

 

Oh, again with the driver instability lie? If you don't have any other argument, you drag that skeleton out of the closet?

Because we ignore all the problems that nVidia had, like dying cards in WoW, or crashes in the original Tomb Raider that weren't fixed for months (AFAIR it took them half a year to fix them).

 

So why don't you say anything about the shitty nVidia driver, or the incompetence when there is a problem?

 

Yes, and it has 2 GiB more VRAM and a 64-bit wider memory interface...

And AMD Ryzen is more efficient than Coffee Lake and consumes less power. Also something that isn't mentioned much.

Sorry, but you are just justifying paying more for the shittier card...

 

And since you are talking about Efficiency, do you have a Ryzen CPU?

People did care about Skylake-X: der8auer and the hot VRMs/cables, the ~800W consumption when overclocked, and motherboard manufacturers using a 4GHz all-core boost instead of the Intel specification, which led to insane power consumption figures. At stock, Skylake-X is not too bad and consumes about the same as Threadripper.

https://www.anandtech.com/show/11839/intel-core-i9-7980xe-and-core-i9-7960x-review/14

 

I never said NVIDIA had rock solid drivers, I said that in the first few months Crimson was unstable as well as lacking in features (until they scrapped CCC).

 

If the VRAM is such a key component why does the 4GB RX X80 not wreck the 8GB version by boosting higher?

It has an effect but not one which you would notice in real life.

NVIDIA also uses more advanced color compression, so it can save on bandwidth.

 

Yes, I didn't mention it because I was replying to a post about AMD's GPUs. They are competitive to an extent, but mainly if you are mining, due to the higher compute performance.

 

I do not.


2 minutes ago, Humbug said:

 

I saw that earlier. Very good news; that extra money is probably going to be used for Zen 3 and the next-gen GPUs.


5 hours ago, ScratchCat said:

People did care about Skylake-X: der8auer and the hot VRMs/cables, the ~800W consumption when overclocked, and motherboard manufacturers using a 4GHz all-core boost instead of the Intel specification, which led to insane power consumption figures.

You mean the 'Hot Things Get Hot Saga'?

9 hours ago, leadeater said:

From memory, the times when ATI/AMD were significantly ahead were the X800/X1000 generation and the 7970 (less so than the X800/X1000, but great nonetheless). Both have released turds more than once; consumers really only have short-term memories though. I mean, how many people care about the X800, the 7970, or even the R9 200? Old and irrelevant now, and nothing revolutionary enough for people to remember.

Be gone with your logic! This is LTT, not an angle whore house lead by a black preacher!

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


8 minutes ago, Drak3 said:

Be gone with your logic! This is LTT, not an angle whore house lead by a black preacher!

This can only call for a type of post that you just love so much from me :).

 

(attached image: a cheesy/corny joke meme)

 

This is your Pun ishment.


On 7/25/2018 at 4:49 PM, AnonymousGuy said:

Before anyone gets too excited: Intel makes > $4 billion in net income per quarter.  More than 3x what AMD makes as revenue.

 

AMD is going under / going to be acquired,  it's just a matter of when.

What kind of baloney are you trying to spin here besides pure fanboyism? People said the exact same thing in 2016, and look at where we are now. No, AMD is definitely not going under, and saying otherwise is just pure wishfulness.

Lappy: i7 8750H | GTX 1060 Max Q | 16Gb 2666Mhz RAM | 256Gb SSD | 1TB HDD | 1080p IPS panel @60Hz | Dell G5


On 7/26/2018 at 7:35 PM, leadeater said:

From memory, the times when ATI/AMD were significantly ahead were the X800/X1000 generation and the 7970 (less so than the X800/X1000, but great nonetheless). Both have released turds more than once; consumers really only have short-term memories though. I mean, how many people care about the X800, the 7970, or even the R9 200? Old and irrelevant now, and nothing revolutionary enough for people to remember.

The Radeon 9700 and 9800 (Pro) demolished anything Nvidia had at the time, apparently. Though at the time, I played primarily on the PlayStation 1/2, so I've little personal experience with that gen.

My eyes see the past…

My camera lens sees the present…


On 27.7.2018 at 9:01 AM, ScratchCat said:

People did care about Skylake X,

Yet people are still buying that shit, despite the enormous power consumption and the lid not being soldered, using shitty TIM instead.

 

On 27.7.2018 at 9:01 AM, ScratchCat said:

I never said NVIDIA had rock solid drivers, I said that in the first few months Crimson was unstable as well as lacking in features (until they scrapped CCC).

Must have been your imagination, and it's already ancient.

So why mention skeletons in closets?!
 

Oh, and let's talk about Final Fantasy 7/8, the initial Windows releases for Windows 9x...

And there are also other examples that didn't run well on nV at all...

 

The driver thing is just utter bullshit that we as end users cannot really comment on, yet many people try to comment on it and present nVidia in a better light than they deserve.

 

On 27.7.2018 at 9:01 AM, ScratchCat said:

If the VRAM is such a key component why does the 4GB RX X80 not wreck the 8GB version by boosting higher?

What the hell?!
More VRAM is more important than higher clocks.
See the GeForce4 Ti 4200 for reference: the 64 MiB version was clocked higher than the 128 MiB version, yet the 64 MiB version had no chance to compete in later games because performance was crippled by the lack of VRAM.

 

So yeah, the one or two GiB of extra VRAM that AMD had/has were important. You can see it in modern games in how well Tahiti still performs, and in how much it beats the older, more expensive Kepler cards...

 

On 27.7.2018 at 9:01 AM, ScratchCat said:

MVIDIA also is using more advanced color compression so can save on bandwidth.

Yes, and they throw money at the drivers with their optimizations.

You could say that they are cheating, according to a post from someone on the 3DCenter forum who mentioned that objects were missing after they sent the code to nVidia.

 

On 27.7.2018 at 9:01 AM, ScratchCat said:

Yes I didn't mention it because I was replying about a post of AMDs GPUs. They are competitive to an extent, but mainly if you are mining due to the higher compute performance.

No, also for gaming. They are competitively priced, and the performance is on par with or better than the 1060, while offering more RAM.

 

 

There is nothing wrong with Polaris at all.

On 27.7.2018 at 9:01 AM, ScratchCat said:

I do not.

So you only talk about power consumption/efficiency when it's beneficial to you?

And if Intel has higher power consumption, that's fine?

 

Or what do you mean?

"Hell is full of good meanings, but Heaven is full of good works"


19 minutes ago, Zodiark1593 said:

The Radeon 9700 and 9800 (Pro) demolished anything Nvidia had at the time, apparently. Though at the time, I played primarily on the PlayStation 1/2, so I've little personal experience with that gen.

Yes, because at the time, ATi was ahead of its time.

And you could argue that the 9700 and 9800 are not much more than a fixed and slightly improved version of the Radeon 8500 (R200 chip), just with doubled execution units.

 

And one of the reasons the R300 was so superior was that it was simply a wider architecture. If you compare the FX 5800 with the Radeon 9500 and 9600, it doesn't look that bad for nVidia ;)

 

But they still had the problem that the TMUs and shaders were dependent on each other, while that was not the case on ATi. On the ATi architecture, neither TMU operations could stall the shaders nor shader operations the TMUs, while exactly that was the case with the CineFX architecture: TMU and shader sat on one pipeline.

 

Also, at the time, nVidia was seen to do some rather shady stuff to optimize the performance of their architectures, like reducing operations that had a minimum required precision of 24-bit down to 16-bit, among other things. Just look up the 3DMark cheats from nVidia at the time.

The R300 architecture worked at 24-bit precision.

 

And AMD kept the architecture mostly unchanged until the X850. With the X1000 series they had to modify it, but you could argue that it was still the same architecture, even if it became a full DX9 one.

 

Then the R600/TeraScale architecture came, which stayed with us until the HD 6000 series (with improvements, of course)...

 

And right now people are speculating whether AMD might go back to a VLIW architecture with Navi, or something similar.

"Hell is full of good meanings, but Heaven is full of good works"


On 7/27/2018 at 9:25 PM, Humbug said:

 

This is what I have been waiting for for some time now. As the old saying in business goes: you need money to make money.

 

To the rest of this thread: it is almost sad what some people believe.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


On 7/26/2018 at 11:22 PM, cj09beira said:

And I am pretty sure that Hawaii didn't consume 300W; that's more like what a Fiji GPU did.

It did; they specifically added the "Uber" BIOS setting to let the reference card cope with the heat produced (it allowed a higher fan speed).


 


On 7/27/2018 at 1:31 PM, Drak3 said:

You mean the 'Hot Things Get Hot Saga'?

Be gone with your logic! This is LTT, not an angle whore house lead by a black preacher!

Yes, the jokes/memes were cringeworthy:

"I hear Intel are helping the fusion research groups in America; with all the money they have, they should have fusion going within a year"

"Actually, they are sending i9 7980XEs to heat the plasma; if we get hold of the 28-core 5GHz SKUs we'll be done in 5 weeks"

 

Also, you need to update your title.


5 hours ago, SpencerC said:

What kind of baloney are you trying to spin here besides pure fanboyism? People said the exact same thing in 2016, and look at where we are now. No, AMD is definitely not going under, and saying otherwise is just pure wishfulness.

Just ignore it; it's like when the "AMD is dead" tech news topic was necroposted and we ended up with 4 pages of people insulting others over 2-year-old posts.


11 hours ago, SpencerC said:

What kind of baloney are you trying to spin here besides pure fanboyism? People said the exact same thing in 2016, and look at where we are now. No, AMD is definitely not going under, and saying otherwise is just pure wishfulness.

The difference is that in 2016 many agencies/economists were reporting AMD as having a very high probability of going bankrupt (its Altman Z-score had been in the distressed zone for 5 years); it wasn't just forum rhetoric. The fact they didn't go under is amazing, not expected.
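(For anyone curious, the classic 1968 Altman Z-score for public manufacturers is just a weighted sum of balance-sheet ratios; the figures in the sketch below are placeholders, not AMD's actual numbers.)

```python
# Classic Altman Z-score; Z < 1.81 is commonly read as the "distressed" zone,
# Z > 2.99 as "safe". All inputs below are placeholder values.
def altman_z(working_capital, retained_earnings, ebit,
             market_cap, total_liabilities, sales, total_assets):
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # accumulated profitability
    x3 = ebit / total_assets                 # operating profitability
    x4 = market_cap / total_liabilities      # leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

print(round(altman_z(1.0, -5.0, 0.2, 8.0, 3.0, 5.0, 3.5), 2))  # placeholder inputs
```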

 

Nowadays, saying the same thing is certainly a little more naive. But let's not undermine legitimate postulation with hindsight.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


19 hours ago, Stefan Payne said:

Yet people are still buying that shit, despite the enormous power consumption and the lid not being soldered, using shitty TIM instead.

There are those who buy it because of the performance (it is much better in games than what AMD offers due to its ability to boost higher, though the gap is narrowing; compared to Threadripper, its performance in AVX tasks is far superior) or because they know no better. I am not saying it is a good product (just look at the VROC key required for RAID), but there are valid reasons to purchase it.

19 hours ago, Stefan Payne said:

Must have been your imagination, and it's already ancient.

So why mention skeletons in closets?!
 

Oh, and let's talk about Final Fantasy 7/8, the initial Windows releases for Windows 9x...

And there are also other examples that didn't run well on nV at all...

 

The driver thing is just utter bullshit that we as end users cannot really comment on, yet many people try to comment on it and present nVidia in a better light than they deserve.

I used Crimson; after a few months it became rather good, but the initial builds caused a multitude of crashes, and expected features such as WattMan were only introduced for the other cards a while later. I am not defending NVIDIA, as they really have a terribly slow menu system; I am merely saying that in those first few months AMD had the less reliable driver.

 

You are comparing Crimson (2 years ago) to something from almost 2 decades ago; AMD still uses silicon designed in the Crimson era (Polaris) in the majority of their products today.

 

Why should end users not be able to comment on drivers? Without drivers, the product they sell would hardly be functional, and no one would accept a non-functional product.

19 hours ago, Stefan Payne said:

What the hell?!
More VRAM is more important than higher clocks.
See the GeForce4 Ti 4200 for reference: the 64 MiB version was clocked higher than the 128 MiB version, yet the 64 MiB version had no chance to compete in later games because performance was crippled by the lack of VRAM.

 

So yeah, the one or two GiB of extra VRAM that AMD had/has were important. You can see it in modern games in how well Tahiti still performs, and in how much it beats the older, more expensive Kepler cards...

Clocks are more important so long as sufficient VRAM is present to avoid slowdowns. You are right that if the VRAM doesn't suffice (the 6GB of the 1060 sometimes does not, and the 3GB version always has issues), the performance of the chip itself no longer matters.

However, I am talking about non-VRAM-limited situations, where the power gap between 4GB and 8GB would allow the 4GB version to boost higher if power limited. I never said that memory capacity does not matter, but rather that the difference in power you attributed to the 2GB more VRAM on the 480 vs the 1060 is most likely negligible. This part was a misunderstanding, it seems.

20 hours ago, Stefan Payne said:

Yes, and they throw money at the drivers with their optimizations.

You could say that they are cheating, according to a post from someone on the 3DCenter forum who mentioned that objects were missing after they sent the code to nVidia.

Throwing money at the drivers is reasonable if it leads to noticeable improvement.

Either NVIDIA is cheating in a non-visible way (people would almost certainly have detected it by now if it were visible), it only happened in a certain situation, or it led to the desired result, i.e. removing excessive particle effects or reusing another object (Skyrim uses the same object for single-height cupboards as well as double and triple height by sinking the triple-height version into the floor, in order to reduce memory usage).

20 hours ago, Stefan Payne said:

No, also for gaming. They are competitively priced, and the performance is on par with or better than the 1060, while offering more RAM.

I never said they were not competitive; I said they are competitive to an extent, with approximately the same performance, the same price, higher power consumption, more VRAM, exceptional performance in Vulkan/DX12 games, poor NVIDIA GameWorks support (then again, GameWorks is simply evil, nothing to do with AMD), generally meh performance in NVIDIA TWIMTBP games (see the GameWorks comment), and finally excellent compute performance.

 

TL;DR: compute and Vulkan make Polaris shine; otherwise the performance is approximately the same at a higher power consumption.

20 hours ago, Stefan Payne said:

There is nothing wrong with Polaris at all.

I once again never said Polaris was bad, only that the efficiency was worse than that of Pascal.

20 hours ago, Stefan Payne said:

So you only talk about power consumption/efficiency when it's beneficial to you?

And if Intel has higher power consumption, that's fine?

 

Or what do you mean?

Could you explain how not owning a Ryzen chip means I only talk about power consumption and efficiency when it suits me?

 

Efficiency is what counts: if CPU X consumes 2x the power of CPU Y but performs 4x better, X would be superior in almost all cases.
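(A trivial perf-per-watt sketch of that example, with made-up units:)

```python
# Performance per watt for the hypothetical CPUs X and Y described above.
def perf_per_watt(performance, power_w):
    return performance / power_w

cpu_x = perf_per_watt(performance=4.0, power_w=2.0)  # 4x the work at 2x the power
cpu_y = perf_per_watt(performance=1.0, power_w=1.0)  # baseline
print(cpu_x, cpu_y, cpu_x > cpu_y)  # 2.0 1.0 True -> X is still the more efficient part
```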

 

 

Back to the point about the claim that AMD would not be purchased even if it were better:

  • Miners were buying container loads of RX x80s because they were simply brilliant to mine with, for both the purchase price and the electricity spent.
  • AMD CPUs are being placed into prebuilts.
  • AMD GPUs are being built into Apple products (would Apple buy inferior products if they had the choice?).
  • Intel's CEO publicly admitted they would lose server market share to AMD; if people were not going to buy AMD, would Intel really admit such a thing to their investors?

 

 

21 hours ago, Stefan Payne said:

Must have been your imagination, and it's already ancient.

So why mention skeletons in closets?!

A more recent comparison would have been the GTX 680 vs GTX 780 with minimum framerates. 

21 hours ago, Stefan Payne said:

See the GeForce4 Ti 4200 for reference: the 64 MiB version was clocked higher than the 128 MiB version, yet the 64 MiB version had no chance to compete in later games because performance was crippled by the lack of VRAM.

 


On 25.7.2018 at 11:49 PM, AnonymousGuy said:

Before anyone gets too excited: Intel makes > $4 billion in net income per quarter.  More than 3x what AMD makes as revenue.

Yep, and it's too bad Intel got off the hook far too easily for the shady business practices which put them in this position and pushed AMD down the drain.

