
[updated] AMD lowers revenue expectations because of poor APU sales - Q2 earnings call

zMeul

If AMD dies early, Intel would rapidly buy up ATI, and the FTC would have to decide who gets x86_64. The only two big companies interested in it (to me, anyway) are Nvidia and maybe Apple. Nvidia needs to differentiate and find more markets to serve, because Intel is heavily pressuring both IBM and Nvidia in the HPC space, and that's Nvidia's biggest money maker. The low end of the dGPU market is being pressured too, so unless multi-adapter in DX12 becomes rapidly ubiquitous, that market will pretty much die off in the next couple of years thanks to both Intel and AMD. If Nvidia gets x86_64, it will be able to challenge Intel in mobile SoCs very rapidly with its Denver IP. Even if it takes a while to extract the new CPU IP it gets from AMD, it already has Denver sitting there, which gave Apple fits. A 64-bit x86 Denver would be a very strong but low-power tablet/2-in-1 SoC. Widen it to 4 cores and Intel would have trouble in laptops and in the Pentium/Celeron market as well. It would also put Nvidia in a prime position, since the major compilers are still far more heavily focused on and designed to optimize x86 than ARM.

Intel getting the ATI graphics IP... I'll leave that to your imagination, with Intel still a node ahead of everyone else and with far greater manufacturing scale than TSMC.

If AMD lives long enough for Intel to corner Nvidia and pressure its shareholders into a buyout, IBM will be defunct, Oracle will be helpless, Xilinx will have no benefactor (other than Oracle, which doesn't really use FPGAs anyway), and AMD will still be financially crippled and wide open. At that point Intel could literally drive AMD into the dirt the legal, competitive way, and it would face no legal threats over antitrust violations. That is a doomsday monopoly scenario unlike any other in the industry. And before you ask about Samsung, the FTC would never allow a foreign firm to hold power over the most ubiquitous instruction set in the world and in the U.S. I doubt Qualcomm wants to try to pick up the smoldering heap AMD has become either, and despite its server inclinations, Qualcomm is not interested in low-volume markets like the PC.

Intel must be shitting their pants every time an acquisition rumour about AMD pops up.

If Intel were nearly as interested as you're suggesting, there are other ways to get their hands on it.

Why on earth would they not secure something like this, if it meant the world to Intel?

Nvidia is more interested in ARM. I doubt they have much interest in x86; it simply doesn't offer big advantages in the markets they are targeting with their SoCs.

Apple is more likely to scale up an ARM core to unify their entire product line.

Nvidia doesn't need x86 to challenge Intel with their low-power SoCs. The market segments those are targeting give no real advantage to x86.

And on with the stories..

Well, I will leave you with your fantasy stories.

They are fun to read, but it is purely fantasy at this point..

You convinced me that AMD dying early would be the best thing for the market. Since apparently they can't become big and strong on their own anymore, that's done.

 

Well, I was gonna buy AMD stuff in the future just to support them, but I guess I'm gonna get the stuff with the better value though.

Great..


Because Intel wants the crown jewel. It wants to take down Nvidia so it can have all of that IP, including the Denver items. If Nvidia is taken down, no one will be able to compete against Intel. And no, Nvidia wants x86 first and foremost. We saw that with the Denver K1 and the demonstrated x86 emulation that Intel rapidly shut down. RISC is never going to beat CISC, and Nvidia knows this.

ARM offers no distinct advantages anyway. In terms of power and efficiency, Intel and AMD have both proven you can bring x86 down to the same power envelopes at the same performance as ARM, and that's with Intel's rigid performance-first design philosophy. And x86 still has one great advantage: the optimization engines in compilers are still miles ahead for x86, due to ARM's lack of ubiquity and thus the lack of incentive to chase it. Even the LLVM community has heavily slowed down its ARM pursuits.

It's not fantasy. It's logic. Pure, cold, evidence-based logic.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd



Okay, the crown jewel, by putting it all at stake?

Again, if you think a takedown is the only way, you're simply delusional.

Sure, there are many companies that have something that interests Intel. That doesn't mean they will take over the whole thing.

Nvidia was looking at x86 in the mid-2000s, but again, companies start many projects, and some are doomed to fail.

If they truly wanted it, they probably could have been more aggressive in their cross-licensing agreement.

Again, how much is it really worth to the company? Not enough, clearly.

What is the difference between CISC and RISC? A stage in the pipeline? CISC is RISC internally, and RISC is becoming more CISC-like (compared to the early days of CISC).

ARM does have the benefit in ultra-low power. Scale it up, and the advantage gets smaller.

64-bit ARM is much newer and doesn't have the x86 "bloat"; it's a more minimalist design.

The only thing Intel has proved is that a lot of paperwork and a shovel-ton of money still won't make your product any more competitive.

There's also the whole issue of Intel's TDP actually being SDP, and Intel's low-end processors are known to overdraw their TDP (SDP) by large margins.

How big of an advantage is their compiler optimization really in those market segments?

Logic? Not detected

Pure, cold, evidence-based logic? Yet, you have presented zero evidence.



CISC and RISC are instruction sets. They determine the instructions that the cores are able to feed into the pipeline. ARM currently does have that benefit; however, Intel and AMD have shown that as they improve the x86 architecture, the advantage will continue to diminish. The ISAs probably don't have anything to do with how power-efficient something is, since it might just be the microarchitecture. You can't really say that ARM is better if the actual architecture just happens to be more efficient than the architectures on an x86 chip.

Also, x86 being more optimized might be due to the fact that all development starts on the PC (which usually carries an x86 chip or similar). ARM is not really present on PCs, and I don't think there are any good ways of developing on a mobile device, which probably explains why compilers for x86 are better. That is just speculation on that point, though.



CISC and RISC are not instruction sets in the way you would think of x86. It is a design choice.

ARM has the benefit of much smaller, more power-efficient, and less bottleneck-prone decoders overall. This is the only advantage; otherwise they could be identical.

If we discuss RISC and CISC, we don't include anything else. Otherwise the comparison is off, and there are too many things that can change the outcome, so you will end up with a high margin of error.

The development could happen anywhere. They could be using a Linux server running on ARM hardware. But that is a good point, and I'm not quite sure how much it will/can affect things.



They pretty much are instruction sets, no? CISC (Complex Instruction Set Computer) and RISC (Reduced ...) both determine the sets of instructions that the computer is receiving. Because of the instruction sets, of course the design would have to change. We don't have chips with the same architecture design that can handle both CISC and RISC, so there really is no proof that either CISC or RISC is better than the other, since the difference could come from how the instruction sets are used instead of which instruction sets are being used. Statistically speaking, there are so many uncontrolled variables between ARM and x86 processors that the difference may actually come from elsewhere.

It's like taking two different runners and saying that one runner's way of running is better because he runs with a certain form and beat the other runner. There are too many uncontrolled variables, such as what they eat, metabolism, environment, and so on. So what I'm saying is that ARM does have that benefit, but it is unclear whether the benefit comes from RISC vs CISC or from elsewhere.



No, they are not. CISC, in its original sense, means that you have complex instructions which can be decoded into internal instructions (very similar to RISC instructions).

CISC != x86 and RISC != ARM.

MIPS also falls under RISC, as do many other instruction sets.

 

 

RISC has a theoretical advantage at low power. Is that better?  :P
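To make the "complex instructions decoded into internal RISC-like instructions" idea concrete, here's a rough Python sketch. The instruction names and micro-op format are made up purely for illustration; a real decoder works on binary opcodes, not tuples, but the shape of the transformation is the same.

# Toy illustration: a CISC-style "add register into memory" instruction being
# decoded into simpler internal micro-ops, next to a RISC-style program that
# spells those steps out explicitly. All names here are invented.

def decode_cisc(instr):
    """Expand one complex instruction into RISC-like internal micro-ops."""
    op, dst, src = instr
    if op == "add_mem":                 # e.g. add [dst], src (x86-style memory operand)
        return [
            ("load",  "tmp", dst),      # read memory into an internal temporary
            ("add",   "tmp", src),      # do the arithmetic on registers only
            ("store", dst,   "tmp"),    # write the result back to memory
        ]
    return [instr]                      # simple instructions pass through unchanged

cisc_program = [("add_mem", "[0x1000]", "r1")]     # one visible instruction
risc_program = [("load", "r2", "[0x1000]"),        # the same work, written out by the compiler
                ("add",  "r2", "r1"),
                ("store", "[0x1000]", "r2")]

internal = [uop for instr in cisc_program for uop in decode_cisc(instr)]
print(internal)        # three micro-ops, mirroring the RISC version
print(risc_program)

Same work either way; the difference is whether the split happens in the decoder or in the compiler.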



 

 

ARM is a family of instruction set architectures for computer processors developed by British company ARM Holdings, based on a reduced instruction set computing (RISC) architecture.

A RISC-based computer design approach means ARM processors require significantly fewer transistors than typical complex instruction set computing (CISC) x86 processors in most personal computers.

https://en.wikipedia.org/wiki/ARM_architecture

 

If you are one of the few hardware or software developers out there who still think that instruction set architectures, reduced (RISC) or complex (CISC), have any significant effect on the power, energy or performance of your processor-based designs, forget it.

 
Ain't true. What is more important is the processor microarchitecture — the way those instructions are hardwired into the processor and what has been added to help them achieve a specific goal.
...
"Based on this study, developers can safely consider ARM, MIPS, and x86 processors simply as engineering design points optimized for different levels of performance," he said. “There's nothing fundamentally more energy-efficient in one ISA class versus another."

http://www.eetimes.com/author.asp?section_id=36&doc_id=1327016

 

What you said makes no sense at all. If CISC deems the instructions as more complex, then CISC and RISC both determine the set of instructions of the architecture... The ISA (Instruction Set Architecture) determines the set of instructions being used. https://en.wikipedia.org/wiki/Instruction_set also lists them as classifications of instruction sets, further reinforcing the idea that they shape the instruction sets...

In my first quote you can also see that x86 is a CISC-based architecture and ARM is a RISC-based architecture, if you just search what they are... The second quote reaffirms what I said about how CISC vs RISC doesn't matter, since it is how the instructions are executed rather than how the instructions are actually set, which proves that ISAs (CISC or RISC) don't matter. Showing that the theory has already been debunked.



I was wondering what all of the babbling was about. My understanding is that CISC-based CPUs have a lot of instructions on them, while RISC has fewer instructions.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL



RISC and CISC are more patterns, or philosophies, in the design of instruction sets. RISC used to mean the only instructions you'd get were simple arithmetic and simple push/pop off the stack (the frame of memory being used by your program from its exit point down). It meant compilers had far more work to do to extract performance, but that work was easier, because you could cycle-count each instruction and come up with an accurate optimization model based on how good your branch predictor is and how instruction-level parallelism could fold multiple instructions' execution over one another. In RISC, the opcodes (the binary representation of assembly instructions) going into the CPU are executed exactly as written.
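The "cycle-count each instruction" point can be sketched like this. Assume (hypothetically) that every instruction has a fixed, published latency; a compiler can then estimate the cost of a straight-line block just by summing table entries. The latency numbers below are invented for illustration and ignore stalls and overlap.

# Toy static cost model for a RISC-like basic block.
LATENCY = {"load": 2, "add": 1, "mul": 3, "store": 2}

def block_cost(block):
    """Estimate cycles for a straight-line block, ignoring stalls and overlap."""
    return sum(LATENCY[op] for op, *_ in block)

# Two candidate instruction sequences; a simple optimizer picks the cheaper one.
block_a = [("load", "r1"), ("add", "r1", "r2"), ("store", "r1")]
block_b = [("mul", "r1", "r2"), ("add", "r1", "r3")]
print(block_cost(block_a), block_cost(block_b))    # 5 vs 4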

 

In CISC, and especially in x86, that water gets very murky very fast. You can't just use the clock-cycle latencies to design the optimization rules for your compilers, because in CISC systems you can have very complicated instructions, such as register rotations with fused multiply-add operations, and the hardware is allowed to transparently shift the results around without the programmer or user ever seeing it happen. Your big instructions get broken down very rapidly by an engine that determines, at that moment in time, what is optimal and least likely to produce a pipeline stall. Your command to move data from registers EAX, EBX, ECX into EDX, ESI, and EDI may actually have one or both ends completely switched around based on where data was placed before and how to move it optimally. In CISC, a crapton of stuff happens in the chip which assembly can't express. In RISC, you know exactly what you're getting.
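A rough model of that transparent register shuffling: the architectural register names in your assembly are just labels that a rename table maps onto a larger pool of physical registers, so the hardware can move results around without the assembly ever showing it. This is a deliberately simplified sketch; the class and method names are invented, and no shipping core works exactly like this.

# Simplified register renaming: each write to an architectural register (EAX,
# EBX, ...) is silently redirected to a fresh physical register, so later
# instructions can execute out of order without false dependencies.

class RenameTable:
    def __init__(self, num_physical=8):
        self.free = list(range(num_physical))   # pool of physical registers
        self.mapping = {}                       # architectural name -> physical register

    def write(self, arch_reg):
        phys = self.free.pop(0)                 # allocate a new physical register
        self.mapping[arch_reg] = phys
        return phys

    def read(self, arch_reg):
        return self.mapping[arch_reg]

rt = RenameTable()
p1 = rt.write("EAX")             # mov EAX, 1  lands in one physical register
p2 = rt.write("EAX")             # mov EAX, 2  lands in a different one
print(p1, p2, rt.read("EAX"))    # the program only ever sees a single "EAX"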

 

Now, ARM can claim it's RISC all it wants, but pure RISC isn't supposed to have an out-of-order processing engine to reorder instructions (without modifying how they actually work, mind you), and between the secure unit processor and other items it just doesn't really fit that mold anymore. But the fact remains you still know exactly what you're getting with the ARM ISA if you count cycles and examine the assembly like a madman. In x86 there's no guarantee your command to move EBX into EAX will actually read from EBX or write to EAX, because every instruction is broken down and translated/reconfigured for the context of that specific moment in time, that specific stage of the pipeline, and the state of the CPU. That's the only big difference remaining between RISC and CISC these days.

 

Now, to take the CISC philosophy to its absolute extreme, we have the Nvidia Denver architecture, which originally worked with x86 without Intel's permission. I'm sure we know how that went down. Denver takes a known ISA such as ARM and has a built-in program, running on one core of the dual-core chip, which looks through the instructions it gets and translates them to an unknown, proprietary Nvidia ISA for its cores. That translated code is examined by the program in-line to be optimized even further than the original compiler could manage, and then the new instructions are placed in an "optimization cache" of about 128MB, if I remember correctly, which the second core then reads from and executes. While there is no specific term for this, Nvidia's actual architecture could be considered VLIW (Very Long Instruction Word), which has all the benefits of CISC and then some, but one enormous drawback: making a compiler for VLIW instruction sets is a near-impossible task. Nvidia found a way around it and made the entire system context-driven, which is why its dual-core Denver K1 Tegra can outshine the Apple A8 and still compete well against the A8X, even though the A8X wins in multithreaded workloads by having three cores.
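The flow described above, decode a known ISA, re-optimize it in software, stash the result in an optimization cache, and reuse the cached form on later passes, can be sketched roughly as below. The hot threshold, function names, and cache structure are all invented for illustration; the real Denver firmware details aren't public.

# Rough sketch of dynamic translation with an optimization cache: interpret a
# block of guest instructions the first few times, and once it is "hot",
# translate and optimize it once, then reuse the cached translation.

HOT_THRESHOLD = 3          # invented number; real heuristics are far more involved
optimization_cache = {}    # block address -> optimized form (here, just a closure)
hit_counts = {}

def interpret(block):
    return sum(operand for _, operand in block)        # stand-in for slow interpretation

def translate_and_optimize(block):
    folded = sum(operand for _, operand in block)      # "optimize": fold the whole block
    return lambda: folded                              # cached, pre-optimized form

def execute(addr, block):
    if addr in optimization_cache:
        return optimization_cache[addr]()              # fast path: reuse the cached translation
    hit_counts[addr] = hit_counts.get(addr, 0) + 1
    if hit_counts[addr] >= HOT_THRESHOLD:
        optimization_cache[addr] = translate_and_optimize(block)
    return interpret(block)                            # cold path: interpret this time

guest_block = [("add", 1), ("add", 2), ("add", 3)]
for _ in range(5):
    print(execute(0x4000, guest_block))                # same result, cached after the 3rd run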

 

Now I'm sure @vm'N has something to quip about, but this is the bulk of it. CISC has seemingly proven it is the better choice over RISC for performance, and RISC has so far managed to hold the power-use advantage, though I chalk that up to Intel simply not investing in HDL or truly changing its performance-first paradigm in its approach to the Atom SoCs. It may turn out RISC is really no better for this purpose either in the end. We'll have to wait and see. VLIW may turn out to be the king of performance, but it has to run through a more understandable ISA first.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I wonder if now is the time to invest in AMD... If Zen turns out to be great it could lead to a profit; if not, however... (I don't have the money anyway.)

Back when Tesla released the Roadster, I remember thinking "this is a company I would invest in". Look at their stock now...

Lord of Helium.



I think you are missing the point.

Neither CISC nor RISC determines the set of instructions the architecture can process.

It tells you something about the instruction set, but as I said, it is not an actual instruction set in the way you would think of x86 or ARM.

You can think of CISC and RISC as a branch/category of instruction sets that follow a certain paradigm. Not an actual instruction set.

@patrickjp93 did cover it, though.

We can't assume that such an essential step in the pipeline won't have any drawbacks with regard to low power (especially ultra-low power), where everything matters in terms of power consumption. This is an advantage; however, other things can change the outcome.

Obviously it has some effect, but to what degree can quickly change depending on everything else.


-snip-

Nvidia Denver seems like it would be a painful project, though. As you said, RISC pushes the problem of optimization onto the compilers, whereas CISC would handle that with its instruction set. Even if Denver did do most of the optimization, wouldn't that still be a huge drawback? Seems like there would be a performance hit, since the other core would have to wait for the optimizing core to finish. It would be interesting if we saw more research put into it, which won't happen until Intel gets hold of it. Sounds more like a RISC-based optimizer and a CISC-based executor.

 


CISC/RISC are very general ideas about how the design of the instruction sets will work. I never said that they were actual instruction sets that had to be used. Now, if CISC/RISC determine the design of the instruction sets, then they fundamentally determine the instruction set being used... You cannot really have an architecture that takes both CISC and RISC instruction sets, and if we did, I would think that very combination would complicate both the compiler and the architecture even further. The ultimate design pattern of how something would work determines how that something will actually work. Sorta like if you used X design pattern for objects in software, it would determine how those objects work.



I think we are starting to go into semantics.

In my first reply to you, I already addressed it in a somewhat proper way.

My point is that you cannot pinpoint the instruction set based solely on whether it is RISC or CISC.

So that is what I mean when I say RISC/CISC does not determine the instruction set.

What makes you think that? Have the executable run at a different abstraction level, and have some sort of interpreter translate it into something the underlying hardware can understand. Far from easy, of course, but theoretically (in the same fashion), that is how x86 runs today.

You might have to use some sort of prefix or something to differentiate between the different ISAs.
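A very rough sketch of that "prefix" idea: tag each binary with which ISA it targets and have the loader pick the matching interpreter. The one-byte prefix values and the interpreter functions below are purely hypothetical.

# Hypothetical loader that reads a one-byte ISA prefix at the start of an image
# and dispatches to the matching software interpreter.

def run_cisc(image):        # stand-ins for real interpreters/translators
    return "interpreting %d bytes as CISC-style code" % len(image)

def run_risc(image):
    return "interpreting %d bytes as RISC-style code" % len(image)

INTERPRETERS = {0x01: run_cisc, 0x02: run_risc}     # invented prefix values

def load_and_run(binary):
    prefix, image = binary[0], binary[1:]
    return INTERPRETERS[prefix](image)

print(load_and_run(bytes([0x01, 0x90, 0x90])))      # tagged as CISC
print(load_and_run(bytes([0x02, 0x00, 0x00])))      # tagged as RISC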

