
Intel reports flat revenues and lower year-on-year profits for Q3

Mr_Troll

Intel has turned in its financial results for the third quarter of 2015. The company took in $14.5 billion in revenue, down less than one percent versus a year ago, and it enjoyed $4.2 billion in operating income, down about 8% year-over-year. Earnings per share were 64 cents, down two cents compared to a year ago.
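As a quick sanity check on the headline numbers, the year-over-year changes can be recomputed from the figures given. The prior-year values are implied rather than stated, so they are back-calculated here:

```python
def yoy_change(current, prior):
    """Year-over-year change as a percentage."""
    return (current - prior) / prior * 100

# EPS: 64 cents now, down two cents from a year ago (prior = 66 cents)
eps_change = yoy_change(64, 66)
print(f"EPS change: {eps_change:.1f}%")  # roughly -3%

# Operating income: $4.2B now, stated as down about 8% year-over-year,
# which implies a prior-year figure of about 4.2 / 0.92
implied_prior = 4.2 / (1 - 0.08)
print(f"Implied Q3 2014 operating income: ${implied_prior:.2f}B")
```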

 

The Client Computing Group took in $8.5 billion in revenue, down 7% year-over-year. According to the company's CFO commentary, desktop platform volume (accounting for both processors and chipsets) fell 19% from Q3 2014, but average selling prices (ASP) rose by 15%. Notebook platform volume fell 14% from a year ago, but ASP rose 4%. Tablets were hardest-hit: platform volumes fell 39% year-on-year.

 

Despite the stormy conditions in the client computing sector, Intel's other divisions delivered brighter results. The Data Center Group took in $4.1 billion, a 12% increase. The Internet of Things Group delivered $581 million in revenue, a 10% increase, and the company's software and services operations took in $556 million, about the same as this time last year.

For the fourth quarter of 2015, Intel expects $14.8 billion in revenue (plus or minus $500 million), gross margin of about 62 percent, and about $5 billion in spending divided among R&D and marketing, general, and administrative costs.

 

RIP Intel? No, of course not. Just kidding. They are still making a lot of money.

 

Source: http://techreport.com/news/29188/intel-reports-flat-revenues-and-lower-year-on-year-profits-for-q3

http://files.shareholder.com/downloads/INTC/855348371x0x854355/1DE7E7D0-9205-4E29-848D-A842339DBAFB/Earnings_Release_Q3_2015_5_.pdf

Intel Core i7-7800X @ 5.0 GHz with 1.305 volts (really good chip), Mesh OC @ 3.3 GHz, Fractal Design Celsius S36, ASRock X299 Killer SLI/ac, 16 GB ADATA XPG Z1 OCed to 3600 MHz, Aorus RX 580 XTR 8G, Samsung 950 EVO, Win 10 Home - loving it :D

Had a Ryzen before... but a bad BIOS flash killed it :(

MSI GT72S Dominator Pro G - i7 6820HK, 980m SLI, Gsync, 1080p, 16 GB RAM, 2x128 GB SSD + 1TB HDD, Win 10 home

 


Meh, they're fine regardless. They still have tons of loyal customers and company contracts that keep them in the black, and their net income was $3 billion, so they made at least $2 billion in profit.

https://linustechtips.com/main/topic/631048-psu-tier-list-updated/ Tier Breakdown (My understanding)--1 Godly, 2 Great, 3 Good, 4 Average, 5 Meh, 6 Bad, 7 Awful

 


ehrmagawd intel is in a death spiral...

oh noes, what will we do if only AMD is left... CPUs will be waayyy too cheap.

Don't worry, by the time Intel dies, AMD will be nonexistent. :P

CPU: AMD Ryzen 7 3800X Motherboard: MSI B550 Tomahawk RAM: Kingston HyperX Predator RGB 32 GB (4x8GB) DDR4 GPU: EVGA RTX3090 FTW3 SSD: ADATA XPG SX8200 Pro 512 GB NVME | Samsung QVO 1TB SSD  HDD: Seagate Barracuda 4TB | Seagate Barracuda 8TB Case: Phanteks ECLIPSE P600S PSU: Corsair RM850x

 

 

 

 

I am a gamer, not because I don't have a life, but because I choose to have many.

 


That's what they get for reselling Sandy Bridge 4 years in a row and using R&D for an iGPU that no one uses except some unfortunate mobile pleb users.

The x86 market in general has seen better days; you don't hear much about it lately, only mobile this, mobile that.


The main problem Intel and all the other consumer computer companies are having with sales right now is that consumers don't need faster computers.  If you have a computer bought in the last 3-5 years and don't do any heavy gaming or anything especially taxing, you don't need a more powerful computer.  It's hard to tell people they need a newer, faster computer when they are able to do all the things they want to do.  They aren't being stopped from browsing the web, watching YouTube, reading email, or writing papers.  Gamers don't even push CPU limits, really, not in a meaningful way that CPUs can do anything about; more GHz would be nice, but we have physics issues preventing that, not marketing issues.  Intel saw plenty of sales in the workstation/server markets because they still need more, more, more, but consumers don't need more of anything for their non-mobile platforms.

 

It also doesn't help that CPUs have seen only slight improvements every 12-16 months.  Not like the days of a newer, faster CPU being released within a week or two of you ordering the fastest one on the market.  The MHz wars are done, and frankly the average consumer doesn't need more yet.  Even if DX12/Vulkan start maxing out all cores in future games, gamers aren't likely to turn the market around.  We might buy large amounts of certain "sweet spot" CPUs, but we don't push the larger consumer market.


The main problem Intel and all the other consumer computer companies are having with sales right now is that consumers don't need faster computers.  If you have a computer bought in the last 3-5 years and don't do any heavy gaming or anything especially taxing, you don't need a more powerful computer.  It's hard to tell people they need a newer, faster computer when they are able to do all the things they want to do.  They aren't being stopped from browsing the web, watching YouTube, reading email, or writing papers.  Gamers don't even push CPU limits, really, not in a meaningful way that CPUs can do anything about; more GHz would be nice, but we have physics issues preventing that, not marketing issues.  Intel saw plenty of sales in the workstation/server markets because they still need more, more, more, but consumers don't need more of anything for their non-mobile platforms.

 

It also doesn't help that CPUs have seen only slight improvements every 12-16 months.  Not like the days of a newer, faster CPU being released within a week or two of you ordering the fastest one on the market.  The MHz wars are done, and frankly the average consumer doesn't need more yet.  Even if DX12/Vulkan start maxing out all cores in future games, gamers aren't likely to turn the market around.  We might buy large amounts of certain "sweet spot" CPUs, but we don't push the larger consumer market.

 

Incorrect (imo, of course): I seriously doubt that these figures are for consumer desktop and laptop chips only, or are you seriously gonna tell us we haven't needed faster servers or workstations since 2010-2013?

-------

Current Rig

-------


Guess Intel is no longer giving away their tablet chips?  :P

 

Are we sure this isn't evidence of Intel's death rattle? /s

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


Incorrect (imo, of course): I seriously doubt that these figures are for consumer desktop and laptop chips only, or are you seriously gonna tell us we haven't needed faster servers or workstations since 2010-2013?

Server demand has also been softer, but that has more to do with scaling problems on both the software side and on interconnects. No point in building a machine theoretically more powerful than the Tianhe-2 if you can't get better real-world performance. Notice Skylake and KNL bring the Omni-Path and Omni-Scale fabrics to Intel's side. IBM is relying on NVLink for its heterogeneous scaling, and it already had better CPU scaling than Intel.

No point in buying better hardware on one end that's choked on the other.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


And I thought my support for Intel would help them out a bit, as I just got an Intel CPU over AMD. Oh well, as long as they aren't really hurting, things are good.


Guess Intel is no longer giving away their tablet chips?  :P

 

Are we sure this isn't evidence of Intel's death rattle? /s

That actually was a subject of the earnings call. Contra revenue is ending in stages. Sales prices were already at-cost for some of the mid-range and most of the low-end tablet space. Now prices will begin to rise closer to alignment with Intel's other pricing models. Now that the ecosystem has moved so squarely in x86's direction, Intel can afford to play the price game with ARM.


They could release that 69-core i7 that they've been holding back and they'd make all the dollar bills, y'all.

CPU: i5-4690k GPU: 280x Toxic PSU: Coolermaster V750 Motherboard: Z97X-SOC RAM: Ripjaws 1x8 1600mhz Case: Corsair 750D HDD: WD Blue 1TB

How to Build A PC|Windows 10 Review Follow the CoC and don't be a scrub~soaringchicken

 


I would assume it is because there is no real reason to upgrade with the marginal increases in performance. (plus the declining laptop market)

Lord of Helium.


I would assume it is because there is no real reason to upgrade with the marginal increases in performance. (plus the declining laptop market)

It has much more to do with server/supercomputer slowdown. Until new interconnects are in place it doesn't make much sense to scale much farther for scale-out servers, and IBM still has scale-up on its side the whole way.


Incorrect (imo, of course): I seriously doubt that these figures are for consumer desktop and laptop chips only, or are you seriously gonna tell us we haven't needed faster servers or workstations since 2010-2013?

 

I would say that yes, fewer server chips have been needed.  Virtualization has drastically reduced the number of systems people buy.  And remote/cloud hosting has also cut down what people need.  When I was a server admin last year, we were in the final stages of consolidating our server systems and going virtual as much as possible.  We reduced our need from 30+ individual systems to 4 systems.  So while we were buying systems that were easily 2-3 times as expensive, we were buying 1/10th as many systems.  Many companies and groups have been reducing the number of systems they need to buy as virtualization has improved.

 

On that same note though, the server world does still want more powerful CPUs, but they want them more powerful so they can use less of them.  This doesn't count data-centers that can't get enough of anything, but they aren't the "consumer" group of the server system purchases.  Overall, there is less demand for quantity, and more demand for quality.
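Using the rough, illustrative figures from the post above (30+ machines consolidated onto 4 virtualization hosts, each new box 2-3x the price of an old one), the net effect on hardware spend is easy to sketch; the numbers below are that poster's anecdote, not industry data:

```python
# Hypothetical consolidation scenario: 30 old servers replaced by
# 4 virtualization hosts, each costing ~2.5x one old server.
old_count, new_count = 30, 4
cost_multiplier = 2.5

old_spend = old_count * 1.0          # normalized: one old server = 1 unit
new_spend = new_count * cost_multiplier

print(f"Relative spend: {new_spend / old_spend:.0%} of the old budget")
# 4 * 2.5 = 10 units vs. 30 units, so roughly a third of the old spend,
# which is why fewer (but pricier) server chips get sold overall.
```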


I would say that yes, fewer server chips have been needed.  Virtualization has drastically reduced the number of systems people buy.  And remote/cloud hosting has also cut down what people need.  When I was a server admin last year, we were in the final stages of consolidating our server systems and going virtual as much as possible.  We reduced our need from 30+ individual systems to 4 systems.  So while we were buying systems that were easily 2-3 times as expensive, we were buying 1/10th as many systems.  Many companies and groups have been reducing the number of systems they need to buy as virtualization has improved.

 

On that same note though, the server world does still want more powerful CPUs, but they want them more powerful so they can use less of them.  This doesn't count data-centers that can't get enough of anything, but they aren't the "consumer" group of the server system purchases.  Overall, there is less demand for quantity, and more demand for quality.

 

You certainly make a good effort at rationalizing your oversight, I'll give you that.


That's what they get for reselling Sandy Bridge 4 years in a row and using R&D for an iGPU that no one uses except some unfortunate mobile pleb users.

The x86 market in general has seen better days; you don't hear much about it lately, only mobile this, mobile that.

Lol, no. Intel isn't developing the iGPU for us pleb consumers. Intel is toughening up in the HPC market, going after Tesla. First Knights Landing Xeon Phi, and when ready, a true GPU architecture. ;)

 

Also, there is only so much they can do when their IPC is around 0.9, with the theoretical maximum being 1. They can add newer vector instructions that no one uses but HPC people. And they did: they added AVX-512 to Skylake, for those that will use them.

 

The consumer world needs to catch up to Intel software-wise before Intel has any reason to really work on their execution speed. So it's great they are working on power efficiency, because x86 is bloated to fuck with legacy crap, and they need to sort that out.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Lol, no. Intel isn't developing the iGPU for us pleb consumers. Intel is toughening up in the HPC market, going after Tesla. First Knights Landing Xeon Phi, and when ready, a true GPU architecture. ;)

 

Also, there is only so much they can do when their IPC is around 0.9, with the theoretical maximum being 1. They can add newer vector instructions that no one uses but HPC people. And they did: they added AVX-512 to Skylake, for those that will use them.

 

The consumer world needs to catch up to Intel software-wise before Intel has any reason to really work on their execution speed. So it's great they are working on power efficiency, because x86 is bloated to fuck with legacy crap, and they need to sort that out.

If IPC can't exceed 1, then why would they design such a wide core? Haswell has 8 execution ports to be utilized. Seems like a kind of waste.

Vector instructions work well with certain workloads. They can't fix it all.

 

Intel has implemented so much power gating in their design over the last few iterations. It is not that big of a deal, to a certain extent. But get down into a low enough power envelope, and some of the bloat will affect its power/performance.


If IPC can't exceed 1, then why would they design such a wide core? Haswell has 8 execution ports to be utilized. Seems like a kind of waste.

Vector instructions work well with certain workloads. They can't fix it all.

 

Intel has implemented so much power gating in their design over the last few iterations. It is not that big of a deal, to a certain extent. But get down into a low enough power envelope, and some of the bloat will affect its power/performance.

Because they can do much more with a wide core whose average IPC is 0.9 than with a narrow core whose average IPC is 0.9?

 

I'm not only talking about vectors. They added TSX, they added AES-NI, and they improved FMA to a point where they almost don't need separate MULs and ADDs anymore. I'm sure I've missed one or two, @patrickjp93?


Because they can do much more with a wide core whose average IPC is 0.9 than with a narrow core whose average IPC is 0.9?

 

I'm not only talking about vectors. They added TSX, they added AES-NI, and they improved FMA to a point where they almost don't need separate MULs and ADDs anymore. I'm sure I've missed one or two, @patrickjp93?

If both cores' average IPC is the same, you don't do more stuff, considering it is running the same software.

It is an old myth that IPC can't exceed 1. Sure, certain software's sustainable IPC won't exceed 1, but that is because the bottleneck in that workload lies somewhere else (lots of I/O operations, branches, dependencies, decoding, instruction fetching, data fetching, and so on).
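To make the distinction being argued here concrete (a sketch with made-up counter values, not tied to any specific CPU): IPC as usually measured is simply retired instructions divided by elapsed cycles, so on a superscalar core the average over a window can exceed 1, even though each individual instruction still takes at least one cycle from issue to retirement.

```python
def average_ipc(instructions_retired, cycles):
    """Average IPC over a measurement window: instructions / cycles."""
    return instructions_retired / cycles

# Hypothetical counter readings from a 4-wide superscalar core running
# friendly, low-dependency code: more than one instruction retires per
# cycle on average.
print(average_ipc(2_600_000, 1_000_000))   # 2.6 average IPC

# The same core on branchy, cache-miss-heavy code stalls often, and the
# average drops below 1; the bottleneck lies elsewhere, as argued above.
print(average_ipc(600_000, 1_000_000))     # 0.6 average IPC
```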


If both cores' average IPC is the same, you don't do more stuff, considering it is running the same software.

It is an old myth that IPC can't exceed 1. Sure, certain software's sustainable IPC won't exceed 1, but that is because the bottleneck in that workload lies somewhere else (lots of I/O operations, branches, dependencies, decoding, instruction fetching, data fetching, and so on).

Not true. If their IPC is the same but their instruction sets aren't equivalent, you get different results. If the IPC average is the same but the balancing for individual equivalent instructions is different, you end up with different performance.

 

It's not a myth at all. IPC has an absolute limit of 1. Where muops come in and how instructions themselves do multiple things is a separate issue altogether. No instruction can execute in less than 1 cycle. That law is absolute. Since IPC is literally CPI's reciprocal, this is inescapable.


If both cores' average IPC is the same, you don't do more stuff, considering it is running the same software.

It is an old myth that IPC can't exceed 1. Sure, certain software's sustainable IPC won't exceed 1, but that is because the bottleneck in that workload lies somewhere else (lots of I/O operations, branches, dependencies, decoding, instruction fetching, data fetching, and so on).

Not that it's faster; it's more flexible.

 

And no, it's a physical (well, computational) law. IPC cannot exceed 1. You can't finish every instruction in less than one cycle. Average IPC can, in some cases, and pipelining combined with superscalar execution will make it seem as if an instruction finished in less than one cycle, but in reality it just can't. That's a law.


Not that it's faster; it's more flexible.

 

And no, it's a physical (well, computational) law. IPC cannot exceed 1. You can't finish every instruction in less than one cycle. Average IPC can, in some cases, and pipelining combined with superscalar execution will make it seem as if an instruction finished in less than one cycle, but in reality it just can't. That's a law.

And to add to this, instruction-level parallelism and superscalar scaling both have asymptotic bounding laws as well.
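One way to see the bounding-law point is the classic Amdahl's-law form: if only a fraction of the instruction stream can be spread across execution units, speedup from going wider saturates. This is a generic sketch with made-up numbers, not a claim about any particular chip:

```python
def amdahl_speedup(parallel_fraction, width):
    """Amdahl's law: speedup when a fraction of the work can use `width` units."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / width)

# Suppose 80% of a workload's instructions can execute in parallel:
for width in (2, 4, 8, 1000):
    print(width, round(amdahl_speedup(0.8, width), 2))

# Speedup climbs toward, but never reaches, 1 / (1 - 0.8) = 5x, no
# matter how wide the machine gets: the asymptotic bound in question.
```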

