Intel plans job cuts across the company and reduced R&D spending, internal memo says

jos

They can sell low-end cards... I am getting a PC for my nephew. I will get him a Skylake processor and a cheap DX12.1 graphics card, along with Windows 10. Now he can play games with better graphics than the Skylake iGPU alone, since DX12's multi-adapter can use both, and he gets the goodness of 12.1 at the same time.

No one's going to develop for DX 12.1, just as no one developed for 11.1.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


They can, if and only if they can get Denver going in pure ARM and be competitive again. They'd just have to convince everyone to compile to ARM again, which isn't difficult to do using Clang, but w/e.
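(For context, a minimal illustration of the "compile to ARM with Clang" point above. The target triple is one common choice and assumes a matching aarch64 sysroot is installed; the filenames and exact invocations are hypothetical, so treat this as a sketch rather than a recipe.)

/* hello.c -- the same C source builds for ARM or x86 just by changing the target.
 * Assumed invocations (illustrative, not from the thread):
 *   clang -O2 -o hello-x86 hello.c
 *   clang -O2 --target=aarch64-linux-gnu -o hello-arm hello.c
 */
#include <stdio.h>

int main(void) {
#if defined(__aarch64__)
    printf("Hello from ARM (aarch64)\n");
#elif defined(__x86_64__)
    printf("Hello from x86-64\n");
#else
    printf("Hello from some other architecture\n");
#endif
    return 0;
}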

 

Then where are the GTX 950 and below? It's not like the 5775C trails the GTX 750 by much, and that's a Maxwell card.

They can't, because we are stuck with x86, and we will stay there until at least Apple sees an opportunity to switch fully to ARM on their desktop SKUs. Then we will have powerful ARM chips and an incentive to develop for them. It's only a matter of time till MS compiles Windows for it.

 

I guess they don't see the need for it yet, since the Broadwell parts are barely making their way to the market, but if they feel threatened, they will respond.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Yeah, but you're forgetting the low-end 700 series is still mostly Fermi ^^ Nvidia could easily make them Maxwell and much more powerful, to compete with Intel for anyone who needs a dGPU. As for the others, no matter how powerful their GPUs get, they can't compete with Intel, since they don't have a CPU design.

 

They can, if and only if they can get Denver going in pure ARM and be competitive again. They'd just have to convince everyone to compile to ARM again, which isn't difficult to do using Clang, but w/e.

 

Then where are the GTX 950 and below? It's not like the 5775C trails the GTX 750 by much, and that's a Maxwell card.

 

The problem is, there is no way Intel and AMD are going to license the amd64 and i386 instruction sets to Nvidia. They would have to use ARM along with CUDA for processing, and it is virtually impossible to convince Microsoft and Apple to recreate their OSes for the new architecture.


They can't, because we are stuck with x86, and we will stay there until at least Apple sees an opportunity to switch fully to ARM on their desktop SKUs. Then we will have powerful ARM chips and an incentive to develop for them. It's only a matter of time till MS compiles Windows for it.

 

I guess they don't see the need for it yet, since the Broadwell parts are barely making their way to the market, but if they feel threatened, they will respond.

It is too costly to change an entire platform. It wouldn't be wise for Apple to do this; the engineering cost is just too high.


They can't, because we are stuck with x86, and we will stay there until at least Apple sees an opportunity to switch fully to ARM on their desktop SKUs. Then we will have powerful ARM chips and an incentive to develop for them. It's only a matter of time till MS compiles Windows for it.

 

I guess they don't see the need for it yet, since the Broadwell parts are barely making their way to the market, but if they feel threatened, they will respond.

 

The problem is, there is no way Intel and AMD are going to license the amd64 and i386 instruction sets to Nvidia. They would have to use ARM along with CUDA for processing, and it is virtually impossible to convince Microsoft and Apple to recreate their OSes for the new architecture.

Also, Intel and NV were in talks to get an x86 licence, but then one of them cried about something, and the other one hit him, and then some childish insults happened, and now they aren't speaking to each other. I predict 3 quarters before they are on good terms again. Intel needs GPU IP, and NV needs x86 and fabs (Intel desperately needs to sell another x86 licence too, since they have to keep AMD afloat to avoid monopoly charges atm).

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


It is too costly to change an entire platform. It wouldn't be wise for Apple to do this; the engineering cost is just too high.

They have done it a few times already. The last I recall was going from PowerPC to x86 in 2006 or so.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


They have done it a few times already. The last I recall was going from PowerPC to x86 in 2006 or so.

It was OK when their market share was very low. Apple has since crept up considerably in worldwide OS share. Imagine you just changed from Windows to Windows RT; it would have the same effect, and everyone would be reluctant to support it. Then again, it is Apple. They can declare this the new platform and the other forms cease to exist. Also, Adobe Creative Suite and the other editing suites would have to work equally well on ARM, which architecturally won't be possible. Apple's main market is in this sector too.


It was OK when their market share was very low. Apple has since crept up considerably in worldwide OS share. Imagine you just changed from Windows to Windows RT; it would have the same effect, and everyone would be reluctant to support it. Then again, it is Apple. They can declare this the new platform and the other forms cease to exist. Also, Adobe Creative Suite and the other editing suites would have to work equally well on ARM, which architecturally won't be possible. Apple's main market is in this sector too.

Apple is known to do whatever they want and have the devs adapt to it ;) so I really won't be surprised when they eventually ditch x86. It's only a matter of time before they do.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


They can, if and only if they can get Denver going in pure ARM and be competitive again. They'd just have to convince everyone to compile to ARM again, which isn't difficult to do using Clang, but w/e.

 

Then where are the GTX 950 and below? It's not like the 5775C trails the GTX 750 by much, and that's a Maxwell card.

 

The GTX 950 exists in the form of the GTX 750 Ti. The gap between the 750 Ti and the GTX 960 is far too small to fill with another card. The 750 and 750 Ti are both still Maxwell cards, and their performance per watt is exactly where it needs to be for a competitive entry-level card. Where else could you get that performance from a GPU while still using a terrible OEM PSU that lacks a 6-pin PCIe connection?

 

Had Nvidia made the GTX 960 as powerful as it SHOULD have been, then a GTX 950 would have been feasible. Now that we see the GTX 960 for what it is, a gimped 1080p card, we know that no GTX 950 can exist. 

 

As for the Intel iGPU taking away the budget market entirely from Nvidia, I only agree somewhat. It will be a very appealing option, but we must remember that Nvidia is in bed with many developers. Their driver support for games would probably make the games themselves run smoother on even weaker Nvidia cards than they would on Intel iGPUs.

 

As @LukaP said, it would not be hard for Nvidia to modernize the x20, x30, and x40 lines to be slightly more competitive than they are now. Though, only a slight window exists before they run into the same problem that the "GTX 950" runs into: there is only so much room for a card to be improved before it directly cuts into the profits of another card. The performance per watt of the 750 and 750 Ti is already good; making the x40 faster while retaining that same Maxwell efficiency might be harmful in the long run.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Also, Intel and NV were in talks to get an x86 licence, but then one of them cried about something, and the other one hit him, and then some childish insults happened, and now they aren't speaking to each other. I predict 3 quarters before they are on good terms again. Intel needs GPU IP, and NV needs x86 and fabs (Intel desperately needs to sell another x86 licence too, since they have to keep AMD afloat to avoid monopoly charges atm).

Well, Nvidia tried to sneak around Intel with x86 emulation on the first Denver chips. That was illegal in the first place. Intel did the gracious thing by not suing and simply making Nvidia stop and ending all license talks for x86.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Well, Nvidia tried to sneak around Intel with x86 emulation on the first Denver chips. That was illegal in the first place. Intel did the gracious thing by not suing and simply making Nvidia stop and ending all license talks for x86.

Yes, that would be what happened. But Intel still needs that GPU IP, and I bet they will be in talks sooner rather than later.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Yes, that would be what happened. But Intel still needs that GPU IP, and I bet they will be in talks sooner rather than later.

I don't think it is necessary. Intel renewed the cross-licensing with AMD, and it now includes all of AMD's graphics-related IP.


The GTX 950 exists in the form of the GTX 750 Ti. The gap between the 750 Ti and the GTX 960 is far too small to fill with another card. The 750 and 750 Ti are both still Maxwell cards, and their performance per watt is exactly where it needs to be for a competitive entry-level card. Where else could you get that performance from a GPU while still using a terrible OEM PSU that lacks a 6-pin PCIe connection?

 

Had Nvidia made the GTX 960 as powerful as it SHOULD have been, then a GTX 950 would have been feasible. Now that we see the GTX 960 for what it is, a gimped 1080p card, we know that no GTX 950 can exist. 

 

As for the Intel iGPU taking away the budget market entirely from Nvidia, I only agree somewhat. It will be a very appealing option, but we must remember that Nvidia is in bed with many developers. Their driver support for games would probably make the games themselves run smoother on even weaker Nvidia cards than they would on Intel iGPUs.

 

As @LukaP said, it would not be hard for Nvidia to modernize the x20, x30, and x40 lines to be slightly more competitive than they are now. Though, only a slight window exists before they run into the same problem that the "GTX 950" runs into: there is only so much room for a card to be improved before it directly cuts into the profits of another card. The performance per watt of the 750 and 750 Ti is already good; making the x40 faster while retaining that same Maxwell efficiency might be harmful in the long run.

Intel's driver support has been steadily ramping up over the last year, and Intel's in bed with even more developers on an even deeper level. You think game studios call Microsoft when they want to extract more CPU performance? They probably call Intel and AMD directly. I'm still shocked games aren't primarily compiled with GCC or Clang, with only the DirectX portions compiled by Microsoft's Visual C++ compiler (MSVC). The difference in performance between MSVC and GCC/Clang when optimization is enabled is stupidly huge.
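(To make the optimization claim above concrete, here is a toy hot loop of the kind that exposes compiler-to-compiler differences. The flags listed are typical release flags and the filename is made up for illustration; the size of any gap varies by workload, so this is a sketch, not a measurement.)

/* dot.c -- a trivially vectorizable loop; how well each compiler
 * auto-vectorizes it is the kind of gap being described above.
 * Typical builds (assumed, not benchmarked here):
 *   clang -O3 -march=native dot.c -o dot
 *   gcc   -O3 -march=native dot.c -o dot
 *   cl    /O2 /arch:AVX2 dot.c            (MSVC)
 */
#include <stdio.h>

#define N (1 << 20)
static float a[N], b[N];

int main(void) {
    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = (float)(N - i);
    }
    float sum = 0.0f;
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];   /* prime candidate for SIMD + FMA */
    printf("dot = %f\n", sum);
    return 0;
}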

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Intel's driver support has been steadily ramping up over the last year, and Intel's in bed with even more developers on an even deeper level. You think game studios call Microsoft when they want to extract more CPU performance? They probably call Intel and AMD directly. I'm still shocked games aren't primarily compiled with GCC or Clang, with only the DirectX portions compiled by Microsoft's Visual C++ compiler (MSVC). The difference in performance between MSVC and GCC/Clang when optimization is enabled is stupidly huge.

I do not know when you last used Intel's iGPU driver software, but it is vastly inferior to Nvidia's Control Panel. Nvidia offers far more fine-tuning than Intel and much wider support for better AA technologies, and with Nvidia Inspector one can go even further to improve their gaming experience. Intel might be on the rise and catching up, but they are still not there yet. That is why I refuse to believe that Intel is actually harming Nvidia's budget market at this moment. Even after seeing what the Iris Pro 6200 has to offer, I do not see it doing any real harm to the x40 series until it is introduced in Pentiums, Celerons, or i3s. H87/Z97 boards tend to cost more than their H81 counterparts, and most people on a strict budget will opt for cheaper boards that do not support Broadwell. If we are talking future, like Skylake, we need to know for certain whether the lower-end models will actually get Iris graphics. So far, at least on the mobile platforms, we are not seeing the adoption of Iris Pro on the lower-end CPUs. We are seeing HD 5500 and whatnot.

 

If Intel can offer these high-performing iGPUs on their lower-budget-oriented processors, I could then understand how much of an impact it would have on Nvidia. For now, one could still get a $40 H81/H87 board, a $100 Core i3, and a GTX 750 Ti for less than an H/Z97 board and an i5 5675C.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


Yes, that would be what happened. But Intel still needs that GPU IP, and I bet they will be in talks sooner rather than later.

Nope, remember Intel's gunning to kill Nvidia before the close of the decade. If the Intel Knight's Landing platform takes the performance density crown from Power 8/9 (and Oracle SPARC, though it's even more niche) and Nvidia Tesla, the single biggest beneficiary of Tesla sales will suddenly evaporate, and Intel's already helping convert every CUDA-based library out there to OpenMP and OpenACC to run even better on the current and upcoming Xeon Phi and MIC systems.
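(A minimal sketch of what such a CUDA-to-OpenMP port looks like for a simple SAXPY kernel. The CUDA original in the comment and the OpenMP pragmas are standard, but this is an illustration under stated assumptions, not a sample from any actual converted library.)

/* saxpy_omp.c -- a CUDA-style kernel recast as an OpenMP loop.
 * CUDA original, roughly:
 *   __global__ void saxpy(int n, float a, float *x, float *y) {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       if (i < n) y[i] = a * x[i] + y[i];
 *   }
 * Assumed build: gcc -O3 -fopenmp saxpy_omp.c -o saxpy
 */
#include <stdio.h>
#include <stdlib.h>

static void saxpy(int n, float a, const float *x, float *y) {
    /* the CUDA grid becomes threads plus vector lanes on the host or a Phi */
    #pragma omp parallel for simd
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(n, 3.0f, x, y);
    printf("y[0] = %.1f (expect 5.0)\n", y[0]);
    free(x);
    free(y);
    return 0;
}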

 

The (theoretical) best you can do in a single 4U rack with Intel right this very second in FLOPs is 8x Xeon E7 8890 V3 and 8x Tesla K80 

Intel Xeon V3<F>*           (18 * 8 * 2 * 3.00 * 10^9)      = 0.864 TFlops SP,  0.432 TFlops DP

IBM Power 8<F>              (12 * 8 * 2 * 4.02 * 10^9)      = 0.772 TFlops SP,  0.386 TFlops DP

Nvidia Tesla K80            (4992 * 2 * 876 * 10^6)         = 8.746 TFlops SP,  2.915 TFlops DP

Intel Knight's Landing<E>** (60 * 32 * 3 * 1.04 * 10^9)     = 5.990 TFlops SP,  2.995 TFlops DP

Intel Knight's Landing<F>** (72 * 32 * 3 * 1.04 * 10^9)     = 7.188 TFlops SP,  3.594 TFlops DP

 

<E> = entry

<F> = flagship

 

*The 8890 has a 20W higher TDP than the E5 2699 V3, and the 2699 topped out at 2.45GHz in AIDA64 and 2.88GHz in RealBench. This estimate is quite possibly on the high side of what is theoretically possible for the 8890 V3.

 

**I guessed at the clock speeds based on an Intel Knight's Landing announcement saying the entire lineup was 6 TFlops+. There are two ways Intel could have achieved this: a completed second-generation FMA with an additional multiplication (3 ops per cycle), or a ~1500MHz clock rate and only 2 operations per cycle (far less believable to me in a 300W TDP, even with 14nm tech). These particular figures are to be taken with a large grain of salt until more is officially unveiled.

 

Power 8 systems currently top out at 16 sockets before having to add a new node to the system, which means tighter integration, less communication latency, and better performance density. Intel's top out at 8. Both of these configurations can fit in a single 4U rack space (2 boards) with up to 8 PCIe accelerators.

 

IBM Performance Density*** 16 * 0.772 + 8 * 8.746 = 82.32 TFlops SP      16 * 0.386 + 8 * 2.915 = 29.496 TFlops DP

Intel Performance Density   8 * 0.864 + 8 * 8.746 = 76.88 TFlops SP       8 * 0.432 + 8 * 2.915 = 26.776 TFlops DP

***This excludes any integrated accelerators such as Altera/Xilinx FPGAs or even a smaller Nvidia Tesla iGPU which has been found in some systems.

 

We know Intel is aiming for 24 cores with Broadwell E5/E7, which will launch late 2015 or early 2016. At 3GHz that means 1.152 TFlops SP / 0.576 TFlops DP per chip, for a density of 79.184 TFlops SP and 27.928 TFlops DP, which still won't take the density crown in theory.

 

Power 9 and Skylake Xeons are both supposed to move to 512-bit SIMD, i.e. 16-wide SP vector calculations. We also know Intel is introducing the Omni-Scale fabric to its systems, which I'm guessing will allow scaling beyond 8 sockets in a single node. We know the Skylake E7 will top out at 28 cores, clocks currently unknown. We know Intel is going fully custom on its E5 and E7 Xeons, meaning very tight integration of HMC, Altera FPGAs, the Cannonlake graphics platform, or perhaps another accelerator. We also know Intel is introducing both the Knight's Landing onboard package and the Xeon Phi accelerator card. If the socket density stays at 8, we're looking at 16*6, or 96/48+ TFlops, of density for a pure KNL setup, or 8*(6|3) + 8*(8.746|2.915) = 117.968 | 47.32 TFlops for a KNL and Tesla K80 system (damn close, still giving the SP advantage to Nvidia). If the socket count rises to 16, we're looking at 24*6 = 144|72+ TFlops of performance density.

It'll be interesting to see if Intel can stay ahead of IBM's Power 9 systems, which supposedly integrate NVLink and NVLink 2.0 and solve many of the I/O bottlenecks currently seen in x86 platforms. If Intel releases PCIe 4.0 on the Skylake EP/EX platforms (as it should, if the development teams are remotely intelligent), then it might very well be able to knock IBM out in the next four years. If it waits until Cannonlake, we'll be looking past the end of the decade, which leaves the wildcard of AMD dying or not.
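(The peak-FLOPs and rack-density arithmetic above is easy to sanity-check. The sketch below reproduces it using peak = cores × SIMD lanes × ops/cycle × clock; the inputs mirror the post's own assumptions, guessed Knight's Landing clocks included, so the outputs are estimates rather than spec-sheet numbers, and small last-digit differences come from rounding per-chip values first.)

/* flops.c -- reproduces the per-chip and 4U-rack figures from this post. */
#include <stdio.h>

/* cores * SP lanes * ops per cycle * GHz -> TFlops */
static double tflops(double cores, double lanes, double ops, double ghz) {
    return cores * lanes * ops * ghz / 1000.0;
}

int main(void) {
    double xeon_sp   = tflops(18, 8, 2, 3.00);    /* 0.864; DP = SP/2 */
    double power8_sp = tflops(12, 8, 2, 4.02);    /* 0.772; DP = SP/2 */
    double k80_sp    = 4992 * 2 * 0.876 / 1000.0; /* 8.746; DP = SP/3 on GK210 */
    double knl_sp    = tflops(72, 32, 3, 1.04);   /* 7.188, guessed clock; DP = SP/2 */

    /* 4U density: CPU sockets plus 8 PCIe Tesla K80s */
    printf("IBM   16S SP: %.2f TFlops\n", 16 * power8_sp + 8 * k80_sp);       /* ~82.32 */
    printf("Intel  8S SP: %.2f TFlops\n",  8 * xeon_sp   + 8 * k80_sp);       /* ~76.88 */
    printf("Intel  8S DP: %.2f TFlops\n",  8 * xeon_sp/2 + 8 * k80_sp/3);     /* ~26.78 */
    printf("KNL flagship: %.3f SP / %.3f DP TFlops\n", knl_sp, knl_sp / 2);
    return 0;
}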

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I do not know when you last used Intel's iGPU driver software, but it is vastly inferior to Nvidia's Control Panel. Nvidia offers far more fine-tuning than Intel and much wider support for better AA technologies, and with Nvidia Inspector one can go even further to improve their gaming experience. Intel might be on the rise and catching up, but they are still not there yet. That is why I refuse to believe that Intel is actually harming Nvidia's budget market at this moment. Even after seeing what the Iris Pro 6200 has to offer, I do not see it doing any real harm to the x40 series until it is introduced in Pentiums, Celerons, or i3s. H87/Z97 boards tend to cost more than their H81 counterparts, and most people on a strict budget will opt for cheaper boards that do not support Broadwell. If we are talking future, like Skylake, we need to know for certain whether the lower-end models will actually get Iris graphics. So far, at least on the mobile platforms, we are not seeing the adoption of Iris Pro on the lower-end CPUs. We are seeing HD 5500 and whatnot.

 

If Intel can offer these high-performing iGPUs on their lower-budget-oriented processors, I could then understand how much of an impact it would have on Nvidia. For now, one could still get a $40 H81/H87 board, a $100 Core i3, and a GTX 750 Ti for less than an H/Z97 board and an i5 5675C.

GeForce Experience is bloatware. And actually, Intel has patents on the most efficient (performance per resources) AA algorithms on the planet, primarily because estimation of non-Euclidean distances has been one of the biggest areas of mathematical and scientific computing study. That said, no one yet needs such extra features for Intel's integrated graphics, nor does Intel have the fundamental hardware to make use of it all. That's why Intel's aiming to knock IBM out of competition, and thus Nvidia, in the HPC market. Intel needs unfettered patent access to do what it needs to, and Nvidia retracted a number of critical patents from its licence when Larrabee debuted. Despite Larrabee's scaling issues, which Intel found disappointing, Nvidia found it frightening, and rightfully so, as Knight's Landing is about to prove. Nvidia can be beaten in compute by x86, even if that comes more from a versatility and ease-of-use perspective than from raw performance in embarrassingly parallel tasks. With Knight's Landing, Intel takes the DP compute crown from the Tesla K80, as you can read in my post above.

 

Intel will put its resources to use where they will provide the most benefit. Sure, Intel doesn't have a fancy graphics driver tuning GUI, but do you honestly need one for Intel's integrated graphics? Do you really need one for AMD's integrated graphics right now? No. It will come when it's needed/demanded, not when loudmouthed, arrogant enthusiasts such as yourself bitch about it purely out of fan loyalty and not out of rational reasoning.

 

A 5675C is $220, which is less than a Pentium or an i3 plus a GT 740. Need I say more? Nvidia is on the edge of losing the entire low end when that core count jumps 50% at the same or higher clocks with a tweaked Gen 8.5 architecture for Skylake. Intel has made it worth dumping one's GPU budget onto the CPU, simply because the CPU now contains a great option for low to low-mid graphics needs. When AMD adds HBM to its APUs (why it didn't add eDRAM to the Carrizo chips is absolutely beyond my ability to comprehend, given the performance benefits eDRAM gave Intel for both the CPU and the iGPU), and when Intel uses HMC or HBM of its own for Cannonlake or its successor, there will be no room left for Nvidia to stand on except the high end.

 

Also, you lie. I just queried the PCPartPicker database. There are exactly 2 combinations of current-gen Intel CPU, motherboard, and a GTX 750 (non-Ti, even) that are cheaper than the cheapest H97/Z97 (some H87/Z87 do support Broadwell anyway with a BIOS update) plus 5675C. Get the 750 Ti in there and it would be dead even for one combo and $2 more expensive for the other. At that point, having the i5 is better anyway just because of multitasking, and even in the single-threaded performance needed for gaming. You've completely lost it if you think a Celeron or Pentium (G3258 notwithstanding, as you would still have a more expensive purchase with that and a 750 on a board capable of overclocking it) and a GTX 750 can deliver better gaming performance than the 5675C, let alone productivity, where multicore really becomes necessary.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Nope, remember Intel's gunning to kill Nvidia before the close of the decade. If the Intel Knight's Landing platform takes the performance density crown from Power 8/9 (and Oracle SPARC, though it's even more niche) and Nvidia Tesla, the single biggest beneficiary of Tesla sales will suddenly evaporate, and Intel's already helping convert every CUDA-based library out there to OpenMP and OpenACC to run even better on the current and upcoming Xeon Phi and MIC systems.

 

The (theoretical) best you can do in a single 4U rack with Intel right this very second in FLOPs is 8x Xeon E7 8890 V3 and 8x Tesla K80 

Intel Xeon V3<F>*           (18 * 8 * 2 * 3.00 * 10^9)      = 0.864 TFlops SP,  0.432 TFlops DP

IBM Power 8<F>              (12 * 8 * 2 * 4.02 * 10^9)      = 0.772 TFlops SP,  0.386 TFlops DP

Nvidia Tesla K80            (4992 * 2 * 876 * 10^6)         = 8.746 TFlops SP,  2.915 TFlops DP

Intel Knight's Landing<E>** (60 * 32 * 3 * 1.04 * 10^9)     = 5.990 TFlops SP,  2.995 TFlops DP

Intel Knight's Landing<F>** (72 * 32 * 3 * 1.04 * 10^9)     = 7.188 TFlops SP,  3.594 TFlops DP

 

<E> = entry

<F> = flagship

 

*The 8890 has a 20W higher TDP than the E5 2699 V3, and the 2699 topped out at 2.45GHz in AIDA64 and 2.88GHz in RealBench. This estimate is quite possibly on the high side of what is theoretically possible for the 8890 V3.

 

**I guessed at the clock speeds based on an Intel Knight's Landing announcement saying the entire lineup was 6 TFlops+. There are two ways Intel could have achieved this: a completed second-generation FMA with an additional multiplication (3 ops per cycle), or a ~1500MHz clock rate and only 2 operations per cycle (far less believable to me in a 300W TDP, even with 14nm tech). These particular figures are to be taken with a large grain of salt until more is officially unveiled.

 

Power 8 systems currently top out at 16 sockets before having to add a new node to the system, which means tighter integration, less communication latency, and better performance density. Intel's top out at 8. Both of these configurations can fit in a single 4U rack space (2 boards) with up to 8 PCIe accelerators.

 

IBM Performance Density*** 16 * 0.772 + 8 * 8.746 = 82.32 TFlops SP      16 * 0.386 + 8 * 2.915 = 29.496 TFlops DP

Intel Performance Density   8 * 0.864 + 8 * 8.746 = 76.88 TFlops SP       8 * 0.432 + 8 * 2.915 = 26.776 TFlops DP

***This excludes any integrated accelerators such as Altera/Xilinx FPGAs or even a smaller Nvidia Tesla iGPU which has been found in some systems.

 

We know Intel is aiming for 24 cores with Broadwell E5/E7, which will launch late 2015 or early 2016. At 3GHz that means 1.152 TFlops SP / 0.576 TFlops DP per chip, for a density of 79.184 TFlops SP and 27.928 TFlops DP, which still won't take the density crown in theory.

 

Power 9 and Skylake Xeons are both supposed to move to 512-bit SIMD, i.e. 16-wide SP vector calculations. We also know Intel is introducing the Omni-Scale fabric to its systems, which I'm guessing will allow scaling beyond 8 sockets in a single node. We know the Skylake E7 will top out at 28 cores, clocks currently unknown. We know Intel is going fully custom on its E5 and E7 Xeons, meaning very tight integration of HMC, Altera FPGAs, the Cannonlake graphics platform, or perhaps another accelerator. We also know Intel is introducing both the Knight's Landing onboard package and the Xeon Phi accelerator card. If the socket density stays at 8, we're looking at 16*6, or 96/48+ TFlops, of density for a pure KNL setup, or 8*(6|3) + 8*(8.746|2.915) = 117.968 | 47.32 TFlops for a KNL and Tesla K80 system (damn close, still giving the SP advantage to Nvidia). If the socket count rises to 16, we're looking at 24*6 = 144|72+ TFlops of performance density.

It'll be interesting to see if Intel can stay ahead of IBM's Power 9 systems, which supposedly integrate NVLink and NVLink 2.0 and solve many of the I/O bottlenecks currently seen in x86 platforms. If Intel releases PCIe 4.0 on the Skylake EP/EX platforms (as it should, if the development teams are remotely intelligent), then it might very well be able to knock IBM out in the next four years. If it waits until Cannonlake, we'll be looking past the end of the decade, which leaves the wildcard of AMD dying or not.

 

It is unknown at this moment how Zen is going to perform, but seeing as they completely removed CMT and adopted SMT, I would like to assume it's on par with current Haswells. The fact that we are looking at a die shrink also makes me believe the clock speeds will be far more tame this time around. Even if Zen is only on par with Haswell, and not as good as Skylake as far as IPC goes, AMD can still see a huge comeback from this alone, especially since they are still offering 8-core configurations on a consumer chip. You and I both know how pointless 8 cores is for most consumers; knowing that no developer cares enough to actually utilize that many threads effectively, it still sells simply due to bragging rights alone. If they can make the 8-core Zens perform on par with a hyperthreaded Haswell i7, it will be nice. The 95W TDP only solidifies my theory about the tame clock rates.

 

 

GeForce Experience is bloatware. And actually, Intel has patents on the most efficient (performance per resources) AA algorithms on the planet, primarily because estimation of non-Euclidean distances has been one of the biggest areas of mathematical and scientific computing study. That said, no one yet needs such extra features for Intel's integrated graphics, nor does Intel have the fundamental hardware to make use of it all. That's why Intel's aiming to knock IBM out of competition, and thus Nvidia, in the HPC market. Intel needs unfettered patent access to do what it needs to, and Nvidia retracted a number of critical patents from its licence when Larrabee debuted. Despite Larrabee's scaling issues, which Intel found disappointing, Nvidia found it frightening, and rightfully so, as Knight's Landing is about to prove. Nvidia can be beaten in compute by x86, even if that comes more from a versatility and ease-of-use perspective than from raw performance in embarrassingly parallel tasks. With Knight's Landing, Intel takes the DP compute crown from the Tesla K80, as you can read in my post above.

 

Intel will put its resources to use where they will provide the most benefit. Sure, Intel doesn't have a fancy graphics driver tuning GUI, but do you honestly need one for Intel's integrated graphics? Do you really need one for AMD's integrated graphics right now? No. It will come when it's needed/demanded, not when loudmouthed, arrogant enthusiasts such as yourself bitch about it purely out of fan loyalty and not out of rational reasoning.

 

A 5675C is $220, which is less than a Pentium or an i3 plus a GT 740. Need I say more? Nvidia is on the edge of losing the entire low end when that core count jumps 50% at the same or higher clocks with a tweaked Gen 8.5 architecture for Skylake. Intel has made it worth dumping one's GPU budget onto the CPU, simply because the CPU now contains a great option for low to low-mid graphics needs. When AMD adds HBM to its APUs (why it didn't add eDRAM to the Carrizo chips is absolutely beyond my ability to comprehend, given the performance benefits eDRAM gave Intel for both the CPU and the iGPU), and when Intel uses HMC or HBM of its own for Cannonlake or its successor, there will be no room left for Nvidia to stand on except the high end.

 

Who ever mentioned GFE? GFE has little to do with the driver itself, or the control panel. GFE is a hub for "ease of use" garbage and Shield features. As for Intel's highly advanced AA algorithms, find me games that actually utilize them. We see plenty of FXAA and MSAA titles on the market already. Hell, TXAA is starting to finally catch on too.

 

What I bolded out of your statement, I must say, I did not expect of you. You label me a fanboy for merely pointing out what should be seen by every single person as fact at this point in time. Never once did I say you were wrong; I simply said that Intel's iGPU is not mature enough in its current state to accomplish what you say it has. I am not a fanboy. I treat all hardware as tools, not as a brand of clothing. I get the best performance for my dollar at all times, with a few exceptions depending on my goal for a particular build. The only arrogant person in this thread thus far is yourself. You treat Intel as if it is a godlike entity that walks amongst mortals. You attempt to intimidate people with false intelligence on things you could not possibly know, and you make far more assumptions than factual statements. I am no expert in this field, but unlike yourself, I do not pretend to be.

 

Your last statement is also incorrect. The 5675C is $276, not $220. i3s are often found for $100, and you can get a brand-new GTX 750 Ti 2GB for $120 almost everywhere. One could grab a cheap H81 board for $40 and still have less invested in all 3 components than in that single CPU.

 

PCPartPicker part list: http://pcpartpicker.com/p/78yVQ7
Price breakdown by merchant: http://pcpartpicker.com/p/78yVQ7/by_merchant/
 
CPU: Intel Core i3-4160 3.6GHz Dual-Core Processor  ($108.99 @ NCIX US) 
Motherboard: Gigabyte GA-H81M-S2H Micro ATX LGA1150 Motherboard  ($39.99 @ SuperBiiz) 
Video Card: Zotac GeForce GTX 750 Ti 2GB Video Card  ($129.99 @ SuperBiiz) 
Total: $278.97
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2015-06-15 17:01 EDT-0400
 
To answer your question: no, you need not say more. It is better to refrain from speaking and be thought an idiot than to open your mouth and remove all doubt.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


 

It is unknown at this moment how Zen is going to perform, but seeing as they completely removed CMT and adopted SMT, I would like to assume it's on par with current Haswells. The fact that we are looking at a die shrink also makes me believe the clock speeds will be far more tame this time around. Even if Zen is only on par with Haswell, and not as good as Skylake as far as IPC goes, AMD can still see a huge comeback from this alone, especially since they are still offering 8-core configurations on a consumer chip. You and I both know how pointless 8 cores is for most consumers; knowing that no developer cares enough to actually utilize that many threads effectively, it still sells simply due to bragging rights alone. If they can make the 8-core Zens perform on par with a hyperthreaded Haswell i7, it will be nice. The 95W TDP only solidifies my theory about the tame clock rates.

 

 

 

Who ever mentioned GFE? GFE has little to do with the driver itself, or the control panel. GFE is a hub for "ease of use" garbage and Shield features. As for Intel's highly advanced AA algorithms, find me games that actually utilize them. We see plenty of FXAA and MSAA titles on the market already. Hell, TXAA is starting to finally catch on too.

 

What I bolded out of your statement, I must say, I did not expect of you. You label me a fanboy for merely pointing out what should be seen by every single person as fact at this point in time. Never once did I say you were wrong; I simply said that Intel's iGPU is not mature enough in its current state to accomplish what you say it has. I am not a fanboy. I treat all hardware as tools, not as a brand of clothing. I get the best performance for my dollar at all times, with a few exceptions depending on my goal for a particular build. The only arrogant person in this thread thus far is yourself. You treat Intel as if it is a godlike entity that walks amongst mortals. You attempt to intimidate people with false intelligence on things you could not possibly know, and you make far more assumptions than factual statements. I am no expert in this field, but unlike yourself, I do not pretend to be.

 

Your last statement is also incorrect. The 5675C is $276, not $220. i3s are often found for $100, and you can get a brand-new GTX 750 Ti 2GB for $120 almost everywhere. One could grab a cheap H81 board for $40 and still have less invested in all 3 components than in that single CPU.

 

PCPartPicker part list: http://pcpartpicker.com/p/78yVQ7
Price breakdown by merchant: http://pcpartpicker.com/p/78yVQ7/by_merchant/
 
CPU: Intel Core i3-4160 3.6GHz Dual-Core Processor  ($108.99 @ NCIX US) 
Motherboard: Gigabyte GA-H81M-S2H Micro ATX LGA1150 Motherboard  ($39.99 @ SuperBiiz) 
Video Card: Zotac GeForce GTX 750 Ti 2GB Video Card  ($129.99 @ SuperBiiz) 
Total: $278.97
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2015-06-15 17:01 EDT-0400
 
To answer your question: no, you need not say more. It is better to refrain from speaking and be thought an idiot than to open your mouth and remove all doubt.

 

The Control Panel has to be installed with GFE. Separating the two is a royal pain.

 

The release prices of the 5675C and 5775C are both unconfirmed for the moment, but to assume they'll depart so much from the 4790K and 4690K is asinine. I'd easily place it at $220.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


This thread is literally golden.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8GB Kingston hyperx fury HDD: WD caviar black 1TB GPU: MSI 750TI twin frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650


The Control Panel has to be installed with GFE. Separating the two is a royal pain.

 

The release prices of the 5675C and 5775C are both unconfirmed for the moment, but to assume they'll depart so much from the 4790K and 4690K is asinine.

 

http://ark.intel.com/products/88095/Intel-Core-i5-5675C-Processor-4M-Cache-up-to-3_60-GHz

 

Not unconfirmed. It was stated over 12 days ago, and it is on par with the pricing cycle of previous i5 and i7 releases. The i7 also costs exactly $100 more, which is also the norm; HTT generally carries a $100 premium on consumer chips, unless we compare Pentium to i3. You still neglected the bulk of my statement; I am very curious to see your response to what I had to say. After all, you did insult me by treating me like some sub-human fanboy.

 

EDIT: I forgot to add, you can install Nvidia drivers without GFE. Whoever told you that you could not is incorrect.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


I've got a radical idea (well, not really, it's just one that makes more sense): let's not fire the people working for you, let's fire some stock-owner motherfuckers, and by "fire" I mean some high-caliber handguns right at their nuts. "Gotta make more profit every year, I don't care who you've acquired, it's less money, so it's bad..."

Without the stockholders (investors), Intel has far less money and far less value to the world. If the stockholders made a concerted effort to simply dump Intel's shares for $5 apiece on the market, Intel would evaporate overnight, the way its dividends and required share purchasing margins work.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


http://ark.intel.com/products/88095/Intel-Core-i5-5675C-Processor-4M-Cache-up-to-3_60-GHz

 

Not unconfirmed. It was stated over 12 days ago, and it is on par with the pricing cycle of previous i5 and i7 releases. The i7 also costs exactly $100 more, which is also the norm; HTT generally carries a $100 premium on consumer chips, unless we compare Pentium to i3. You still neglected the bulk of my statement; I am very curious to see your response to what I had to say. After all, you did insult me by treating me like some sub-human fanboy.

MSRP is always higher than the market launch price by $30-$50. One needs only look through history to see that. The only time MSRP is accurate is for E5/E7 Xeons.

 

You are being irrational and demanding a feature that no one actually needs and that would offer no value to Intel's sales pitch, and thus no extra revenue, while adding costs. (Intel's drivers themselves have entered shorter development cycles and are released more often, and the bug reports have dried up compared to what used to be reported on a weekly basis, unless there are twits trying to use features Intel's graphics do not support, something that can be very quickly looked up.) Intel builds what the market demands. It has done that with awe-inducing speed ever since the Athlon 64 days and the launch of Core 2. You giving them flak over it is proof you have zero understanding of the market or customer needs. If even a large minority of the market wanted it, it would be made and released within a week.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


http://ark.intel.com/products/88095/Intel-Core-i5-5675C-Processor-4M-Cache-up-to-3_60-GHz

 

Not unconfirmed. It was stated over 12 days ago, and it is on par with the pricing cycle of previous i5 and i7 releases. The i7 also costs exactly $100 more, which is also the norm; HTT generally carries a $100 premium on consumer chips, unless we compare Pentium to i3. You still neglected the bulk of my statement; I am very curious to see your response to what I had to say. After all, you did insult me by treating me like some sub-human fanboy.

 

EDIT: I forgot to add, you can install Nvidia drivers without GFE. Whoever told you that you could not is incorrect.

I never said you couldn't install drivers without GFE. I said you couldn't install the control panel without it, and while it's been a number of years since I've had to rebuild a Windows system, the last time I grabbed it, I had to get GFE and then very carefully scalpel away GFE and burn it, while leaving some phantom registry keys and such so the control panel wouldn't bug out. If that has changed recently, good for Nvidia.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

