AMD's Big HPC Swing: Exascale Heterogeneous Processor [Updated]

patrickjp93

No it wouldn't. When you gain a monopoly through raw competition, you're within the law and free to do as you please, as long as you don't abuse that position.

uhm, nope.

 

If AMD is gone, then there IS no competition. Even if Nvidia had been given the x86-64 license from AMD, it has neither the expertise nor the ability to produce. It would just be a license holder, not an actual market competitor on any front.

 

That would leave only Intel in the x86 segment. Nobody else has the licenses to produce.

 

The whole thing is based upon LICENSES. The holders, AMD (for 64-bit) and Intel (for 32-bit), together hold the ENTIRE right to produce ANY processor using these instruction sets.

Since these instruction sets are industry standards, the only way for Intel NOT to be a monopoly is to grant x86 licenses to other companies willing to enter the market, of which there aren't many. IBM seems to care only about its enterprise-level business. Nvidia isn't suited to take on Intel in the first place, mainly due to its lack of expertise in building x86 CPUs. Qualcomm would be mad to try competing with Intel when it already has its hands full with the ARM mobile market.

 

The only player big enough and experienced enough, or rich enough to buy the experience, that has its own fabs is Samsung. And even if Samsung got an x86 license, I truly do not believe it would make any notable impact on the market for a few generations. Thus Intel would reign alone for perhaps a decade or so, if AMD is gone and nobody gets the blueprints for its upcoming designs.

 

AMD dying is THE WORST thing that can happen, not only for us but for the industry. Also note that Intel needs AMD's license to use the 64-bit instruction set. If Intel weren't allowed to use it because of new license holders, we would be forced back to 32-bit systems while Intel pushed through its original 64-bit instruction set (Itanium's IA-64), which is not backwards compatible with its own 32-bit ISA. The net result would be havoc on the software side too, as you would have to write specifically for either 32-bit or 64-bit, and software would be hardware-limited rather than OS-limited.


-snip-

I think you're forgetting Nvidia's Tegra chip lines, which are used in tablets and cars today with IPC similar to Nehalem's. Nvidia is a potential competitor, and it has expressed interest in x86 before; Intel turned it down.

 

Intel could get around x86-64 more easily than you'd think. Creating an Itanium 2.0 and deploying microcode changes is actually very simple, and Intel has the compiler teams to help software adjust rapidly.

 

You have little understanding of this landscape. Nvidia is the only one equipped to handle Intel across mobile, desktop, and HPC. Samsung doesn't come close, as its fabs are only good for mobile processors.

 

And you don't need prior experience with x86 to build a good x86 CPU. Nvidia proved the same with ARM in Denver: with barely any experience it built the best ARM processor in existence, by far, until Apple added a third core in its A8X. If you think the same can't be done in x86, when ARM stopped being true RISC back in ARMv6, you don't have a clue. It's all applicable.

 

And again, you're wrong about software compatibility, both because of the chips currently on sale and because Intel could turn that situation around practically overnight.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


If it's running ARM or PowerPC then no, because Crysis is an x86 application :)

Simple solution to that: write the game in OpenGL and use GCC to compile for PPC or Clang for ARM.
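
As a rough illustration of that "just recompile" claim, here is a minimal, hypothetical sketch (the cross-toolchain names below are common defaults and an assumption on my part, not anything tied to Crysis): the same architecture-neutral C++ source builds for x86-64, PPC64, or ARM just by switching the compiler target.

```cpp
// portability_check.cpp -- trivial, architecture-neutral C++.
// Hypothetical cross-compile invocations (toolchain names are assumptions):
//   x86-64:  g++ portability_check.cpp -o game_x86_64
//   PPC64:   powerpc64le-linux-gnu-g++ portability_check.cpp -o game_ppc64
//   ARM64:   clang++ --target=aarch64-linux-gnu portability_check.cpp -o game_arm64
#include <iostream>

int main() {
#if defined(__x86_64__)
    std::cout << "Built for x86-64\n";
#elif defined(__aarch64__) || defined(__arm__)
    std::cout << "Built for ARM\n";
#elif defined(__powerpc64__) || defined(__powerpc__)
    std::cout << "Built for PowerPC\n";
#else
    std::cout << "Built for some other architecture\n";
#endif
}
```

The compiler switch is the easy part; linking OpenGL and the platform libraries for each target is where the real porting work sits.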

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


-snip-

Tegra is ARM... and to be honest, Nvidia's tablets are only niche products. If you want a company with REAL ARM competence, look towards Samsung or Qualcomm. Both would demolish Nvidia even in pure ARM CPU power.

 

ARM != x86 or x86-64

 

The way they work, the way they communicate, it's not the same. ARM doesn't multitask; it runs scheduled tasks sequentially based on a timing schedule...

 

Just because you can build a CPU to power a tablet doesn't mean you have the competence to scale things up, nor to deal with the difficulties of x86...


-snip-

The Denver Tegra is actually platform-agnostic. It's an emulation device that uses the long-hated and never-successful VLIW (Very Long Instruction Word) concept: it translates incoming instructions into its own native ones after running a long-form parallelism analysis on the code, both in terms of threads and in terms of data/control hazards. It's a rather brilliant design. In fact, Nvidia's first Denver K1 emulated x86 and Intel brought down the hammer on it, because with Denver locked up in patents it represents a huge danger to Intel if Nvidia lets IBM in on it.
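
To make the translation idea concrete, here is a toy sketch of the general code-morphing/dynamic-binary-translation concept being described, written as an assumed illustration rather than anything resembling Denver's actual microarchitecture: guest code is translated into cached native blocks once, then reused on every later execution.

```cpp
// toy_dbt.cpp -- a *toy* model of a translation cache, not Nvidia's design.
#include <cstdint>
#include <functional>
#include <iostream>
#include <unordered_map>

using NativeBlock = std::function<uint64_t(uint64_t)>;  // stand-in for emitted machine code

// Pretend translator: a real DBT would decode a guest basic block, analyze
// data/control hazards, and emit an optimized native (e.g. VLIW) bundle.
NativeBlock translate(uint64_t guest_pc) {
    std::cout << "translating block at guest PC 0x" << std::hex << guest_pc << std::dec << '\n';
    return [](uint64_t x) { return x + 1; };             // dummy block semantics
}

int main() {
    std::unordered_map<uint64_t, NativeBlock> cache;     // translation cache
    uint64_t pc = 0x1000, reg = 0;

    for (int i = 0; i < 5; ++i) {
        auto it = cache.find(pc);
        if (it == cache.end())                            // miss: pay the translation cost once...
            it = cache.emplace(pc, translate(pc)).first;
        reg = it->second(reg);                            // ...then reuse the cached native block
    }
    std::cout << "reg = " << reg << '\n';                 // translated once, executed five times
}
```

The point of such a design is that translation cost is amortized: hot code runs from the cache at native speed, which is why a core like this can, in principle, front-end more than one instruction set.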

 

And no, the Denver K1 Tegra still stomps all current ARM chips except the 3-core A8X; benchmarks across the board confirm this. Samsung's custom architecture in the new Exynos is not remotely impressive. Perhaps AMD's K12 will be, but that's up in the air. Samsung is still a small-time chip designer, like Qualcomm, and won't be catching up to Apple or Nvidia for many years to come.

 

Incorrect. ARM went with out of order processing years ago. It can in fact multitask.

 

Yes, it does, when that starting chip is the most advanced ARM design on the planet and can keep up with Nehalem. Scaling up clock rates and core counts is easy up to about 4 GHz.

 

And Nvidia's chip designs aren't niche, or at the very least won't be for much longer. Samsung's currently in talks to use a Denver variant in their next tablet line.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


-snip-

Uhm... Samsung small?

 

They make the chips for the latest-generation iPhones, they are one of the giants in ICs in general, and there is literally NOTHING that company doesn't have a part in (sadly)... While I do not like Samsung that much, I do recognize that the company is way ahead of most others in some areas.

If Apple could produce better stuff itself, why did it ever bother to outsource parts for the iPhone 6?

 

And if Intel brought the hammer down on the Denver chip in the first place, how do you think Samsung will benefit much from it if its sole "superpower" was being ISA-agnostic? Sounds more like wild speculation than anything...

 

There is literally nothing to compete with Intel, if Intel wants to compete. At the moment Intel has no reason to compete, so it is just dozing off. It faces no threats, because it knows only one competitor CAN compete, from time to time, and that competitor is preoccupied with paying bills and figuring out how to make money.


-snip-

There's an important difference between designing a chip and manufacturing it.

Also, the majority of the A8 chips and all the A8Xs are manufactured by TSMC. And the mere fact that a company operates in many branches doesn't imply that they excel in any of them.


There's an important difference between designing a chip and manufacturing it.

Also, the majority of the A8 chips and all the A8Xs are manufactured by TSMC. And the mere fact that a company operates in many branches doesn't imply that they excel in any of them.

Didn't Samsung have their own fabs? Or was that only for NAND and DRAM?


-snip-

Samsung's foundries only target a single area: low power. They're not remotely equipped or experienced in moderate or high power or performance scenarios. They are an amateur among the foundries of the world, even if huge. Don't twist my words.

 

Apple doesn't outsource much, and all of the chip design is in-house, even if production has to go to a foundry. Foundry expenses would hurt Apple's bottom line, hence the growth of the fabless IC industry, of which Apple is a part.

 

Intel is not remotely dozing or complacent; huge improvements have been made in its CPUs since the days of Nehalem, chiefly in the form of SIMD. The problem is software not keeping up, both because of Microsoft's extensive legacy support in its operating systems and because of its lousy compiler (compared to the competition in GCC, Clang, and ICC), which doesn't do well at vectorizing non-obvious code. Software companies aren't providing multiversioned code that would allow both legacy support and use of the best hardware on the market at deployment time, because they want to keep executable sizes (and thus downloads and disc capacities) as small as possible. Instead they target the oldest processor supported by the OS, namely the Pentium III for Windows 7 and the Pentium 4 for Windows 10. The instructions available to those chips have been optimized to within nanoscopic distances of their theoretical best: http://www.agner.org/optimize/instruction_tables.pdf (page 186 for Haswell's instruction latencies).

The problem is not Intel. Intel is also pushing clocks and performance in mobile like no other company while also driving HPC; desktop is out of its control. That control has fallen mostly into Microsoft's hands, and while Intel has done all the work, there's little to show for it because of Microsoft's distortion of software evolution.
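
To make the "multiversioned code" point concrete, here is a minimal sketch (an assumed example of mine, not anything from the post or from any particular vendor) using GCC's target_clones attribute, which emits several SIMD variants of one function in a single binary and picks the best one for the running CPU at load time:

```cpp
// multiversion.cpp -- compile with: g++ -O3 multiversion.cpp
// (target_clones needs a reasonably recent GCC, or newer Clang, on an
//  ifunc-capable platform such as x86-64 Linux/glibc -- an assumption here.)
#include <cstddef>
#include <iostream>
#include <vector>

// The compiler generates one clone per listed target plus a "default"
// fallback, and a resolver picks among them once at load time via CPUID.
__attribute__((target_clones("avx2", "sse4.2", "default")))
double dot(const double* a, const double* b, std::size_t n) {
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];          // each clone gets its own vectorization
    return sum;
}

int main() {
    std::vector<double> a(1024, 0.5), b(1024, 2.0);
    std::cout << dot(a.data(), b.data(), a.size()) << '\n';  // prints 1024
}
```

The binary grows by one function clone per target rather than shipping separate executables, which is exactly the trade-off the paragraph above says vendors dodge by targeting the oldest supported CPU.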

 

Nvidia could compete with Intel in CPUs if it had AMD's IP and access to the x86-64 license. Intel could easily compete with Nvidia in dGPUs if it gained access to AMD/ATI's GPU IP. The problem for both of these companies is patent trolling on the part of the other. Don't lie to yourself: Nvidia is the only one that can compete with Intel in chip design for the consumer space. Samsung, Qualcomm, MediaTek, Rockchip, and Apple can't. IBM and Oracle could, but they have no interest in the consumer market. Samsung may be a huge foundry, but it's an amateur of a foundry with only one node flavor. It's also an amateur in chip design, something that shows in its new Exynos, whose custom ARM architecture barely improves on the vanilla cores.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


I find it a good thing that AMD is still trying to stay above water.

$8,000 a chip... that's a good deal. 32 GB of cache? That's insane.

I can buy it if the lords of volatility smile upon me.

Like watching Anime? Consider joining the unofficial LTT Anime Club Heaven Society~ ^.^

 

 


-snip-

Take note everyone. This is what a decent news post looks like. A reasonable summary of the article, no flaming, some technical background, and updates as new information becomes available.

I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage , Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking


Take note everyone. This is what a decent news post looks like. A reasonable summary of the article, no flaming, some technical background, and updates as new information becomes available.

Hehehe, flattery will only get you so far with me, but thank you.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


AMD x86 16-core Heterogenous EHP Processor revealed

This is going to be interesting! :-)
For an AMD fan like me who just recently bought a 4790K... that 16-core AMD CPU will be my next chip! :-D

Article: Fudzilla

fudzillaAMD16Zen.jpg

 

http://www.fudzilla.com/news/processors/38380-amd-x86-16-core-heterogenous-ehp-processor-detailed?cid=dlvr.it


The question here is, do we need 32 threads?...

I'm just curious how well the iGPU is going to perform.

 

And how much it's going to heat up (jk)

 

Edit: Hold up, x86 not x64? WUT?

I produce music!



-snip-

I thought this was revealed a while back 

Current Rig:   CPU: AMD 1950X @4Ghz. Cooler: Enermax Liqtech TR4 360. Motherboard:Asus Zenith Extreme. RAM: 8GB Crucial DDR4 3666. GPU: Reference GTX 970  SSD: 250GB Samsung 970 EVO.  HDD: Seagate Barracuda 7200.14 2TB. Case: Phanteks Enthoo Pro. PSU: Corsair RM1000X. OS: Windows 10 Pro UEFI mode  (installed on SSD)

Peripherals:  Display: Acer XB272 1080p 240Hz G Sync Keyboard: Corsair K95 RGB Brown Mouse: Logitech G502 RGB Headhet: Roccat XTD 5.1 analogue

Daily Devices:Sony Xperia XZ1 Compact and 128GB iPad Pro


Give me one reason why you wouldn't recommend this to someone who's going to build a PC in the near future. I bet you'd go for Intel... impressive, AMD, but Intel is still leading.

Security Analyst & Tech Enthusiast

Ask me anything.


IPC and single-core performance are going to make all the difference.

Most apps should take advantage of 4 to 8 cores, but 16 cores and 32 threads? I'm not so sure about that.
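
For what it's worth, code written to split work across however many hardware threads the OS reports will use all 32 without special-casing them; a minimal C++ sketch (purely illustrative, nothing AMD-specific) follows.

```cpp
// threads.cpp -- compile with: g++ -O2 -pthread threads.cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // 32 on a 16-core/32-thread part, 8 on a typical quad-core with SMT.
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    const std::size_t n = std::size_t(1) << 24;
    std::vector<double> data(n, 1.0);
    std::vector<double> partial(workers, 0.0);

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&, t] {
            const std::size_t begin = std::size_t(t) * n / workers;
            const std::size_t end   = std::size_t(t + 1) * n / workers;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& th : pool) th.join();

    std::cout << "sum = " << std::accumulate(partial.begin(), partial.end(), 0.0)
              << " using " << workers << " threads\n";
}
```

Whether typical desktop software bothers to scale like this is exactly the open question; servers, renderers, and encoders already do.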


Give me one reason why you wouldn't recommend this to someone who's going to build a PC in the near future. I bet you'd go for Intel... impressive, AMD, but Intel is still leading.

 

Easy: you shouldn't be planning a build that far in advance... fuck, even one month can make a huge difference in prices.

 

It has nothing to do with performance, or Intel vs AMD... it's just good sense. This CPU is due in 2016.


I'm afraid AMD will just make the same mistake again and build a CPU with a fuckton of cores and bad IPC... I hope not.

MacBook Pro 15' 2018 (Pretty much the only system I use)


Give me one reason why you wouldn't recommend this to someone who's going to build a PC in the near future.

Because nothing benefits from 32 threads, lmao.



AMD, you don't need to put integrated graphics in a chip with 16 cores.

Location: Kaunas, Lithuania, Europe, Earth, Solar System, Local Interstellar Cloud, Local Bubble, Gould Belt, Orion Arm, Milky Way, Milky Way subgroup, Local Group, Virgo Supercluster, Laniakea, Pisces–Cetus Supercluster Complex, Observable universe, Universe.


12700, B660M Mortar DDR4, 32GB 3200C16 Viper Steel, 2TB SN570, EVGA Supernova G6 850W, be quiet! 500FX, EVGA 3070Ti FTW3 Ultra.

 


Are they still crappy combined CPU cores?

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU;1TB Seagate Barracuda;Corsair 200r case. 


AMD, you don't need to put integrated graphics in a chip with 16 cores.

Even Intel is going to put an iGPU into Xeons for good reasons... I can't see why you wouldn't want a CPU with an iGPU in it.


The question here is, do we need 32 threads?...

I'm just curious how well the iGPU is going to perform.

 

And how much it's going to heat up (jk)

 

Edit: Hold up, x86 not x64? WUT?

 

It's x64. This is most likely a high-end server chip, where those 16 cores / 32 threads are needed, or one for data centers where virtualization is pretty standard now (without the iGPU); depending on the workload, it'll perform great.

 

Give me one reason why you wouldn't recommend this to someone who's going to build a PC in the near future. I bet you'd go for Intel... impressive, AMD, but Intel is still leading.

 

This isn't for those people; the chip is going to cost a few thousand dollars and compete nicely with Intel's higher price bracket. AMD wants more market share in that $14 billion market.

 

IPC and single-core performance are going to make all the difference.

Most apps should take advantage of 4 to 8 cores, but 16 cores and 32 threads? I'm not so sure about that.

This, again, is for the server space; when you have racks full of servers, I don't think the difference between Ivy Bridge and Skylake is going to be the biggest thing.

 

Are they still crappy combined CPU cores?

zen.jpg

 

Nope

 

Because nothing benefits from 32 threads, lmao.

 

Virtualization does, but then again the iGPU isn't going to play much of a role in a current data center.

 

AMD, you don't need to put integrated graphics in a chip with 16 cores.

Well, it depends on the situation; this isn't a consumer lineup, and there will be chips without integrated graphics. I don't think any consumer lineup needs 1-2 TB of RAM.

Computing enthusiast. 
I use to be able to input a cheat code now I've got to input a credit card - Total Biscuit
 


AMD, you don't need to put integrated graphics in a chip with 16 cores.

They don't need it, but with the HBM, who knows.

We need to wait for the release and see the benchmarks for that chip.

 

IPC and single-core performance are going to make all the difference.

Most apps should take advantage of 4 to 8 cores, but 16 cores and 32 threads? I'm not so sure about that.

True, but now that we have Windows 10, that OS makes better use of more cores, etc.

So now we wait and see whether it's for consumers or for enterprise/servers.

