AMD's Big HPC Swing: Exascale Heterogeneous Processor [Updated]

patrickjp93

>32 cores

I guess AMD never learns.

>Enterprise/Data-centre/Server/Supercomputer product

 

I guess LTT users never learn?



>Enterprise/Data-centre/Server/Supercomputer product

 

I guess LTT users never learn?

They learn more often than WCCF users.



They learn more often than WCCF users.

Well sure, but that's like saying something learns more often than a dodo bird. WCCFTech users aren't exactly the cream of the crop ;)



Well sure, but that's like saying something learns more often than a dodo bird. WCCFTech users aren't exactly the cream of the crop ;)

And let's not forget that WCCF Tech articles get posted all the time here.


And let's not forget that WCCF Tech articles get posted all the time here.

I have no issues with WCCF articles. They are exactly what they say they are: a rumour website. They never claim to be anything different. They report all the rumours. The ones closer to launch tend to be more accurate, and they've had QUITE A FEW spot-on predictions, but they're still a rumour website.

 

As long as the LTT Forum rules allow rumour and speculation news pieces, I will not have an issue with WCCF Tech as a source. We allow all other kinds of rumours and speculation - does it really matter if a rumour comes from The Verge vs WCCF Tech? They're equally likely to be true.

 

The users on that website however, are a whole different matter.



800 for just 100,000 nodes, plus ECC registered RAM for each, motherboards, fault tolerance (redundant power supplies and such), backup generators, power straight from the grid, and the infrastructure needed for that to be functional. I would say 1.5-2 billion dollars.

So basically pocket money for anyone in need of such a system?!
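
For the curious, a rough back-of-the-envelope sketch of that kind of estimate (all the per-node figures below are illustrative assumptions, not numbers from the thread, since the "800" above is ambiguous):

```cpp
#include <cstdio>

// Rough cluster-cost estimate. Every per-node price here is a hypothetical
// placeholder, chosen only to land in the ballpark quoted above.
int main() {
    const double nodes         = 100000.0;  // node count from the post
    const double cpu_per_node  = 800.0;     // assumed CPU cost per node ($)
    const double ram_per_node  = 2500.0;    // assumed ECC registered RAM ($)
    const double board_psu_net = 4700.0;    // board, redundant PSUs, interconnect ($)
    const double infra_factor  = 2.0;       // generators, grid hookup, facility

    const double hardware = nodes * (cpu_per_node + ram_per_node + board_psu_net);
    const double total    = hardware * infra_factor;

    std::printf("hardware only:       $%.2f billion\n", hardware / 1e9);
    std::printf("with infrastructure: $%.2f billion\n", total / 1e9);
    // Prints ~$0.80B hardware, ~$1.60B total: inside the 1.5-2 billion range.
    return 0;
}
```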


I'm so moist that I'm swimming. Too bad that I can't even afford a wallet.


I'm so moist that I'm swimming. Too bad that I can't even afford a wallet.

This type of thing would be useless for consumers. 



If AMD gets priority access to HBM2 for Zen, discrete Greenland, and maybe a supercomputer, I wonder if there's anything left for Nvidia? SK Hynix has ramped up production, so this supercomputer should not be impossible to make.

 

Wut? I highly doubt retail Zen FX CPUs will use any form of HBM; for a CPU, latency is much more important than raw bandwidth -- I will be very surprised if Zen uses anything other than standard DDR4 DIMMs. Secondly, this is going to exist in five years' time. Why you think that should interfere with Nvidia's roadmap for mid-2016 is beyond me.
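
A minimal sketch of the latency point: in a pointer-chasing loop every load depends on the one before it, so the CPU runs at memory-latency speed and extra bandwidth (HBM's selling point) buys nothing. Illustrative microbenchmark, not a rigorous one:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t n = 1 << 24;                 // ~16M entries, far beyond any L3
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});

    // Sattolo's algorithm: shuffle into a single random cycle so the chase
    // visits every slot exactly once before wrapping.
    std::mt19937_64 rng{42};
    for (size_t i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }

    size_t idx = 0;
    const auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < n; ++i) idx = next[idx];  // serial dependency chain
    const auto t1 = std::chrono::steady_clock::now();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    std::printf("~%.1f ns per dependent load (idx=%zu)\n", ns / n, idx);
    return 0;
}
```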


Wut? I highly doubt retail Zen FX CPUs will use any form of HBM; for a CPU, latency is much more important than raw bandwidth -- I will be very surprised if Zen uses anything other than standard DDR4 DIMMs. Secondly, this is going to exist in five years' time. Why you think that should interfere with Nvidia's roadmap for mid-2016 is beyond me.

 

The HBM2 would be for the integrated GPU in Zen APUs. Both Zen and Greenland will be released next year, so that interferes quite a bit with Nvidia's roadmap, especially since AMD has priority access to HBM2.



The HBM2 would be for the integrated GPU in Zen APUs. Both Zen and Greenland will be released next year, so that interferes quite a bit with Nvidia's roadmap, especially since AMD has priority access to HBM2.

The APUs are targeted for 2017, and don't forget HBM2 can be made by any JEDEC member. Micron is the most likely to pick it up, since Intel can make most of its own HMC chips and leave the TSVs and stack assembly to Micron, and since Micron actually has 3D chip experience that none of the others have.



The HBM2 would be for the integrated GPU in Zen APUs. Both Zen and Greenland will be released next year, so that interferes quite a bit with Nvidia's roadmap, especially since AMD has priority access to HBM2.

 

Last I heard, AMD had confirmed that their new FX processors wouldn't be APUs. Have they definitely announced that their new APUs are going to have the RAM on the chip itself? Is it only for the GPU part, and will they support DDR4 as well? Do you have a news source backing any of this up?


Last I heard, AMD had confirmed that their new FX processors wouldn't be APUs. Have they definitely announced that their new APUs are going to have the RAM on the chip itself? Is it only for the GPU part, and will they support DDR4 as well? Do you have a news source backing any of this up?

AMD's APU roadmaps include Zen cores for 2017. As for HBM1/2 integration, it's not confirmed, but honestly 2-4 GB is quite possible, as direct on-package integration would provide serious bandwidth for the iGPU. It could also significantly raise prices and lower yields, though. It's a risky move either way. Without it, Crystalwell can walk all over AMD in offering good bandwidth to Intel's iGPUs, unless AMD comes up with an eDRAM solution themselves.
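
For scale, here is the peak-bandwidth arithmetic behind that claim as a small sketch (the HBM per-stack numbers are the published spec figures; the DDR4 configuration is an assumed mainstream baseline):

```cpp
#include <cstdio>

// Peak bandwidth (GB/s) = bus width in bytes * transfer rate in GT/s.
double peak_gbs(double bus_bits, double gts) { return bus_bits / 8.0 * gts; }

int main() {
    // Dual-channel DDR4-2400: 2 x 64-bit channels at 2.4 GT/s (assumed config).
    std::printf("DDR4-2400 dual channel: %6.1f GB/s\n", peak_gbs(128, 2.4));
    // One HBM1 stack: 1024-bit bus at 1.0 GT/s.
    std::printf("HBM1, one stack:        %6.1f GB/s\n", peak_gbs(1024, 1.0));
    // One HBM2 stack: 1024-bit bus at up to 2.0 GT/s.
    std::printf("HBM2, one stack:        %6.1f GB/s\n", peak_gbs(1024, 2.0));
    return 0;
}
```

Even a single stack is roughly 3-7x what an iGPU gets from sharing dual-channel DDR4, which is the whole appeal.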



Last I heard, AMD had confirmed that their new FX processors wouldn't be APUs. Have they definitely announced that their new APUs are going to have the RAM on the chip itself? Is it only for the GPU part, and will they support DDR4 as well? Do you have a news source backing any of this up?

 

Well, it's all rumours, so very difficult to say (see Patrick's post beneath), but according to Fudzilla (http://www.fudzilla.com/news/processors/37494-amd-x86-16-core-zen-apu-detailed) the server-based Zen APUs will have HBM for the Greenland GPU. But again, it's all speculation.

 

The APUs are targeted for 2017, and don't forget HBM2 can be made by any JEDEC member. Micron is the most likely to pick it up, since Intel can make most of its own HMC chips and leave the TSVs and stack assembly to Micron, and since Micron actually has 3D chip experience that none of the others have.

 

May be the case indeed. I think it depends on the production supply of GloFo and their 14nm FinFET process. I've said it before: just because HBM is a JEDEC standard does not magically make other companies capable of actually manufacturing it. The ones who might be able to may not spend resources on it. We can of course speculate, but so far only SK Hynix has officially announced HBM production. Not even the rumours mention other vendors. This might change of course, but so far we really can't conclude either way (which I acknowledge you don't). We will have to wait and see.



AMD's APU roadmaps include Zen cores for 2017. As for HBM1/2 integration, it's not confirmed, but honestly 2-4 GB is quite possible, as direct on-package integration would provide serious bandwidth for the iGPU. It could also significantly raise prices and lower yields, though. It's a risky move either way. Without it, Crystalwell can walk all over AMD in offering good bandwidth to Intel's iGPUs, unless AMD comes up with an eDRAM solution themselves.

 

Is there even a point? Until there is a change in the way most programs handle hardware acceleration (like... if they could actually use it...), we're already at the threshold of how much performance from an iGPU is actually useful. Unless AMD seeks to render everything up to a 370 obsolete from their discrete series?


Is there even a point? Until there is a change in the way most programs handle hardware acceleration (like... if they could actually use it...), we're already at the threshold of how much performance from an iGPU is actually useful. Unless AMD seeks to render everything up to a 370 obsolete from their discrete series?

Usefulness for DirectX 12 multi-adapter, not to mention bringing the tech into laptops to take back some of Intel's ultrabook holdings, is the primary reason. And they built HSA to help them address your first point. As to whether or not that ends up a success, I doubt it when even Intel can't get the consumer space to jump on OpenMP, but eh, what do I know? I'm just a C/C++ programmer. And don't forget AMD has the dual-graphics option, which could benefit them more if the iGPU isn't entirely beholden to system memory.

 

And I think AMD and Intel are both trying to undermine Nvidia more than anything else right now.
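
For reference, this is the kind of OpenMP usage the consumer space hasn't jumped on: one pragma spreads a loop across every core. A minimal sketch; compile with `g++ -fopenmp`:

```cpp
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const int n = 1 << 20;
    std::vector<double> a(n, 1.0), b(n, 2.0);
    double sum = 0.0;

    // One directive parallelises the loop and safely combines partial sums.
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];

    std::printf("threads: %d, dot = %.1f\n", omp_get_max_threads(), sum);
    return 0;
}
```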



Usefulness for DirectX 12 multi-adapter, not to mention bringing the tech into laptops to take back some of Intel's ultrabook holdings, is the primary reason. And they built HSA to help them address your first point. As to whether or not that ends up a success, I doubt it when even Intel can't get the consumer space to jump on OpenMP, but eh, what do I know? I'm just a C/C++ programmer. And don't forget AMD has the dual-graphics option, which could benefit them more if the iGPU isn't entirely beholden to system memory.

 

And I think AMD and Intel are both trying to undermine Nvidia more than anything else right now.

 

That last point seems a bit futile to me. If Nvidia cared about such low-end graphics devices, they wouldn't be sitting on about three generations of rebrands now. Beyond OEM rip-offs, I don't think they care about anything below the x50 tier.


That last point seems a bit futile to me. If Nvidia cared about such low-end graphics devices, they wouldn't be sitting on about three generations of rebrands now. Beyond OEM rip-offs, I don't think they care about anything below the x50 tier.

It's not just about the low end. If AMD and Intel can both make the x50 or x50 Ti chips obsolete, there goes 1/4 of all of Nvidia's revenue. Take out the x60/Ti chips and you have 3/5 of Nvidia's revenue.



It's not just about the low end. If AMD and Intel can both make the x50 or x50 Ti chips obsolete, there goes 1/4 of all of Nvidia's revenue. Take out the x60/Ti chips and you have 3/5 of Nvidia's revenue.

 

And that is exactly why I claim Nvidia is more screwed in the long run than AMD. It's also the reason Nvidia is so fixated on vendor lock-in with closed proprietary tech like G-Sync and GameWorks (APEX).



It's not just about the low end. If AMD and Intel can both make the x50 or x50 Ti chips obsolete, there goes 1/4 of all of Nvidia's revenue. Take out the x60/Ti chips and you have 3/5 of Nvidia's revenue.

Nvidia, however, are reporting that the average amount a consumer spends is increasing. AMD would also be undercutting their own discrete GPUs -- they will in a single swoop be removing everything up to the 390 from the discrete market and essentially slashing whatever cost overhead they could afford when they weren't also competing on the CPU front. The alternative is their CPUs become much, much more expensive than Intel's.


And that is exactly why I claim Nvidia is more screwed in the long run than AMD. It's also the reason Nvidia is so fixated on vendor lock-in with closed proprietary tech like G-Sync and GameWorks (APEX).

Nvidia is positioned with the strongest ARM architecture out there and is poised to improve upon it. If Nvidia can bring ARM back to the forefront of tablets, it can survive duking it out with AMD and Intel in SoCs. ARM will continue having a power advantage for another 3-4 years before the inescapable fact that they need an out-of-order engine as strong as Intel's flies into the faces of the various SoC makers, and then it's going to become very apparent that Intel can push higher tiers of performance into smaller form factors better than ARM can scale its performance up in the same form factors.

 

This web, this dance, is very intricate. Nvidia may have to give Intel unfettered access to its GPU architecture in order to keep from being wiped out as an independent entity, but I'm not entirely sure Intel won't turn them down and put the screws to them by brute force, just chucking more EUs and an HMC cache on-package to out-size all of Nvidia's smaller lineup. Whether Nvidia can kill AMD before it gets buried itself is the only decisive variable left to settle. The pieces are moving, and the scales are tipping as the cogs tick and turn. Before the end of this decade we'll know which of Intel's two competitors will be gasping its last breaths, and which one will have to face down the big blue dragon one on one in a brutal war the likes of which hasn't been seen since IBM and Intel first locked horns. If somehow AMD can sway the world with Zen when Intel hasn't been able to get multicore programming to take root in consumer computing for a decade, and if Arctic Islands can actually edge out Pascal in HPC, Nvidia may have to retreat into niche sectors like cars, helicopters, guidance systems, and government contracts to survive, though the shareholders won't be happy.



Nvidia, however, are reporting that the average amount a consumer spends is increasing. AMD would also be undercutting their own discrete GPUs -- they will in a single swoop be removing everything up to the 390 from the discrete market and essentially slashing whatever cost overhead they could afford when they weren't also competing on the CPU front. The alternative is their CPUs become much, much more expensive than Intel's.

AMD isn't undercutting anything. If consumers still spend the same, and AMD improves dual graphics significantly enough, then consumers (gamers at least) will be better off with an all-AMD platform with an APU and dGPU, and there'll be nothing Nvidia could do to counter unless it got x86 and x86-64 licenses from Intel and AMD and fast-tracked a desktop version of the Denver successors, OR it gave Intel everything in terms of usage rights for its GPU technology for iGPU purposes free of charge. Either that, or its dGPUs at every tier would have to blow AMD's single cards out of the water to be able to counter not just the dGPU but also the iGPU. What are the chances Nvidia pulls that off?



AMD isn't undercutting anything. If consumers still spend the same, and AMD improves dual graphics significantly enough, then consumers (gamers at least) will be better off with an all-AMD platform with an APU and dGPU, and there'll be nothing Nvidia could do to counter unless it got x86 and x86-64 licenses from Intel and AMD and fast-tracked a desktop version of the Denver successors, OR it gave Intel everything in terms of usage rights for its GPU technology for iGPU purposes free of charge. Either that, or its dGPUs at every tier would have to blow AMD's single cards out of the water to be able to counter not just the dGPU but also the iGPU. What are the chances Nvidia pulls that off?

 

Sure they are. If their iGPU is as effective as a 960 or 960 Ti (or the equivalent tier), then that's the same as saying it's as effective as a 380 or 380X. For their consumers to be spending the same, they are going to have to sell these APUs for £300 each, and that's assuming a CPU competitive with an i5. That's before you consider whether most gamers would be more comfortable with a single high-end GPU or two 380s in CrossFire, with one just happening to be on their APU.


Nvidia is positioned with the strongest ARM architecture out there and is poised to improve upon it. If Nvidia can bring ARM back to the forefront of tablets, it can survive duking it out with AMD and Intel in SoCs. ARM will continue having a power advantage for another 3-4 years before the inescapable fact that they need an out-of-order engine as strong as Intel's flies into the faces of the various SoC makers, and then it's going to become very apparent that Intel can push higher tiers of performance into smaller form factors better than ARM can scale its performance up in the same form factors.

 

This web, this dance, is very intricate. Nvidia may have to give Intel unfettered access to its GPU architecture in order to keep from being wiped out as an independent entity, but I'm not entirely sure Intel won't turn them down and put the screws to them by brute force, just chucking more EUs and an HMC cache on-package to out-size all of Nvidia's smaller lineup. Whether Nvidia can kill AMD before it gets buried itself is the only decisive variable left to settle. The pieces are moving, and the scales are tipping as the cogs tick and turn. Before the end of this decade we'll know which of Intel's two competitors will be gasping its last breaths, and which one will have to face down the big blue dragon one on one in a brutal war the likes of which hasn't been seen since IBM and Intel first locked horns. If somehow AMD can sway the world with Zen when Intel hasn't been able to get multicore programming to take root in consumer computing for a decade, and if Arctic Islands can actually edge out Pascal in HPC, Nvidia may have to retreat into niche sectors like cars, helicopters, guidance systems, and government contracts to survive, though the shareholders won't be happy.

 

Indeed, and that is really all Nvidia can do at the moment. I doubt, however, that ARM will be usable any time soon for mid/high-end gaming systems; especially with Zen launching next year, the gap will be huge.

 

It is very intricate, as we have no clue what is in the mould right now other than fairly vague roadmaps. I do, however, completely disagree that Nvidia would give Intel full access to their GPU IP, as that would be the nail in the coffin for Nvidia. They have no competence in x86/x86-64, so a CPU would probably be at least 4 years away if they started today.

What I speculate Nvidia will do is focus more on proprietary technology and vendor lock-in. I see them developing their Shield series much more, maybe to the point of having their own Nvidia TV console to compete with the PS5/XB2, at a much higher price point of course, but with G-Sync and mandatory GameWorks (at least for high-profile titles). I personally think G-Sync will be dead in 3 years (or at the very least redundant, as it pretty much already is).

 

You seem to have written off AMD already, but I doubt that will happen. I think the investors are more interested in keeping AMD around, especially when the biggest investor also owns GloFo. As such, whether the income comes from AMD or GloFo is really just a side note; losing the AMD business would probably hurt more than keeping AMD afloat. Also, with what we are seeing AMD working on, I think they will gain large market share. AMD is getting control of the rendering market now with the highest-performing FirePro cards, and with Zen, AMD can deliver a complete compute base for servers with both CPUs and GPUs. If AMD can survive their debt, they should be good. Nvidia needs to reinvent themselves, or dominate the market to a point where they can dictate direction.



Anyone else find it suspicious that this was announced so shortly after the US Gov't announced its investment in an exascale supercomputer?

 

Coincidence? I think not.

 

I doubt it's guaranteed, but I have no doubt that AMD is pursuing that contract heavily. They would just love to beat Intel to supplying that big of a project.

Don't Xeons have Hyper-Threading? So is the new 28-core Skylake really 56 threads? Or is that just wrong logic?

And don't the new Skylake Xeons support octa-CPU motherboards?
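
Close: Hyper-Threading exposes extra logical threads, not extra cores, so a 28-core part reports 56 hardware threads (28 x 2). A minimal sketch of how software sees that (std::thread::hardware_concurrency() reports logical threads and may legitimately return 0 if unknown):

```cpp
#include <cstdio>
#include <thread>

int main() {
    // Reports logical CPUs: physical cores x SMT ways (2 on HT-enabled Xeons).
    const unsigned logical = std::thread::hardware_concurrency();
    if (logical == 0) {
        std::printf("thread count unknown on this platform\n");
        return 0;
    }
    std::printf("logical threads: %u (~%u physical cores with 2-way SMT)\n",
                logical, logical / 2);
    return 0;
}
```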

 

 

 

