
[Press Release] IBM and NVIDIA Launch Supercomputer Centers of Excellence

zMeul

source: IBM

 


 

IBM (NYSE: IBM) along with NVIDIA and two U.S. Department of Energy National Laboratories today announced a pair of Centers of Excellence for supercomputing – one at the Lawrence Livermore National Laboratory and the other at the Oak Ridge National Laboratory. The collaborations are in support of IBM’s supercomputing contract with the U.S. Department of Energy. They will enable advanced, large-scale scientific and engineering applications both for supporting DOE missions, and for the Summit and Sierra supercomputer systems to be delivered respectively to Oak Ridge and Lawrence Livermore in 2017 and to be operational in 2018.

 

As the new supercomputers are being readied for installation, the Centers of Excellence will prepare the way for their optimum use in scientific research in such critical areas as energy, climate research, cosmology, biophysics, astrophysics and medicine, as well as in national nuclear security and other national security interests.

In an era of increasing global competition in high performance computing, the Centers are designed to enable the U.S. Department of Energy’s National Laboratories to sustain innovation leadership in science and technology while also driving down energy consumption and costs of computing. 

At each of the Centers, teams of technologists will gain crucial application perspective that will complement the hardware and software development of Summit and Sierra and will enable application readiness at the time of installation. Early application code innovation, executed in tandem with the system development, allows important two-way feedback between the system developers and the application writers. This will ensure that the ongoing system design will correctly and effectively support necessary user applications.

Incorporating IBM’s advanced POWER processors with next-generation NVIDIA® Tesla® GPU accelerators and the NVIDIA NVLink™ high-speed processor interconnect, Summit and Sierra will use a highly efficient, high-performance data-centric computing approach that minimizes data in motion, thereby helping to optimize problem solving and time to solution while also greatly reducing overall energy consumption.
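
To make "data-centric" and "minimizes data in motion" a bit more concrete, here is a minimal, hedged CUDA sketch. Nothing in the release specifies Summit's or Sierra's actual programming environment, and the kernel, names and sizes below are invented for illustration; the point is only that data stays resident where the compute happens instead of being shuttled over an interconnect on every step, which is what managed/unified memory over a fast link such as NVLink is meant to make cheap.

```cuda
// Illustrative sketch only: names, sizes, and the toy kernel are invented here,
// not taken from the IBM/NVIDIA announcement.
#include <cstdio>
#include <cuda_runtime.h>

// A toy "simulation step" that updates data already resident on the GPU.
__global__ void stepKernel(float *field, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        field[i] = 0.99f * field[i] + 0.01f;   // damped relaxation toward 1.0
}

int main() {
    const int n = 1 << 20;
    float *field = nullptr;

    // Managed (unified) memory: one allocation visible to both CPU and GPU,
    // so the data is not copied back and forth between every step.
    cudaMallocManaged(&field, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        field[i] = 0.0f;                        // initialize on the host

    const int block = 256;
    const int grid  = (n + block - 1) / block;
    for (int step = 0; step < 1000; ++step)
        stepKernel<<<grid, block>>>(field, n);  // data stays on the device

    cudaDeviceSynchronize();                    // touch it from the host only at the end
    printf("field[0] after 1000 steps: %f\n", field[0]);

    cudaFree(field);
    return 0;
}
```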

“Application code innovation is a vital component of making sure our facilities are prepared to take advantage of the performance of the new supercomputers,” said Michel McCoy, Program Director for Advanced Simulation and Computing at the Lawrence Livermore National Laboratory. “By partnering with IBM and NVIDIA, the Centers of Excellence bring together the people who know the science, the people who know the code, and the people who know the machines – ensuring we are innovating across the board so that Sierra and Summit will be primed to achieve their missions for national security and scientific advancement as soon as they’re delivered.”

In addition to their data-centric design, the systems follow an OpenPOWER design concept that uses IBM’s open POWER architecture, as well as OpenPOWER Foundation member technology, including NVIDIA GPU and NVLink technologies, and Mellanox’s EDR 100Gb/s InfiniBand system interconnect. Applications developed at the Centers of Excellence will take full advantage of current and future innovations introduced by the growing OpenPOWER community of developers led in part by over 145 OpenPOWER Foundation members worldwide. Code innovations realized at the Centers will also benefit general purpose OpenPOWER-based commercial systems to be introduced by IBM and others.

A Collaborative Approach to Application Code Innovation

The two Centers of Excellence collaborations are uniquely set up to support each of the Oak Ridge and Lawrence Livermore labs’ specific missions. With each, key computational scientists from IBM and NVIDIA work closely with the applications scientists from the labs to develop tools and technologies that will optimize codes and achieve the best performance on Summit, Sierra and other general use systems that follow the OpenPOWER design concept. Together, the teams are developing new ways to think about the programming models, algorithms, applications and computer performance.

“The work accomplished through the Centers of Excellence will be a milestone in our collaboration with the U.S. Department of Energy,” said Dave Turek, IBM Vice President of HPC Market Engagement. “It is about more than just delivering our new data-centric OpenPOWER-based hardware systems. Along with NVIDIA, our scientists are ensuring Oak Ridge and Lawrence Livermore are able to get the most out of these revolutionary supercomputers to reach the next level of scientific discovery. In addition, our expectation is that many of the codes that are worked on will find benefit in other sectors of the U.S. economy.”

The work of the Centers of Excellence is managed by a technical steering group, which includes participants from IBM, NVIDIA and from Lawrence Livermore, Oak Ridge and Argonne National Laboratories. This collaborative approach is designed to ensure that critical applications are able to run on all of the U.S. Department of Energy’s supercomputers.

Shaping Future Scientific Discovery

Work is already underway to update and develop applications that have the potential to shape scientific discovery for years to come.  Center of Excellence scientists will support development of at least 13 applications for Oak Ridge’s Summit supercomputer. These applications were recently selected through the Center for Accelerated Application Readiness (CAAR) program. Summit and its applications will support the Office of Science in its science and energy mission, advancing knowledge in critical areas of government, academia and industry. The modeling and simulation applications span the sciences, from cosmology to biophysics to astrophysics. One of Oak Ridge’s applications will focus on advancing Earth system models for climate research while another will map the Earth’s interior using big data for seismology research.

At Lawrence Livermore’s Center of Excellence, IBM and NVIDIA experts will provide expert knowledge and understanding of the accelerated architecture to help national security applications evolve rapidly to support the safety, reliability and security of the nuclear stockpile. These experts will also support efforts in a broad range of computational science areas of importance to national security, for instance bio-security, energy security and global warming.

The Centers of Excellence are leveraging current IBM Power Systems and OpenPOWER-based technologies for the required programming efforts, with the first prototype of the advanced supercomputers expected to be available to system developers and application writers in late 2015. The Centers of Excellence will continue to deploy updated prototype systems in order to ensure the ongoing system design will correctly and effectively support the optimized applications.

----

I have no real thoughts on this; other than, as I said, nVidia is prioritizing Tesla accelerators over desktop GPUs - so don't expect desktop Pascal anytime soon


It would be fitting if Intel picks up AMD lol.

ROG X570-F Strix AMD R9 5900X | EK Elite 360 | EVGA 3080 FTW3 Ultra | G.Skill Trident Z Neo 64gb | Samsung 980 PRO 
ROG Strix XG349C Corsair 4000 | Bose C5 | ROG Swift PG279Q

Logitech G810 Orion Sennheiser HD 518 |  Logitech 502 Hero

 


Good to see these two working together.

 

  1. GLaDOS: i5 6600 EVGA GTX 1070 FE EVGA Z170 Stinger Cooler Master GeminS524 V2 With LTT Noctua NFF12 Corsair Vengeance LPX 2x8 GB 3200 MHz Corsair SF450 850 EVO 500 Gb CableMod Widebeam White LED 60cm 2x Asus VN248H-P, Dell 12" G502 Proteus Core Logitech G610 Orion Cherry Brown Logitech Z506 Sennheiser HD 518 MSX
  2. Lenovo Z40 i5-4200U GT 820M 6 GB RAM 840 EVO 120 GB
  3. Moto X4 G.Skill 32 GB Micro SD Spigen Case Project Fi

 


It would be fitting if Intel picks up AMD lol.

No no no  no no no no no no no no no no no  no no no no no no  no  no no no no n no o o . . .


It would be fitting if Intel picks up AMD lol.

 

No no no  no no no no no no no no no no no  no no no no no no  no  no no no no n no o o . . .

Intel wouldn't pick up the whole thing. If AMD went down it would be split up so Intel gets the GPU portion and Nvidia gets the CPU portion. What would really suck is if Intel acquired Nvidia which would screw over IBM and AMD very quickly.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Intel wouldn't pick up the whole thing. If AMD went down it would be split up so Intel gets the GPU portion and Nvidia gets the CPU portion. What would really suck is if Intel acquired Nvidia which would screw over IBM and AMD very quickly.

They basically work together already. I don't see how Nvidia would even want the CPU portion of AMD. If anything, they would go straight for Intel.


They basically work together already. I don't see how Nvidia would even want the CPU portion of AMD. If anything, they would go straight for Intel.

Because Nvidia needs to differentiate, and while it has made progress with Denver on ARM, ARM is not a ubiquitous Instruction Set the way x86 is. Even in phones Intel is now making progress against Qualcomm and stands to make more as the Atom and Quark teams learn from each other and start separating from Intel's old diehard mantra of performance first (which only gets so far in ultra mobile).



Because Nvidia needs to differentiate, and while it has made progress with Denver on ARM, ARM is not a ubiquitous Instruction Set the way x86 is. Even in phones Intel is now making progress against Qualcomm and stands to make more as the Atom and Quark teams learn from each other and start separating from Intel's old diehard mantra of performance first (which only gets so far in ultra mobile).

 

Have you ever really thought about what you are saying? NVidia wants AMD's CPU business, and Intel wants AMD's GPU business. So both want to be AMD. That is why AMD has all the console market: because they can do both. It is also the reason AMD has huge potential that should be unlocked next year.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


Have you ever really thought about what you are saying? NVidia wants AMD's CPU business, and Intel wants AMD's GPU business. So both want to be AMD. That is why AMD has all the console market: because they can do both. It is also the reason AMD has huge potential that should be unlocked next year.

 

Unless I'm mistaken, the only reason AMD has the console market is because they gave the LOWEST bill to MS, Sony and Nintendo. Not because they're the only ones who can handle the capacity or units required or whatever. 

 

Good to see these two working together.

 

Well, IBM and Nvidia are already so heavily involved in the supercomputing segment that it was inevitable they would collaborate on systems and ventures.


Have you ever really thought about what you are saying? NVidia wants AMD's CPU business, and Intel wants AMD's GPU business. So both want to be AMD. That is why AMD has all the console market: because they can do both. It is also the reason AMD has huge potential that should be unlocked next year.

No, neither of them wants to be AMD. Intel and Nvidia have very similar ideas about what heterogeneous integration should look like, and that vision differs very much from HSA. That said, Nvidia is vulnerable right now and could take a heavy pounding when Knights Landing comes out, especially if IBM can't deliver a significant improvement in Power 8+. If they lose the compute density crown to Intel even for a few months, the HPC business will swing clear out of IBM and Nvidia's court. Intel has also been after dGPU technology for quite a while now. It would prefer to take all of Nvidia's IP, since they're already using a fair amount of it in their integrated solutions. Having AMD's IP would be the second choice.



Unless I'm mistaken, the only reason AMD has the console market is because they gave the LOWEST bill to MS, Sony and Nintendo. Not because they're the only ones who can handle the capacity or units required or whatever. 

 

You are not mistaken, but the point is that only AMD has the technology (and IP rights) to provide both CPU and GPU solutions that are satisfactory for such gaming machines. Nvidia does not seem to have the necessary CPU capabilities (or IP rights), or at least could not provide a solution for the money. Intel has all the CPU stuff needed, but lacks GPU IP and has no idea how to make an actual gaming graphics card. This is also why AMD will continue to supply consoles in the future.

 

No, neither of them wants to be AMD. Intel and Nvidia have very similar ideas about what heterogeneous integration should look like, and that vision differs very much from HSA. That said, Nvidia is vulnerable right now and could take a heavy pounding when Knights Landing comes out, especially if IBM can't deliver a significant improvement in Power 8+. If they lose the compute density crown to Intel even for a few months, the HPC business will swing clear out of IBM and Nvidia's court. Intel has also been after dGPU technology for quite a while now. It would prefer to take all of Nvidia's IP, since they're already using a fair amount of it in their integrated solutions. Having AMD's IP would be the second choice.

 

Agreed about NVidia. They do have the shortest straw, as integrated GPUs will eat away at the GPU market from the ground (low end) up. At some point discrete GPUs will have to be very high end to perform better than an iGPU. This is why NVidia is so fixated on vendor lock-in, with G-Sync, various Shield hardware, and to some extent GameWorks: to keep consumers loyal and stuck to them, even as their products go obsolete. It's a clever strategy to focus on making loyal fanboys.

 

Not sure what will become of HSA, but Intel's integrated GPUs will "only" be used for compute and very basic 3D rendering. AMD has a huge advantage in this area, especially if they can make iGPUs do double precision/10-bit in tandem with dedicated FirePro cards. Another area that would hurt NVidia hard.

 

As for NVidia, they will probably have to get even bigger in the professional market, or they need to help steer the industry away from x86-64 to ARM or something similar that they can be part of. Or maybe the entire industry will just go towards floating point only and scrap integer completely. Neither seems viable in the near future.



You are not mistaken, but the point is that only AMD has the technology (and IP rights) to provide both CPU and GPU solutions that are satisfactory for such gaming machines. Nvidia does not seem to have the necessary CPU capabilities (or IP rights), or at least could not provide a solution for the money. Intel has all the CPU stuff needed, but lacks GPU IP and has no idea how to make an actual gaming graphics card. This is also why AMD will continue to supply consoles in the future.

 

 

Agreed about NVidia. They do have the shortest straw, as integrated GPUs will eat away at the GPU market from the ground (low end) up. At some point discrete GPUs will have to be very high end to perform better than an iGPU. This is why NVidia is so fixated on vendor lock-in, with G-Sync, various Shield hardware, and to some extent GameWorks: to keep consumers loyal and stuck to them, even as their products go obsolete. It's a clever strategy to focus on making loyal fanboys.

 

Not sure what will become of HSA, but Intel's integrated GPUs will "only" be used for compute and very basic 3D rendering. AMD has a huge advantage in this area, especially if they can make iGPUs do double precision/10-bit in tandem with dedicated FirePro cards. Another area that would hurt NVidia hard.

 

As for NVidia, they will probably have to get even bigger in the professional market, or they need to help steer the industry away from x86-64 to ARM or something similar that they can be part of. Or maybe the entire industry will just go towards floating point only and scrap integer completely. Neither seems viable in the near future.

 

Intel's disadvantage in integrated graphics is waning very quickly. From an architectural standpoint they are not far behind AMD at this point, and Intel built its architecture for compute in the first place. The iGPU will probably be dedicated to physics in gaming before the end of the decade.



It would have been nice to quote parts of the posts instead of the whole damn thing.

Computing enthusiast. 
I used to be able to input a cheat code, now I've got to input a credit card - Total Biscuit
 


So Pascal will be delayed, and for the first time in a long time AMD will release their new graphics card before nVIDIA?

... Life is a game and the checkpoints are your birthdays, you will face challenges where you may not get rewarded afterwards, but those are the challenges that help you improve yourself. Always live for tomorrow because you may never know when your game will be over ... I'm totally not going insane in any way, shape or form ... I just have broken English and an open mind ...


So Pascal will be delayed, and for the first time in a long time AMD will release their new graphics card before nVIDIA?

Theoretically it's either that or Nvidia just gets the lowest binned HBM 2.0 chips to start off.



Unless I'm mistaken, the only reason AMD has the console market is because they gave the LOWEST bill to MS, Sony and Nintendo.

AMD has the console market because nVidia wasn't even interested in participating in current-gen consoles

  • 2 months later...

AMD has the console market because nVidia wasn't even interested in participating in current-gen consoles

 

nVIDIA was demanding more money from Microsoft. nVIDIA couldn't provide the sort of deal Microsoft was looking for (CPU and GPU at a low cost). Microsoft went to AMD, and AMD provided them with exactly what Microsoft was looking for: an APU.

Sony went with AMD, Nintendo as well.

It has nothing to do with nVIDIA not being interested. It has to do with nVIDIA not being able to offer what Microsoft was looking for. MS wanted them to sell their GPUs for less considering they also had to acquire CPUs. nVIDIA refused. MS went elsewhere.

The DX12/Vulkan era is going to prove just how boneheaded a move this was on nVIDIA's part, mainly because Intel will likely squeeze Tesla with Knights Landing (x86 compatible, and it doesn't require a separate host CPU). This will also push the further development of OpenCL, which will help AMD compete in this market as well.

Give it 5-8 years and nVIDIA won't be the top dog in the Supercomputing world. It will likely be Intel. If AMD survives, they'll likely surpass nVIDIA as well due to HSA and the familiarity with OpenCL which is likely to ensue.

That's what I see happening.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)


The DX12/Vulkan era is going to prove just how boneheaded a move this was on nVIDIA's part

let's not get ahead of ourselves, shall we?!

the DX12/Vulkan era is not here yet, and true, fully DX12-compatible GPUs haven't been built ... yet

 

if you look at GCN 1.0 and GCN 1.1, which ended up in the XB1 and PS4, there is a very big difference - did MS influence AMD's designs? I have no doubt

where exactly does Sony's and Mark Cerny's influence come into play? I have no idea, but they did influence it too - at the PS4's launch, Mark Cerny held a conference and presented a lot of his team's influence on the PS4's APU

 

---

 

the thing that should scare the "PC master race" because of consoles is that we are going towards low-level APIs

and let's not forget that the most interesting feature, unified memory, is still to show up


The DX12/Vulkan era is going to prove just how boneheaded a move this was on nVIDIA's part, mainly because Intel will likely squeeze Tesla with Knights Landing (x86 compatible, and it doesn't require a separate host CPU). This will also push the further development of OpenCL, which will help AMD compete in this market as well.

Give it 5-8 years and nVIDIA won't be the top dog in the Supercomputing world. It will likely be Intel. If AMD survives, they'll likely surpass nVIDIA as well due to HSA and the familiarity with OpenCL which is likely to ensue.

 

Been trying to convey this message for some time in here. Out of the big three (AMD/Intel/NVidia), the latter seems to be in the biggest trouble long term. People seem to underestimate the importance of consoles, even in PC gaming. Every single dev making games for the consoles will focus on async compute and the GCN architecture.

 

AMD can deliver everything needed for a gaming machine (especially with Zen on the horizon). Intel cannot in the gaming world, but can in the compute world. NVidia? Only graphics and weak ARM CPUs anyone can make.

 

There is a reason why NVidia is so fixated on planned obsolescence, overpriced GPUs and proprietary tech that forces vendor lock-in.

 

let's not get ahead of ourselves, shall we?!

the DX12/Vulkan era is not here yet, and true, fully DX12-compatible GPUs haven't been built ... yet

 

Async compute is already happening and several console game devs are already playing around with it. Either way, both DX12 and Vulkan should remove NVidia's driver performance advantage. Sure, NVidia are competent enough to figure something out, but they will be at a disadvantage.
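
For anyone unfamiliar with the term: in DX12/Vulkan, async compute means submitting compute work on a queue that can execute alongside the graphics queue, which GCN's hardware schedulers handle well. As a loose analogy only (this is not D3D12/Vulkan code and not anyone's actual engine), a CUDA sketch with two streams shows the same overlap idea: independent work submitted to separate queues/streams is free to run concurrently instead of serializing. The kernel names and sizes below are invented for illustration.

```cuda
// Loose analogy only: two independent kernels on two CUDA streams, standing in
// for the "graphics" and "compute" queues a DX12/Vulkan engine would use.
#include <cuda_runtime.h>

__global__ void shadeLikeWork(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = out[i] * 1.5f + 0.25f;   // stand-in for shading work
}

__global__ void computeLikeWork(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = sqrtf(out[i] + 1.0f);    // stand-in for a compute pass
}

int main() {
    const int n = 1 << 22;
    float *a = nullptr, *b = nullptr;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    // Two streams ~ two queues: work in different streams has no ordering
    // dependency, so the GPU is free to overlap it.
    cudaStream_t gfx, cmp;
    cudaStreamCreate(&gfx);
    cudaStreamCreate(&cmp);

    const int block = 256, grid = (n + block - 1) / block;
    shadeLikeWork<<<grid, block, 0, gfx>>>(a, n);
    computeLikeWork<<<grid, block, 0, cmp>>>(b, n);

    cudaStreamSynchronize(gfx);
    cudaStreamSynchronize(cmp);

    cudaStreamDestroy(gfx);
    cudaStreamDestroy(cmp);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```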

 

What do you mean by fully compatible though? Most GPUs will never fully support extensions, and they usually don't matter that much as they are niche. I do, however, expect both AMD and NVidia to support the extensions too next year.


