
Asrock Rack and SuperMicro announce Xeon Phi systems

16 minutes ago, IvantheDugtrio said:

The wrapper for the pipeline is Python, but the actual HPC code is a mix of C and Java. I'll be working primarily on the C code.

 

I'll also see if I can configure the pipeline to run certain tools on specific nodes of this asymmetrical TORQUE cluster (ported C tools on the Xeon Phi, everything else on the regular Xeons).

Even though it's a Python wrapper, you have to use double the memory and spend the time massaging the data back and forth. For Pete's sake why is the bioinformatics community still dealing with the shit language (for HPC at least) that is Python in the face of C++ 11, let alone 14?! And Java is not much better, but at least it can natively pass data to C/C++ routines without copying.
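For context, one common way Java hands data to native code without copying is JNI with a direct ByteBuffer: the native side asks for the buffer's address and reads it in place. A minimal sketch, assuming a hypothetical bio.NativeSum Java class (none of these names come from the pipeline discussed here):

```cpp
// Hypothetical JNI glue: the Java caller allocates a direct ByteBuffer and passes
// it to this C++ function, which reads the bytes in place -- no copy is made.
#include <jni.h>
#include <cstdint>

// Assumed Java side: package bio; class NativeSum { static native long sum(java.nio.ByteBuffer buf, int n); }
extern "C" JNIEXPORT jlong JNICALL
Java_bio_NativeSum_sum(JNIEnv* env, jclass, jobject buf, jint n)
{
    // For a direct ByteBuffer this returns a pointer into the Java-side allocation
    // itself, so the payload is never duplicated in native memory.
    auto* data = static_cast<const std::uint8_t*>(env->GetDirectBufferAddress(buf));
    if (data == nullptr) return -1;  // non-direct buffer or access failure

    jlong total = 0;
    for (jint i = 0; i < n; ++i) total += data[i];
    return total;
}
```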

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


35 minutes ago, patrickjp93 said:

Even though it's a Python wrapper, you have to use double the memory and spend the time massaging the data back and forth. For Pete's sake why is the bioinformatics community still dealing with the shit language (for HPC at least) that is Python in the face of C++ 11, let alone 14?! And Java is not much better, but at least it can natively pass data to C/C++ routines without copying.

I mean.... the nuclear community is still dealing with a massive proportion of Fortran code....

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


4 minutes ago, Curufinwe_wins said:

I mean.... the nuclear community is still dealing with a massive proportion of Fortran code....

Fortran was designed for high-performance scientific computing. It's just not nice to work with in a modern way, where coding can be done much faster in C++ with the same or better efficiency. That's why Intel and IBM still provide Fortran compilers for existing codebases.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


11 minutes ago, patrickjp93 said:

Fortran was designed for high-performance scientific computing. It's just not nice to work with in a modern way, where coding can be done much faster in C++ with the same or better efficiency. That's why Intel and IBM still provide Fortran compilers for existing codebases.

I am well aware. Also by Fortran, I mean FORTRAN 77. I have the 80 pounds of original punch card code sitting around for one of those programs.

 

Unfortunately the desire to maintain perfect backwards compatibility makes moving to a new language a bit more of a large upfront time investment. And most of these codes aren't actively maintained by more than a single person.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


42 minutes ago, Curufinwe_wins said:

I am well aware. Also by Fortran, I mean FORTRAN 77. I have the 80 pounds of original punch card code sitting around for one of those programs.

 

Unfortunately the desire to maintain perfect backwards compatibility makes moving to a new language a bit more of a large upfront time investment. And most of these codes aren't actively maintained by more than a single person.

Bring in a newbie who loves C++ and put them to work doing code conversion. I just spent a week porting 2500 lines of Java to C++. It can be done.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


5 minutes ago, patrickjp93 said:

Bring in a newbie who loves C++ and put them to work doing code conversion. I just spent a week porting 2500 lines of Java to C++. It can be done.

We have a little over 100,000 lines of Fortran to convert (for that program I mentioned), and backwards compatibility with previous inputs literally needs to be flawless.

 

I'm sure it's doable, just no one wants to devote resources to it... Since technically speaking it still works...

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


18 minutes ago, Curufinwe_wins said:

We have a little over 100,000 lines of Fortran to convert (for that program I mentioned), and backwards compatibility with previous inputs literally needs to be flawless.

 

I'm sure it's doable, just no one wants to devote resources to it... Since technically speaking it still works...

If by inputs you mean command line arguments, that's already there. If it's a matter of file formats being read, you can customize a stream iterator for that. If anything else, it's exactly the same.
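A minimal sketch of the stream-iterator idea, assuming a made-up whitespace-delimited legacy format (LegacyRecord and input.dat are illustrative, not any real Fortran input):

```cpp
// Sketch: give the legacy record type an operator>>, then std::istream_iterator
// can walk an existing input file without changing the file format at all.
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

struct LegacyRecord {      // hypothetical layout: id, name, value per record
    int id;
    std::string name;
    double value;
};

std::istream& operator>>(std::istream& in, LegacyRecord& r) {
    return in >> r.id >> r.name >> r.value;   // mirror the old reader's parsing rules here
}

int main() {
    std::ifstream in("input.dat");            // hypothetical legacy input file
    std::vector<LegacyRecord> records(std::istream_iterator<LegacyRecord>{in},
                                      std::istream_iterator<LegacyRecord>{});
    std::cout << "read " << records.size() << " records\n";
}
```

For fixed-width FORTRAN 77 card images the operator>> body would do the column slicing instead, but the iterator plumbing stays the same.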

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


On 9/16/2016 at 7:53 PM, patrickjp93 said:

Even though it's a Python wrapper, you have to use double the memory and spend the time massaging the data back and forth. For Pete's sake why is the bioinformatics community still dealing with the shit language (for HPC at least) that is Python in the face of C++ 11, let alone 14?! And Java is not much better, but at least it can natively pass data to C/C++ routines without copying.

Lol the wrapper is just passing commands and logging output. The performance impact isn't there. Python is used as an easy way to manage the versions of the actual tools and libraries used via anaconda. The wrapper also provides a convenient way to use cluster computing for all of the programs run under it. All of the real HPC is done by the C, C++, and Java programs. 

 

Java is used in HPC for its portability across different architectures. The real performance comes from the JRE and its ability to efficiently use system resources. Some of our Java-based tools such as the GATK support AVX instructions for further acceleration. Using Java also cleans up the code a lot compared with C++ for complex programs such as variant callers that have to recognize a multitude of mutations in samples. I'm not aware of any C-based variant caller out there simply because of code complexity reasons. The tools that do use C/C++ are the aligners, compression/decompression tools, and stream editors that rely on a single algorithm for processing data. 


10 minutes ago, IvantheDugtrio said:

Lol the wrapper is just passing commands and logging output. The performance impact isn't there. Python is used as an easy way to manage the versions of the actual tools and libraries used via anaconda. The wrapper also provides a convenient way to use cluster computing for all of the programs run under it. All of the real HPC is done by the C, C++, and Java programs. 

 

Java is used in HPC for its portability across different architectures. The real performance comes from the JRE and its ability to efficiently use system resources. Some of our Java-based tools such as the GATK support AVX instructions for further acceleration. Using Java also cleans up the code a lot compared with C++ for complex programs such as variant callers that have to recognize a multitude of mutations in samples. I'm not aware of any C-based variant caller out there simply because of code complexity reasons. The tools that do use C/C++ are the aligners, compression/decompression tools, and stream editors that rely on a single algorithm for processing data. 

That's not true. Run the program in raw C++ and you'll see a difference in memory usage of nearly a factor of two.

 

The portability argument is BS. You can compile C++ on any architecture as long as you don't use OS-specific libraries or system calls or do inline assembly for the wrong architecture. And JREs and efficiency should never be in the same sentence. They're never efficient.

 

HAHAHAHA! Cleans up the code? BS. You can write the same program in fewer lines and fewer words per line using C++ 11 and 14.

 

I don't know what you mean by variant callers, but if you mean overloaded function calls with varying arguments, that can be done easily using variadic templates in C++11 and 14.
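A minimal C++11 sketch of that variadic-template idea (log_fields is made up for illustration, not from any real variant caller):

```cpp
// One variadic template replaces a family of overloads that differ only in
// how many fields they take and what their types are.
#include <iostream>

// Base case: nothing left to print.
inline void log_fields(std::ostream&) {}

// Recursive case: print the first field, then recurse on the rest.
template <typename T, typename... Rest>
void log_fields(std::ostream& out, const T& first, const Rest&... rest) {
    out << first;
    if (sizeof...(rest) > 0) out << '\t';
    log_fields(out, rest...);
}

int main() {
    // Any number and mix of printable arguments goes through the same template.
    log_fields(std::cout, "chr1", 12345, "A", "G", 0.997);
    std::cout << '\n';
}
```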

 

All of your tools should be in C++ both for productivity and speed reasons.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Wasn't Intel's Xeon Phi a reply to Nvidia's supercomputing GPUs?

For those who know this market, I wonder how the two are doing in terms of market share.


3 hours ago, lukart said:

Wasn't Intel's Xeon Phi a reply to Nvidia's supercomputing GPUs?

For those who know this market, I wonder how the two are doing in terms of market share.

Intel is stealing some market share, but the majority of it is still in Nvidia's hands. And the history of the Xeon Phi is a bit more complicated than that. Project Larrabee started out as Intel's attempt to enter the dGPU market. It licensed a ton of tech from Nvidia and ran a collaborative project, spending more money on GPU research than Nvidia and AMD/ATI combined had since they were founded. A couple of months before launch was due, Nvidia saw what a threat it would be and yanked a number of fundamental patents, buying out of parts of the contract. That basically made it impossible to sell the product as-is. Intel quickly pivoted and turned it into an x86-native coprocessor.

 

Now with Knights Landing you can get it as a coprocessor or as a host CPU. The compute density per node is much higher than that of any IBM, AMD, or ARM system.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


3 weeks later...

CPU size comparison and new type of mounting

https://www.servethehome.com/big-sockets-look-intel-lga-3647/

 

 

Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 


On 24.8.2016 at 6:48 AM, Ryan_Vickers said:

That is a massive socket...

Eh, G34 was bigger (or is at least the same size :P)

 

Socket_G34.jpg

RyzenAir : AMD R5 3600 | AsRock AB350M Pro4 | 32gb Aegis DDR4 3000 | GTX 1070 FE | Fractal Design Node 804
RyzenITX : Ryzen 7 1700 | GA-AB350N-Gaming WIFI | 16gb DDR4 2666 | GTX 1060 | Cougar QBX 

 

PSU Tier list

 

