
Which is the best OS for 4 CPUs?

G3R489

I need some help with a couple of servers; all are quad-CPU Opteron 6380 machines running Windows Server 2008 R2.

 

Is there a better OS I should use for a PowerEdge R815?


 

 

Also, I've never tried this before on anything but Raspberry Pis, but is there a way to 'cluster' or 'stack' them at a later date? I've seen a gentleman online do a Linux-based cluster but have no clue how.

 

I have two R815s in total, another two nodes with identical specs that I was hoping to try to cluster, as well as an unrelated dual-Xeon, dual-Titan render server at the ready. All of them are connected by fibre optic (LC-LC), but if required I have a few 4-port Ethernet cards from around the same era lying about.

 



TL;DR: what is the best OS for multi-CPU systems, and is it possible to 'stack' them?

leadeater

Well, first of all, those are horribly old and slow CPUs; I'd look at selling them off and buying something newer and better that will be vastly faster.

 

As to clustering, you need to be aware that it doesn't allow a single VM, container, application, or process to span multiple systems. Clustering lets you move these easily between systems and also treat those systems as a single management entity to deploy these kinds of services onto. For example, a failover cluster can live-migrate a VM from one node to another, but that VM can never use the CPUs of two nodes at once.

 

If you have a fairly fixed requirement, like a single piece of software, then the best OS is whichever one that application is best supported on or runs best on. If you need to host many different things, then a hypervisor like ESXi, Hyper-V, or KVM (or a KVM-based derivative) would be the best choice.
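
To make the "single management entity" point concrete, here's a rough sketch of what driving a cluster like that looks like programmatically. This assumes Proxmox (KVM-based) and the third-party proxmoxer Python client; the host names and VM ID are made up:

```python
# Sketch only: managing a two-node Proxmox cluster via its API.
# pip install proxmoxer requests  -- hosts/credentials below are invented.
from proxmoxer import ProxmoxAPI

prox = ProxmoxAPI("r815-node1.lan", user="root@pam",
                  password="secret", verify_ssl=False)

# The cluster is one management entity: list every node and its status.
for node in prox.nodes.get():
    print(node["node"], node["status"])

# Live-migrate VM 100 from node1 to node2. The VM moves between hosts;
# it never runs on both at once, which is the limitation described above.
prox.nodes("r815-node1").qemu(100).migrate.post(target="r815-node2", online=1)
```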

 

We'd need to know more about your intended usage of these servers to give good advice; the general advice, however, is that those Opteron 6380s are past their practical usefulness today, so honestly they just shouldn't be used at all.

YoungBlade

The Linux family of operating systems is the best for multi-CPU and cluster setups. Period. There's a reason that every single one of the top supercomputers in the world runs Linux: it's the best for that type of computer.
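
For the curious: the way work actually spans nodes on those supercomputers is message passing between cooperating processes, not one giant OS image. A minimal sketch using MPI through the mpi4py library (assuming an MPI runtime such as Open MPI is installed on every node; the hostfile is hypothetical):

```python
# Toy MPI job: each process sums its own slice of a range, possibly on a
# different node, then rank 0 combines the partial sums. Launch with e.g.:
#   mpirun -np 16 --hostfile hosts.txt python partial_sums.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total processes across all nodes

# Split 0..9,999,999 across all ranks and sum our slice.
chunk = 10_000_000 // size
local_sum = sum(range(rank * chunk, (rank + 1) * chunk))

# Combine all partial sums on rank 0; this is the cross-node communication.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"total across {size} processes: {total}")
```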

 

If you are unfamiliar with Linux, I'd recommend starting with Ubuntu, as there is a wealth of documentation and support resources, it has better software support than most other distributions, and it's still quite stable and robust, even if it isn't the absolute best in that last regard.

Needfuldoer

You can run a hypervisor like Hyper-V, ESXi, or Proxmox. They will let you divvy the servers up into virtual machines that can be moved between nodes, but that's not exactly like gluing them together into one large machine. (You can't run one large instance over multiple nodes.)

 

I'm preemptively pressing F for your power bill, too. Each of those probably idles at somewhere around 300 watts. (You can check this inside iDRAC, but measuring at the wall with a Kill-a-Watt is more accurate.)
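
(Back-of-the-envelope: 300 W around the clock is 0.3 kW × 8,760 h ≈ 2,600 kWh per year per box, which at a hypothetical 30p/kWh works out to nearly £800 a year each.)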


SorryClaire

Just now, leadeater said:

I'd look at selling them off and buying something newer and better that will be vastly faster

A lot of 4 costs $100...


leadeater

7 minutes ago, SorryClaire said:

A lot of 4 costs $100...

I mean the entire servers; they're probably only worth about that much anyway, not that I've looked. Servers of that generation and platform are mostly only nice for looking at, getting hands-on experience with that type of hardware, or keeping around for historical and sentimental reasons. Actually try to use them for real workloads and they'll get eaten alive by modern 15W-25W CPUs. Far better used Xeon platforms can be found for very low to reasonable cost.

 

It's much like buying a classic Ferrari and trying to race it: you can get a used '90s Honda Type R and destroy it easily.

G3R489

10 minutes ago, leadeater said:

Well, first of all, those are horribly old and slow CPUs; I'd look at selling them off and buying something newer and better that will be vastly faster.

 

As to clustering, you need to be aware that it doesn't allow a single VM, container, application, or process to span multiple systems. Clustering lets you move these easily between systems and also treat those systems as a single management entity to deploy these kinds of services onto.

 

If you have a fairly fixed requirement, like a single piece of software, then the best OS is whichever one that application is best supported on or runs best on. If you need to host many different things, then a hypervisor like ESXi, Hyper-V, or KVM (or a KVM-based derivative) would be the best choice.

 

We'd need to know more about your intended usage of these servers to give good advice; the general advice, however, is that those Opteron 6380s are past their practical usefulness today, so honestly they just shouldn't be used at all.


I'm very aware of their age; these R815s came from an office-clearance place for £100 for both systems, and I paid an added £30 for 12 spare Opteron 6378s and the fibre optic connectors.

 

These machines are predominantly just test machines and part-time NAS failover servers; they won't be doing much in the way of heavy tasking. However, I was wondering if it was even possible to share loads and potentially run Cinebench between them.

 

The other two are a dual-node system just acting as a 10TB NAS for photos and videos; however, I have enough SAS SSDs lying about that I can just throw them in for the test.

 

I was mostly just curious to see if it was even possible, as I had the idea of a 256-core cluster after a few drinks the other night and thought back to my high-school Raspberry Pi project.

 

I really appreciate the help, and in all honesty I feel I've wasted your time a tad, as I was more curious than anything, so apologies for that. If it is even possible, I'd enjoy seeing these run Cinebench as a group.

 

But thank you ever so much for the reply, and I'll take the advice not to buy cheap for later projects.

G3R489

23 minutes ago, YoungBlade said:

The Linux family of operating systems is the best for multi-CPU and cluster setups. Period. There's a reason that every single one of the top supercomputers in the world runs Linux: it's the best for that type of computer.

 

If you are unfamiliar with Linux, I'd recommend starting with Ubuntu, as there is a wealth of documentation and support resources, it has better software support than most other distributions, and it's still quite stable and robust, even if it isn't the absolute best in that last regard.


Thank you for the suggestion, as my Linux experience is very lacking, in all honesty.

 

Ubuntu might have to go on one of my other machines as a test OS while I get the hang of it, at least. I've been meaning to learn about Linux, but other than the odd video and a G4 running a very cut-down version, I have no real experience using it.

leadeater

9 minutes ago, G3R489 said:

However, I was wondering if it was even possible to share loads and potentially run Cinebench between them

You can set them up as network render nodes and send render jobs to them from the Cinebench application. A single job won't span multiple nodes, but if you have many renders to run then multiple nodes will cut the total combined render time.
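
Roughly sketched, the principle looks like this (not Cinebench's actual mechanism, just the "many independent jobs across many nodes" idea; the host names and render command are placeholders):

```python
# Farm a queue of independent render jobs out to several nodes over SSH.
# A single job never spans two machines, but the queue drains much faster.
import queue
import subprocess
import threading

NODES = ["r815-a.lan", "r815-b.lan", "r815-c.lan", "r815-d.lan"]  # invented
jobs = queue.Queue()
for i in range(40):
    jobs.put(f"render --scene scene_{i:03d}.c4d")  # placeholder command

def worker(node):
    # One worker per node: pull the next job, run it there, repeat until empty.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        subprocess.run(["ssh", node, job])

threads = [threading.Thread(target=worker, args=(n,)) for n in NODES]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all jobs finished")
```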

 

9 minutes ago, G3R489 said:

I really appreciate the help, and in all honesty I feel I've wasted your time a tad, as I was more curious than anything, so apologies for that. If it is even possible, I'd enjoy seeing these run Cinebench as a group.

There is no such thing as a dumb question. Well, there is, but let's gloss over that lol. Not a waste of time at all.

 

9 minutes ago, G3R489 said:

I'll take the advice not to buy cheap for later projects

Xeon platforms on LGA2011 (E5 v1 and v2) are great and fairly reasonable cost-wise. They still have largely modern-day performance too. I have a lot of LGA1366 servers myself, but I wouldn't buy any more of them now.

 

Also, honestly, if I were offered that price for both I probably would have gotten them too. I don't know why, or what I'd do with them, but that's just adding to the collection of things I've done the exact same thing with.

G3R489

17 minutes ago, leadeater said:

I mean the entire servers; they're probably only worth about that much anyway, not that I've looked. Servers of that generation and platform are mostly only nice for looking at, getting hands-on experience with that type of hardware, or keeping around for historical and sentimental reasons. Actually try to use them for real workloads and they'll get eaten alive by modern 15W-25W CPUs. Far better used Xeon platforms can be found for very low to reasonable cost.

 

It's much like buying a classic Ferrari and trying to race it: you can get a used '90s Honda Type R and destroy it easily.

 

 

25 minutes ago, SorryClaire said:

A lot of 4 costs $100...


In all honesty, as mentioned in another comment, I got 12 for £30 with some extras.

 

I very much like the Ferrari vs Honda analogy; however, these old Ferraris are sentimental, as I bought my first Opteron system as a project knowing they were the beating heart of the Titan supercomputer from 2013. Since then, I've just used them for testing and as 'collection pieces', really only driving them occasionally.

G3R489

9 minutes ago, leadeater said:

You can set them up as network render nodes and send render jobs to them from the Cinebench application. A single job won't span multiple nodes, but if you have many renders to run then multiple nodes will cut the total combined render time.

 

There is no such thing as a dumb question. Well, there is, but let's gloss over that lol. Not a waste of time at all.

 

Xeon platforms on LGA2011 (E5 v1 and v2) are great and fairly reasonable cost-wise. They still have largely modern-day performance too. I have a lot of LGA1366 servers myself, but I wouldn't buy any more of them now.

 

Also, honestly, if I were offered that price for both I probably would have gotten them too. I don't know why, or what I'd do with them, but that's just adding to the collection of things I've done the exact same thing with.


That's a very smart idea. I'll try to run them as render nodes for now, until I can learn more about this Hyper-V setup.

 

And in regard to Xeons, I have a few very mismatched E5s and 10 or so X5560s, but yet again very old units, and I probably wouldn't buy many now either.

 

And it was the same with me; for that price it's nothing to sneer at. The old rack was filled up by that company, I'll tell you that much: systems with 4790Ks for £50 that I've since stuck Gumtree 2060s and SSDs in and sold for £500 to a few mates as cheap gaming PCs.

 

And thank you, it's true there's no such thing as a dumb question; however, as you'll see by my PC in a toaster, I'm full of dumb ideas, so odd questions do have to be asked.

 

G3R489

48 minutes ago, Needfuldoer said:

You can run a hypervisor like Hyper-V, ESXi, or Proxmox. They will let you divvy the servers up into virtual machines that can be moved between nodes, but that's not exactly like gluing them together into one large machine. (You can't run one large instance over multiple nodes.)

 

I'm preemptively pressing F for your power bill, too. Each of those probably idles at somewhere around 300 watts. (You can check this inside iDRAC, but measuring at the wall with a Kill-a-Watt is more accurate.)


I'll look into that, thank you very much for the suggestion. I don't expect to glue them together, but at least to share a few resources where possible.

 

I will dread it, but my power bill is a killer either way with other experiments, so trust me, I'm used to it.

However, I will go into iDRAC and cry in a little bit.

G3R489

1 minute ago, G3R489 said:


I'll look into that, thank you very much for the suggestion. I don't expect to glue them together, but at least to share a few resources where possible.

 

I will dread it, but my power bill is a killer either way with other experiments, so trust me, I'm used to it.

However, I will go into iDRAC and cry in a little bit.

Unaware how it’s doing this…

[attached screenshot]

Needfuldoer

1 hour ago, leadeater said:

Xeon platforms on LGA2011 (E5 v1 and v2) are great and fairly reasonable cost-wise. They still have largely modern-day performance too. I have a lot of LGA1366 servers myself, but I wouldn't buy any more of them now.

12th-gen Dell PowerEdge servers (Rx20) are pretty much at the price/performance sweet spot now. (HP Gen8s are relatively affordable too, but they've been putting more and more of their updates and drivers behind a support-contract paywall.) I replaced a bunch of older machines in my homelab with four of those (one R720xd storage server and three R620 compute nodes); no regrets so far.

 

Just check the service tag of any used servers you pick up to see which CPUs they came with; older examples need a BIOS update to use v2s (and that can be a process if they're running version 1.xx.xx). Honestly, I wouldn't run anything older than that; LGA1366 is getting long in the tooth, and forget about LGA771.


leadeater

9 minutes ago, Needfuldoer said:

HP Gen8s are relatively affordable too, but they've been putting more and more of their updates and drivers behind a support-contract paywall

The only things they put behind the contract or warranty requirement are the System BIOS and the Service Pack for ProLiant ISO. I can't think of any others I've seen that require a login to get the download. Not really a problem for me, since I do have an account.

 

Also, in general, the Gen8s can be hit or miss: there was a known bug that caused the iLO flash media to fail prematurely, and while an iLO firmware update was released to correct it, many didn't apply it or did so too late (myself included, RIP). So while I very much prefer HPE myself, the caution around buying Gen8s needs to be understood; you could get a lemon.

G3R489

46 minutes ago, Needfuldoer said:

12th-gen Dell PowerEdge servers (Rx20) are pretty much at the price/performance sweet spot now. (HP Gen8s are relatively affordable too, but they've been putting more and more of their updates and drivers behind a support-contract paywall.) I replaced a bunch of older machines in my homelab with four of those (one R720xd storage server and three R620 compute nodes); no regrets so far.

 

Just check the service tag of any used servers you pick up to see which CPUs they came with; older examples need a BIOS update to use v2s (and that can be a process if they're running version 1.xx.xx). Honestly, I wouldn't run anything older than that; LGA1366 is getting long in the tooth, and forget about LGA771.

Thank you for the recommendation; to be honest I might have to look for one. However, as I'm moving house, money's a tad tighter right now, so it will be a while before I get a new server. When I'm ready, if you're still active, I'd love some advice at a later date.

 

34 minutes ago, leadeater said:

The only things they put behind the contract or warranty requirement are the System BIOS and the Service Pack for ProLiant ISO. I can't think of any others I've seen that require a login to get the download. Not really a problem for me, since I do have an account.

 

Also, in general, the Gen8s can be hit or miss: there was a known bug that caused the iLO flash media to fail prematurely, and while an iLO firmware update was released to correct it, many didn't apply it or did so too late (myself included, RIP). So while I very much prefer HPE myself, the caution around buying Gen8s needs to be understood; you could get a lemon.

But thank you both for the help, as I really appreciate this, and either way thank you for the warnings to look out for on future servers.

 

Have a great rest of your day wherever you are, and thank you for all these suggestions, as they've really helped me out.


On 2/4/2022 at 8:34 AM, YoungBlade said:

The Linux family of operating systems is the best for multi-CPU and cluster setups. Period. There's a reason that every single one of the top supercomputers in the world runs Linux: it's the best for that type of computer

 

 

Pretty sure this comment is being made by somebody who's never set up a Type 1 hypervisor, nor understands the architectural and application differences between a 'supercomputer' and a classic enterprise x86 stack. Uh, period.

 

I was running WinFrame (NT 3.51) on quad processors back in '98, alongside supported Unix and AS/400 systems. Even today, Windows Hyper-V Server (Core) matches or bests VMware and the other pseudo-Linux-based hypervisors in many raw benchmarks, other than those with massively large memory pools. Simply put, an operating system cannot make an application more multithreaded or efficient than it was coded to be. Putting a hypervisor in front of things doesn't change the fact that running an application is the goal.

 

I've stated many times here, and I will state it again, that for the most part used x86 servers are junk that need a boat and a rope. I've been deploying them for over two decades. We've seen such a massive improvement in per-core efficiency over the past 2-3 years that it makes that old iron worthless for anything other than generating heat. Some older Xeons have massive core counts that can do some actual work with highly multithreaded workloads like video encoding, but that's it.

 

Some of these new i5s at $200, mounted on a budget motherboard, will utterly humiliate most of the servers bantered about here: double or triple the per-core efficiency in some cases, plus AMD with their massive PCIe lane improvements, etc. Your app, whatever it is, will get more done on a $150 refurbished i7 than on many old dual-socket Xeon boxes.

 

There is nothing special about old x86 servers other than having ECC/registered memory and being loud. Xeons and Opterons don't run code better or faster. I can't tell you how many times I've exported a production VM off an older 'server', loaded it up on a newer desktop PC that cost a fraction of the price, and seen a massive performance increase.

 

OP needs to quantify what he's running and what it needs to run optimally. There's also the rabbit hole of clustering at the hypervisor vs. storage vs. application layer (think hypervisor HA and live migration, versus replicated storage like Ceph or DRBD, versus something like SQL Server Always On at the application level).

 

 

Needfuldoer

But we're not talking about a production environment, where we have to accomplish things and store important data. We just want to putz around with some VMs in Proxmox. If a ruthlessly economical PC will do, so can old server junk!

 

Older server parts are ludicrously cheap for what you can get out of them. You can put together a server or workstation with dual E5-2680 v2s (totaling 20 cores, 40 threads), 64 GB of RAM, and a few 600 GB 10k SAS drives for around $350. Just that much name-brand DDR4 would cost $250 new.

 

I put together an R620 with those parts (but 128 GB of RAM and a 240 GB boot SSD), and it idles around 120 watts. Is it half as fast per clock as an i5-11600K? Of course it is. But it's fast enough for what I wanted it to do, and it was a quarter the price of assembling an equivalent whitebox from new parts. It will do for what I want to run until 14th-gen (Xeon Scalable) servers start phasing out of production use and hit the secondary market at reasonable prices.

 

Of course, there's a point where this no longer makes sense. I wouldn't touch anything older than Sandy Bridge on the Intel side, or Zen on AMD. The last R710 I used (dual X5675s, five 3 TB drives, 16 GB of DDR3-1066) absolutely wasn't worth running anymore, and the PowerEdge 2950 (with Core 2-era Xeons and DDR2 FB-DIMMs) has been a joke on /r/homelab for years.


leadeater

16 hours ago, Needfuldoer said:

E5-2680 v2s

Realistically these are only "significantly slow" now that 12th Gen is out. Heck, on the Xeon side some workloads were slower on Skylake-SP than on Broadwell-EP (E5 v4). There's a middle-road balance where server parts make a lot more sense, and that's if you want to do more than run a basic Plex server and a NAS. If you can find something used on a consumer platform, it'll be faster, quieter, and easier for the general enthusiast to work with.

 

Mid-range and higher-end needs are only now changing in this regard, since the gap in core counts between consumer desktop and server platforms isn't so vast anymore, and the maximum possible RAM is now much larger.

 

I used to run two entire AD domains with file server VMs, Exchange, SCCM/WDS, and SQL with MSCS, plus a few other things I forget. Easily over 10 VMs per simulated site, so 20+ in total. Long story short, achieving this on a consumer platform before Zen 2 would have been impossible on two counts: vCPU:pCPU contention and host system RAM. I ran all of this on a single server.
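
(Quick math on the contention point: 20+ VMs at, say, 4 vCPUs each is 80+ vCPUs. On an 8-thread consumer CPU of that era that's a 10:1 vCPU:pCPU ratio, versus roughly 2.5:1 on a dual-socket 16-core server. The per-VM sizing is illustrative, but that's the problem in a nutshell.)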

 

Plus I literally have multiple full anti-static bags of DDR3 RDIMMs in 4GB/8GB/16GB sizes.

 

Nor are modern(ish) Gen8 and newer servers all that power-hungry: a dual-CPU Gen8 without disks will idle below 100W, and basically every "gamer PC" will idle above that due to aggressive BIOS defaults. Obviously you can fix that, but you have to know to do it.

