
Best Budget Server Stack For Performance

I am building my own local server hosting company, and I need advice on which servers to stick with for the long haul. I don't have much money, but I'm starting a new job soon and don't have many expenses, so I'll be able to put most of my income into this. Still, my budget is limited, so I am mostly looking into older-generation servers. I specifically want to go with Dell PowerEdge servers because of the features of their iDRAC management system.

I have decided to go with PowerEdge R430s, as they are pretty affordable for their power. I can source them for $200 for a barebones system, or $400-500 for a system with one or two low-end CPUs and a small amount of RAM. I plan on mostly maxing out these servers (40 cores with about 192-256GB of RAM), so the parts that come in them will likely be sold immediately or used only until I can afford to upgrade. Again, I am starting from scratch, so I won't need serious computing power until a specific customer requires it and is willing to pay the premium.

 

I have a few questions:

Is going with servers this old a good idea? It lets me start at a time when I otherwise couldn't, and I believe they offer comparable compute power and capability to a slightly newer server.

A full-size rack costs about as much as an upgraded R430, so for now the servers will sit on the floor in a well-ventilated area. At how many servers should I buy a rack? I was thinking before I buy my fourth one, or by the time I need a large switch.

I am planning on ordering a rackmount switch, a PDU, an APC UPS, and maybe a cheap KVM switch (Dell-branded, with adapters that carry USB and VGA over Ethernet cable). Do I need any other equipment for now? (I have a modem and router, of course.)

At what point should I buy a dedicated backup server, and would it be okay if it were cheaper and a bit older? Until then, I think I will run two separate arrays: one with high-capacity HDDs and one with mid-capacity SSDs. The servers have ten 2.5" drive bays, which would let me run something like 8 2TB SSDs and 2 8TB HDDs.

I am planning on upgrading each server to 40 cores with 192-256GB of RAM. Should I instead buy more servers with more modest specs (e.g., two with 20 cores and 128GB of RAM instead of one with 40 cores and 256GB)? This could also save money, since I can buy about three 10-core CPUs for the price of one 20-core, and lower-capacity DIMMs are cheaper too.
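One way to frame the "one big box vs. two smaller boxes" question is raw cost per core. A minimal back-of-the-envelope sketch, where every price except the $200 barebones figure from the post is a hypothetical placeholder to be replaced with real eBay quotes:

```python
# Back-of-the-envelope cost comparison: one maxed-out R430 vs two smaller ones.
# All prices except the $200 barebones figure are hypothetical placeholders.
barebones  = 200    # R430 chassis with no CPU/RAM (figure from the post)
cpu_20c    = 300    # assumed price of one 20-core Xeon
cpu_10c    = 100    # assumed ~3:1 price ratio vs the 20-core, per the post
ram_per_gb = 1.0    # assumed $/GB for DDR4 ECC RDIMMs

def build_cost(chassis, num_cpus, cpu_price, ram_gb):
    """Total cost of one dual-socket build."""
    return chassis + num_cpus * cpu_price + ram_gb * ram_per_gb

big   = build_cost(barebones, 2, cpu_20c, 256)      # one 40-core / 256GB box
small = 2 * build_cost(barebones, 2, cpu_10c, 128)  # two 20-core / 128GB boxes

print(f"one big box:     ${big:.0f} total, ${big / 40:.2f}/core")
print(f"two small boxes: ${small:.0f} total, ${small / 40:.2f}/core")
```

With these placeholder numbers the cheaper CPUs are exactly offset by the cost of the second chassis, which is why plugging in real quotes (and power/space costs) matters before deciding.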

 

I know this is a rambly post, but finally: please don't respond with something like "other people are better at this" or "this will never take off." I want to do this, it is my money, and hearing that would only make me less willing to admit failure if it happens anyway.


2 minutes ago, NPDPdev said:

I am building my own local server hosting company,

Why? Companies like Amazon, Google, and Microsoft will do this better than you in almost all cases: better support, more security, more system options, better uptime. Who would your customers be?

 

3 minutes ago, NPDPdev said:

Is going with servers so old a good idea?

Those really aren't that old; there aren't many risks with a Haswell/Broadwell-generation system.

 

4 minutes ago, NPDPdev said:

At how many servers should I probably buy a rack? I was thinking before I bought my 4th one, or by the time I needed to buy a large switch.

You're probably going to be power- and cooling-limited here.

 

Also, check that your floor can handle the weight.

 

5 minutes ago, NPDPdev said:

At what point should I buy a dedicated backup server,

What data are you backing up, customer data or your own? For services like AWS, instances aren't backed up by default; you have to manage how each instance is backed up. Most of them are auto-deployed, so you may not need any backups at all.

 

6 minutes ago, NPDPdev said:

The servers have ten 2.5" drive bays, which would let me run something like 8 2TB SSDs and 2 8TB HDDs.

I'd set up shared network storage. What hypervisor are you using? You'd probably use something like Storage Spaces Direct, vSAN, Ceph, or similar here.
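Worth noting that shared storage with redundancy eats a big chunk of raw capacity. A rough sketch of the math, assuming a hypothetical three-node pool built from the post's 8 x 2TB SSD layout per node:

```python
# Usable capacity after redundancy overhead for a small shared-storage pool.
# Assumes 3 nodes, each with the post's 8 x 2TB SSD layout (hypothetical cluster).
nodes, ssds_per_node, ssd_tb = 3, 8, 2
raw_tb = nodes * ssds_per_node * ssd_tb   # total raw capacity

replica3 = raw_tb / 3                     # Ceph-style default: 3 copies of each object
ec_4_2   = raw_tb * 4 / (4 + 2)           # erasure coding, k=4 data + m=2 parity
                                          # (EC 4+2 really wants 6+ failure domains;
                                          #  shown here only for the overhead math)

print(f"raw: {raw_tb} TB | 3x replication: {replica3:.0f} TB | EC 4+2: {ec_4_2:.0f} TB")
```

In other words, 48 TB raw becomes about 16 TB usable under triple replication, so the drive budget should be sized against usable, not raw, capacity.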


What network connection do you have? You probably want a dedicated internet connection (not cheap; think a few thousand a month for a gigabit link).

 

How are you handling IPs? IPv4 addresses aren't cheap these days.

 

What are you using for routing?

 

How will users manage this? There are many setups for a DIY cluster that let users fire up VMs as needed and bill them accordingly.
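The billing side of that can start very simply. A minimal sketch of per-hour VM metering, where the rates, the `VM` class, and the `invoice` helper are all made up for illustration:

```python
# Minimal sketch of hourly VM billing; rates are assumed, not real prices.
from dataclasses import dataclass

CORE_HOUR = 0.01   # assumed $/core-hour
GB_HOUR   = 0.005  # assumed $/GB-hour of RAM

@dataclass
class VM:
    cores: int
    ram_gb: int
    hours: float   # runtime during the billing period

def invoice(vms):
    """Sum each VM's runtime cost across the billing period."""
    return sum(v.hours * (v.cores * CORE_HOUR + v.ram_gb * GB_HOUR) for v in vms)

total = invoice([VM(cores=4, ram_gb=16, hours=720),   # one month, always on
                 VM(cores=2, ram_gb=8,  hours=100)])  # burst workload
print(f"${total:.2f}")
```

A real setup would pull the usage numbers from the hypervisor's API instead of hardcoding them, but the pricing model itself is just this kind of rate-times-usage sum.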


