
New small office server aiming for speed first, bulk second.

FAHEYGF87

Budget (including currency): $3,000 USD. Includes server, 4-post rack, and rack-mounted UPS.

Country: USA

 

ASUS AM4 TUF Gaming X570 (purely for the extra PCIe lanes)

Ryzen 7 3700X

Noctua NH-D15

16GB Crucial Ballistix 3600 (2x8GB)

Zotac GeForce GT 710 1GB (just to have an output)

Seasonic FOCUS Plus 750 Gold

 

Samsung 980 Pro 250GB x6 (2 in RAID 1 on the motherboard for the OS, 4 in RAID 10 on the HighPoint card)

HighPoint SSD7505

Seagate IronWolf 8TB NAS x2 (RAID 1, for nightly backups; rough backup sketch below the list)

 

Rosewill 4U Server (RSV-L4500)

Noctua NF-F12 iPPC 3000 RPM 120mm x6

 

Rough total: ~$2,800
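A rough sketch of what the nightly backup to the IronWolf mirror could look like; every path, mount point, and the retention count below are placeholders, and it assumes the database is quiesced or dumped before the copy:

```python
# Rough nightly-backup sketch: copy the database directory to a dated folder
# on the IronWolf RAID 1 mirror and prune old snapshots. All paths and the
# retention count are placeholders, not details from the build above.
import datetime
import pathlib
import shutil

SRC = pathlib.Path("/mnt/nvme/database")   # assumed location of the live data
DST = pathlib.Path("/mnt/backup")          # assumed mount point of the mirror
KEEP = 14                                  # nightly snapshots to retain

stamp = datetime.date.today().isoformat()
shutil.copytree(SRC, DST / f"db-{stamp}")

# Prune snapshots beyond the retention window (oldest first by dated name).
for old in sorted(DST.glob("db-*"))[:-KEEP]:
    shutil.rmtree(old)
```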

 

__________________________________________________________________________

 

Other stuff to go along with this project:

 

Raising Electronics Server Rack, 4 Post, 15U, 36 Inch Height

 

Rails

 

APC 1500VA Smart UPS SMC1500-2UC

 

Rough total: ~$800

 

 

GRAND TOTAL: ~$3,600

 

__________________________________________________________________________

 

We already have a 10Gb switch, a 10Gb NIC on the current server that will move to the new one, and a 5Gb NIC on each of the workstations.

 

__________________________________________________________________________

 

Being that this will be my first rack build, does anybody see anything obviously wrong with it or any glaring omissions that I may have made?

 

 

 

 

 


Why get 4 SSDs in RAID 0? You're going to be network limited anyway; I'd just get a single 1TB SSD.

 

What OS do you plan on using?

 

I'd personally just use a single cheaper boot drive; speed won't matter, and SSDs don't fail that often.

 

6 minutes ago, FAHEYGF87 said:

, and a 5Gb NIC on each of the workstations.

I'd just go 10GbE on the workstations; the price is about the same.

 

 

I'd personally just get a used rack server. Much better build quality, cheaper, and many more PCIe lanes. I'd get something like a used Dell R720 or R730 if this were my setup.
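Rough numbers on why the NVMe array won't matter over the network (ballpark spec-sheet figures, not measurements from this build):

```python
# Back-of-the-envelope bottleneck check using rated figures, not measurements.
link_gbit_s = 10                   # 10GbE link
link_gbyte_s = link_gbit_s / 8     # ~1.25 GB/s ceiling before protocol overhead
nvme_gbyte_s = 6.4                 # roughly the rated seq. read of a 250GB 980 Pro

print(f"Network ceiling: {link_gbyte_s:.2f} GB/s")
print(f"Single 980 Pro:  {nvme_gbyte_s:.1f} GB/s")
print(f"One drive already outruns the link by ~{nvme_gbyte_s / link_gbyte_s:.0f}x")
```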


3 minutes ago, Electronics Wizardy said:

Why get 4 SSDs in RAID 0? You're going to be network limited anyway; I'd just get a single 1TB SSD.

 

What OS do you plan on using?

 

I'd personally just use a single cheaper boot drive; speed won't matter, and SSDs don't fail that often.

 

I'd just go 10GbE on the workstations; the price is about the same.

 

 

I'd personally just get a used rack server. Much better build quality, cheaper, and many more PCIe lanes. I'd get something like a used Dell R720 or R730 if this were my setup.

Yes, I do wish I'd gone with 10Gb from the get-go.

 

I had considered getting a refurbished prebuilt. Do you think the read speeds would be as good, though? I guess it's sort of irrelevant if the bottleneck is still going to be the NIC, even at 10Gb. That might be a good suggestion.


1 minute ago, FAHEYGF87 said:

Yes, I do wish I'd gone with 10Gb from the get-go.

 

I had considered getting a refurbished prebuilt. Do you think the read speeds would be as good, though? I guess it's sort of irrelevant if the bottleneck is still going to be the NIC, even at 10Gb. That might be a good suggestion.

You don't need much to copy files over the network at 10Gb. I have an old dual Xeon E5-2680 v2 system that can fill 10GbE from an NVMe SSD without issues.

 

What are your IOPS requirements? 500GB seems low here, as it's going to fill up fast. I got a 3.2TB SSD for similar uses and it's working well.
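If you want an actual number for that, one quick way is to sample the kernel's per-device I/O counters on the current box during a busy stretch and diff them. This assumes the existing server runs Linux; on Windows, Performance Monitor's "Disk Transfers/sec" counter gives the same figure. The device name and interval below are placeholders:

```python
# Rough read/write IOPS of the current database drive: sample /proc/diskstats
# twice and diff the completed-I/O counters. Device name and interval are
# placeholders; run it while the office is actually busy.
import time

DEV = "nvme0n1"    # assumed device name of the existing Crucial P1 (check lsblk)
INTERVAL = 60      # seconds

def io_counts(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                # field 4 = reads completed, field 8 = writes completed
                return int(fields[3]), int(fields[7])
    raise ValueError(f"device {dev} not found in /proc/diskstats")

r0, w0 = io_counts(DEV)
time.sleep(INTERVAL)
r1, w1 = io_counts(DEV)
print(f"~{(r1 - r0) / INTERVAL:.0f} read IOPS, ~{(w1 - w0) / INTERVAL:.0f} write IOPS")
```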


2 minutes ago, Electronics Wizardy said:

You don't need much to copy files over the network at 10Gb. I have an old dual Xeon E5-2680 v2 system that can fill 10GbE from an NVMe SSD without issues.

 

What are your IOPS requirements? 500GB seems low here, as it's going to fill up fast. I got a 3.2TB SSD for similar uses and it's working well.

I'm currently using a 500GB Crucial P1, and it's only about half full. The actual database is only about 100GB in total. I'm just looking for a solution to reduce the loading times for customer data. I absolutely cannot stand it when you've got a customer on the phone and you've got to wait for their account information to load.

 

I'll spare you the details, but some of these accounts have up to 100 addresses in them, each address with its own details and route pictures/route events; some have video clips, and some have data spanning back to 1999. All of that has to be loaded on screen before you can do anything within the account.


6 minutes ago, FAHEYGF87 said:

I'm currently using a 500GB Crucial P1, and it's only about half full. The actual database is only about 100GB in total. I'm just looking for a solution to reduce the loading times for customer data. I absolutely cannot stand it when you've got a customer on the phone and you've got to wait for their account information to load.

 

I'll spare you the details, but some of these accounts have up to 100 addresses in them, each address with its own details and route pictures/route events; some have video clips, and some have data spanning back to 1999. All of that has to be loaded on screen before you can do anything within the account.

Is this database being accessed by all the workstations at once? 

 

It seems cheaper and easier to just put the SSDs in the workstations.


1 hour ago, Electronics Wizardy said:

Is this database being accessed by all the workstations at once? 

 

It seems cheaper and easier to just put the SSDs in the workstations.

Yes.

 

They all have SSDs in them already, but they are all reading and writing to the database during every interaction with customers. Centralized storage is a must.


46 minutes ago, FAHEYGF87 said:

Yes.

 

They all have SSDs in them already, but they are all reading and writing to the database during every interaction with customers. Centralized storage is a must.

How much bandwidth vs. IOPS are you using?

 

Just saying, normally you only have one program using the database at once, and then the clients talk to the server. Normally you don't want multiple clients touching one database file on shared storage. Just make sure this will work fine.
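For reference, the usual layout looks something like this: the database engine runs on the server and owns its files, and the workstations only send queries over the network. PostgreSQL, the hostname, credentials, and the table here are pure placeholders, since the thread doesn't say what the customer-management software actually runs on:

```python
# Hypothetical client/server access pattern: the engine on the server owns the
# database files; workstations connect over the network and send queries.
# Host, database, credentials, and schema are made-up placeholders.
import psycopg2  # assumes a PostgreSQL server; the real software may differ

conn = psycopg2.connect(
    host="office-server",   # the new box
    dbname="customers",
    user="frontdesk",
    password="change-me",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT address, details FROM addresses WHERE account_id = %s",
        (1234,),
    )
    for address, details in cur.fetchall():
        print(address, details)
conn.close()
```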

