Proper enterprise build for Zabbix Database Server

Hello world,

Here's my question. Where I work, we've got a Zabbix server, a proxy, and a database server, all running on different machines. Right now our bottleneck is mainly the database server, running on 4 x 10k RPM HDDs behind a Smart Array P410i RAID controller in RAID 10 (1+0). We've got 1,150 hosts, 36,658 items, and 10,068 triggers enabled and working. We monitor Ubiquiti antennas as well as our Zabbix servers, and we're currently looking to upgrade the database server since, as I said, it is the bottleneck in our configuration. We would like around 1 TB of disk space; considering that and our RAID setup, we'd need four disk drives. As a company the budget is tolerable but not too crazy; I doubt buying a $10k server would be the best of ideas.
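To sanity-check the drive count: RAID 10 mirrors drives in pairs and stripes across the pairs, so usable space is half the raw total. A quick sketch of that arithmetic (the 480 GB drive size is a hypothetical example, not from the post):

```shell
# RAID 10 (1+0): usable capacity = (number_of_drives / 2) * drive_size
# 480 GB per drive is an illustrative placeholder value.
drives=4
size_gb=480
echo "$(( drives / 2 * size_gb )) GB usable"
```

So hitting roughly 1 TB usable with four drives means each drive needs to be around 500 GB or larger.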

A rather interesting used server with the configuration listed below, sold for $506.00 US, has come to our attention.

Chassis: HP ProLiant DL360 G7 8-Port Chassis ($150)
CPU: 2 x Intel Xeon X5670 Hex-Core (6 cores, 12 threads) with 12MB cache (2 x $60)
RAM: 64GB Memory Upgrade Kit -- 8 x 8GB 2RX4 PC3-10600R -- 1333MHz ($216)
RAID: Smart Array P410i 1GB FBWC 0-60 RAID ($20)
PSU: 2 x HP G7/G8 460 Watt PSU


And for the storage, well, this is quite the question. Since our servers have always used 10k RPM HDDs, we haven't tested anything with SSDs. What would you recommend as SSDs, and is the configuration listed above any good for our purposes? What would you change, and why? If you have any other questions, let me know and I'll edit the post. Other than that, if someone has a better build, RAID controller, or really anything that would perform better, let us know, because we want to take our time buying the right system that fits our demands.

How much do you care about uptime, and how much do you want to spend? The config above is fine for the price, but there's no support and it's old hardware.

 

For storage, I'd go SSDs. If you've got the money, get something like an S4510; if you want cheaper, get a used S3500 or a midrange consumer drive like an 860 Evo. These will smoke a 10k HDD in performance: ~300 IOPS max vs >30,000 IOPS.

31 minutes ago, Electronics Wizardy said:

How much do you care about uptime, and how much do you want to spend? The config above is fine for the price, but there's no support and it's old hardware.

 

For storage, I'd go SSDs. If you've got the money, get something like an S4510; if you want cheaper, get a used S3500 or a midrange consumer drive like an 860 Evo. These will smoke a 10k HDD in performance: ~300 IOPS max vs >30,000 IOPS.

What is the difference between the S4500 series and the S3500 series? From what I can see on Intel's website, the S4500 is actually cheaper than the S3500 series, which seems a bit weird. Can the P410i RAID controller be a bottleneck with SSDs?

3 minutes ago, Sycos said:

What is the difference between the S4500 series and the S3500 series? From what I can see on Intel's website, the S4500 is actually cheaper than the S3500 series, which seems a bit weird. Can the P410i RAID controller be a bottleneck with SSDs?

What OS are you running? I'd probably go software RAID here with an HBA, not a RAID card, but it depends on the OS.
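On an OS with ZFS, the "software RAID with an HBA" approach usually means building a striped mirror, the ZFS equivalent of RAID 10, directly on the raw disks. A minimal sketch, assuming FreeBSD device names da0 through da3 and a pool name of your choosing (both are hypothetical placeholders):

```shell
# Create a RAID 10-style pool from four disks: two mirrored pairs, striped.
# The pool name "zdata" and devices da0-da3 are placeholders for your system.
zpool create zdata mirror da0 da1 mirror da2 da3

# Verify the layout and pool health.
zpool status zdata
```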

 

The S3500 is an older model with older flash; only buy it used and cheap. The S4500 has better endurance and speed.

We are running FreeBSD. Isn't software RAID always worse than real RAID? What if the server loses power and the backup batteries don't work, is all of your data going to be corrupted? I know that this is why a RAID card with a battery included is always best to have, but what about software RAID? I've always been told it's never nearly as good as a physical RAID.

Edited by Sycos
1 minute ago, Sycos said:

We are running under a FreeBSD Linux distribution. Isn't software RAID always worse than real RAID? What if the server loses power and the backup batteries don't work, is all of your data going to be corrupted? I know that this is why a RAID card with a battery included is always best to have, but what about software RAID? I've always been told it's never nearly as good as a physical RAID.

FreeBSD or Linux? Big difference here.

 

Look at ZFS; it's normally better than hardware RAID. It can use a SLOG drive to protect against power failure, or it handles sync writes correctly without one. It's very good at not corrupting data.

Okay, thank you for the quick answers! I'll read up on ZFS and SLOG drives, as I've never heard of either of them.

If you are using all enterprise SSDs then you don't need a SLOG device; the only worthwhile one would be something like Optane. The SLOG is for securely saving sync writes to a very fast medium before they are written to a slower medium like a bunch of hard drives. Without a dedicated SLOG device, ZFS will write sync data to a small part of each disk, which means if you have an all-SSD array it's usually fast enough (either way it does a double write: once to the log area and then to the proper storage). Whether you have a SLOG device or not, the risk of corruption with ZFS is substantially lower than with hardware RAID, even with a battery backup on the controller. The only caveat is that you should only use ZFS if the OS can fully see each drive, as with an HBA. Some RAID cards pass through unconfigured disks nicely, but if you have to do something in the settings, like setting a drive to JBOD mode, then it is not safe to use with ZFS.
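If a dedicated SLOG ever does make sense (e.g. a small Optane device in front of spinning disks), attaching one is a single command. A sketch, assuming a pool named "zdata" and a log device nvd0 (both hypothetical):

```shell
# Attach a dedicated SLOG (separate intent log) device to an existing pool.
# "zdata" and nvd0 are placeholders; the SLOG only absorbs sync writes.
zpool add zdata log nvd0

# Sync writes now land on nvd0 first instead of the in-pool ZIL area.
zpool status zdata
```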


First, I'm kind of curious: why Zabbix over the alternatives (Nagios/Centreon/SolarWinds/etc.)?

 

Do the pollers, proxy, and database servers all communicate on a dedicated or shared network?

Is that 100 Mb, 1 Gb, or 10 Gbps?

 

 


On 4/13/2018 at 12:57 PM, Sycos said:

We are running FreeBSD. Isn't software RAID always worse than real RAID? What if the server loses power and the backup batteries don't work, is all of your data going to be corrupted? I know that this is why a RAID card with a battery included is always best to have, but what about software RAID? I've always been told it's never nearly as good as a physical RAID.

Your assumptions are about 10 years out of date.

 

FreeBSD's default filesystem is UFS; ZFS is a common and widely used option, however. If you're on FreeBSD there are reasons to use UFS, but ZFS is a very strong and compelling option.

 

"isn't a software raid always worst than a real RAID?" - Maybe depending. ZFS based, no.

"What if the servers runs out of electricity?" - In ZFS's case nothing as it's transactional. Most other OS's filesystems are journaled so they can also recover, a situation can arise on a cheep hardware controller where it lies to the OS and tells it that the write was complete when it was not actually flushed to disk that can affect the journaled file systems.

 

ZFS should have special options set on the database's dataset to avoid double caching and excessive writes (in other words, the database should have its own mount point in the pool with its own options set). Look up guides for whichever database you're using; there are good ones out there. The database may also have to be set to maintenance mode to take a proper snapshot. Make sure any documentation you use applies to FreeBSD and not to Oracle Solaris. This could have a huge impact on performance, and I suspect it may be your problem. ZFS applies new options to files on write, so you may need to move the database off the dataset and back onto it to change the options.

 

A general rule of thumb here is recordsize=8K and primarycache=metadata, but the right values vary.
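As a sketch of what that dataset tuning can look like (the pool name "zdata" and the mount point are hypothetical; 8K matches PostgreSQL's page size, while MySQL/InnoDB defaults to 16K pages, so check your database's documentation):

```shell
# Create a dedicated dataset for the database with its own options.
# "zdata" and the mountpoint are placeholders for your setup.
zfs create -o recordsize=8K \
           -o primarycache=metadata \
           -o mountpoint=/var/db/zabbix \
           zdata/zabbix-db

# Confirm the options took effect.
zfs get recordsize,primarycache zdata/zabbix-db
```

Since ZFS applies recordsize only to newly written files, set these options before loading the database, or copy the data off and back on afterwards.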

 

FreeBSD's manual is a very good resource; it's probably better quality than Arch Linux's oft-cited wiki.

 

@brwainer is correct that ZFS wants to see the raw drives if you use it in conjunction with a RAID controller.

 

Netdata is a pretty slick monitor, though not really industry-accepted. It works on FreeBSD, though. http://my-netdata.io/#demosites

"Only proprietary software vendors want proprietary software." - Dexter's Law

Link to comment
Share on other sites

Link to post
Share on other sites
