SSD or NVMe for MySQL Database

M Raheel Asghar
Solved by MindServ

Hi,

 

I need suggestions on whether to use a SATA SSD or an NVMe SSD for a MySQL database. My database is around 500 GB (and will grow over time) with around 700 tables in it. I run a service that sends e-bills to customers, and the email events coming back (attempted/delivered/bounced/dropped) sometimes choke the server; several SMS services run on the same machine as well. The system currently has enterprise 7200 rpm HDDs, and I'm looking to upgrade but am stuck on the decision between SSD and NVMe. I tried googling it but haven't found a clear answer on whether NVMe gives enough of a performance increase over SATA SSD in the case of MySQL.


1 minute ago, MindServ said:

Do you mean NVMe or SATA drive?

I meant SATA SSD vs NVMe SSD.


NVMe SSDs are objectively faster than SATA SSDs. Considering how lightweight MySQL is (iirc), the increased speed likely won't give you much improvement, if any.

 

I would suggest you get a 1TB (or 2TB) SATA SSD. They are much cheaper than their NVMe counterparts and should also have a very good amount of write endurance. Another option is to use something like PrimoCache with a small SSD in front of your enterprise HDD; this won't be as fast as an outright SSD, but could give you a massive improvement in capacity with a fairly reliable amount of speed.

Primary PC: - https://pcpartpicker.com/list/8G3tXv (Windows 10 Home)

HTPC: - https://pcpartpicker.com/list/KdBb4n (Windows 10 Home)
Server: Dell Precision T7500 - Dual Xeon X5660's, 44GB ECC DDR3, Dell Nvidia GTX 645 (Windows Server 2019 Standard)      

*SLI Rig* - i7-920, MSI-X58 Platinum SLI, 12GB DDR3, Dual EVGA GTX 260 Core 216 in SLI - https://pcpartpicker.com/list/GHw6vW (Windows 7 Pro)

HP DC7900 - Core 2 Duo E8400, 4GB DDR2, Nvidia GeForce 8600 GT (Windows Vista)

Compaq Presario 5000 - Pentium 4 1.7Ghz, 1.7GB SDR, PowerColor Radeon 9600 Pro (Windows XP x86 Pro)
Compaq Presario 8772 - Pentium MMX 200Mhz, 48MB PC66, 6GB Quantum HDD, "8GB" HP SATA SSD adapted to IDE (Windows 98 SE)

Asus M32AD - Intel i3-4170, 8GB DDR3, 250GB Seagate 2.5" HDD (converting to SSD soon), EVGA GeForce GTS 250, OEM 350W PSU (Windows 10 Core)

*Haswell Tower* https://pcpartpicker.com/list/3vw6vW (Windows 10 Home)

*ITX Box* - https://pcpartpicker.com/list/r36s6R (Windows 10 Education)

Dell Dimension XPS B800 - Pentium 3 800Mhz, RDRAM

In progress projects:

*Skylake Tower* - Pentium G4400, Asus H110

*Trash Can* - AMD A4-6300

*GPU Test Bench*

*Pfsense router* - Pentium G3220, Asrock H97m Pro A4, 4GB DDR3


Other people will probably come in and correct me, but from my understanding it's more about latency than raw throughput here. A SATA SSD can respond to many more "simultaneous" (or quick-succession) requests than a physical head that has to move across a platter (particularly if this is not a RAID). You should already see a great performance increase going to a SATA SSD. Unless your workload really warrants it, I can't see the only slightly lower-latency NVMe drive making a big enough difference to justify the additional cost, since it could potentially require a new motherboard (even just to use it as cache).
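To put rough numbers on that latency point, here's a back-of-the-envelope sketch; the latency figures are typical ballpark values for illustration, not measurements of any particular drive:

```python
# Rough queue-depth-1 throughput implied by per-request latency.
# Latencies are assumed ballpark figures, not measured values.
LATENCY_S = {
    "7200rpm HDD": 8e-3,    # ~8 ms seek + rotational delay
    "SATA SSD":    100e-6,  # ~100 us random read
    "NVMe SSD":    20e-6,   # ~20 us random read
}

def qd1_iops(latency_s: float) -> float:
    """At queue depth 1, one request must complete before the next starts."""
    return 1.0 / latency_s

for name, lat in LATENCY_S.items():
    print(f"{name:12s} ~{qd1_iops(lat):>9,.0f} IOPS at QD1")
```

With these figures, the HDD-to-SATA-SSD jump is roughly two orders of magnitude, while SATA-to-NVMe is only a few times, which is why the SSD upgrade dominates.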


2 minutes ago, Windows7ge said:

Other people will probably come in and correct me, but from my understanding it's more about latency than raw throughput here. A SATA SSD can respond to many more "simultaneous" (or quick-succession) requests than a physical head that has to move across a platter (particularly if this is not a RAID). You should already see a great performance increase going to a SATA SSD. Unless your workload really warrants it, I can't see the only slightly lower-latency NVMe drive making a big enough difference to justify the additional cost, since it could potentially require a new motherboard (even just to use it as cache).

The enterprise 7200 rpm HDDs are already running in RAID, and the new SSDs, whether SATA or NVMe, will also run in RAID. I am upgrading my whole server but am stuck on the storage decision, i.e. SATA SSD vs NVMe SSD for the database.


9 minutes ago, Eastman51 said:

NVMe SSDs are objectively faster than SATA SSDs. Considering how lightweight MySQL is (iirc), the increased speed likely won't give you much improvement, if any.

 

I would suggest you get a 1TB (or 2TB) SATA SSD. They are much cheaper than their NVMe counterparts and should also have a very good amount of write endurance. Another option is to use something like PrimoCache with a small SSD in front of your enterprise HDD; this won't be as fast as an outright SSD, but could give you a massive improvement in capacity with a fairly reliable amount of speed.

I have read the same while googling; that's what got me confused: whether going for an NVMe SSD gives a real-world performance gain over a SATA SSD for a database, enough to justify the price jump.


3 minutes ago, raheel syntecx said:

The enterprise 7200 rpm HDDs are already running in RAID, and the new SSDs, whether SATA or NVMe, will also run in RAID. I am upgrading my whole server but am stuck on the storage decision, i.e. SATA SSD vs NVMe SSD for the database.

As a cache, or if you're going with a full-blown solid-state storage pool: with only 700 tables, I think SATA is your better option. It's more cost-effective, still has significantly better latency than mechanical storage, and is easier to expand.


Just now, raheel syntecx said:

I have read the same while googling; that's what got me confused: whether going for an NVMe SSD gives a real-world performance gain over a SATA SSD for a database, enough to justify the price jump.

NVMe probably won't give you much improvement.

NVMe and SATA SSD boot drives boot in near identical times, both being mainly limited by the OS, the RAM, and the CPU. 

 

If you had a gigantic database with hundreds of thousands of users nearly 24/7 (kind of like a big web/streaming server), then NVMe would have a better chance of giving you more; but for a small database without many users, a SATA SSD (or a SATA SSD caching an HDD) would be perfectly fine.



2 minutes ago, Windows7ge said:

As a cache, or if you're going with a full-blown solid-state storage pool: with only 700 tables, I think SATA is your better option. It's more cost-effective, still has significantly better latency than mechanical storage, and is easier to expand.

I shall shift my whole database to new SSDs running in RAID. I won't use mechanical drives in the new system.


2 minutes ago, raheel syntecx said:

I shall shift my whole database to new SSDs running in RAID. I won't use mechanical drives in the new system.

Yeah, if you need/want easy future expansion, SATA SSDs are your better bet. You can only fit so many M.2 or PCIe SSDs in a system before you're out of physical slots, while a RAID of SATA SSDs can achieve close-to-similar latency if it's big enough, and still have ports to spare for more.


One more thing to consider: you probably should use enterprise-grade SSDs for databases.

I've had a few Intel consumer SSDs die after less than a year when used as database storage, though that was on servers with quite a lot going on.

(I used cheap consumer SSDs since it was only for testing purposes.)


8 minutes ago, MindServ said:

One more thing to consider: you probably should use enterprise-grade SSDs for databases.

I've had a few Intel consumer SSDs die after less than a year when used as database storage, though that was on servers with quite a lot going on.

(I used cheap consumer SSDs since it was only for testing purposes.)

I think consumer SSDs should be fine; just get ones with high write endurance, and make sure to run a redundant RAID as a fallback if one of them does die.



1 hour ago, Eastman51 said:

I think consumer SSDs should be fine; just get ones with high write endurance, and make sure to run a redundant RAID as a fallback if one of them does die.

Yeah, it's probably fine if you use drives with high write endurance.

Though RAID is tricky with SSDs, since the writes are usually more or less the same on each drive, which can lead to drives failing at almost the same time, or while the RAID tries to rebuild after a failed drive (depending on the RAID type used).
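To make the endurance point concrete, here's a rough lifetime estimate from a drive's rated TBW (total bytes written). The TBW figures, daily write volume, and write-amplification factor below are made-up examples, not specs of any real drive:

```python
def years_to_tbw(tbw_tb: float, writes_gb_per_day: float,
                 write_amplification: float = 2.0) -> float:
    """Estimated years until the rated TBW is reached.

    write_amplification accounts for the SSD internally writing more
    than the host asks for; the default of 2.0 is an assumed example.
    """
    host_tb_per_year = writes_gb_per_day * 365 / 1000
    return tbw_tb / (host_tb_per_year * write_amplification)

# Hypothetical example: a 600 TBW consumer drive vs a 3500 TBW
# enterprise drive, with 200 GB/day of database writes.
print(f"consumer:   {years_to_tbw(600, 200):.1f} years")
print(f"enterprise: {years_to_tbw(3500, 200):.1f} years")
```

Note that in a mirror both drives see the same host writes, so both hit their TBW budget at about the same time; that's the simultaneous-failure risk described above.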


9 hours ago, Acedia said:

500GB? Dude. Compress the database.

I am running a major service for a huge telecom operator with partial BI data in it, resulting in a 500 GB+ database.


13 hours ago, MindServ said:

Yeah, it's probably fine if you use drives with high write endurance.

Though RAID is tricky with SSDs, since the writes are usually more or less the same on each drive, which can lead to drives failing at almost the same time, or while the RAID tries to rebuild after a failed drive (depending on the RAID type used).

Are you suggesting not to go with SSDs for the database?


2 hours ago, raheel syntecx said:

Are you suggesting not to go with SSDs for the database?

Well, I think you should use RAID, preferably RAID 1 or 10. But to make it easier and less expensive over time, a great way to implement it is to use one set of enterprise SLC SSDs for the log and tempdb stuff, and another set of perhaps cheaper, consumer-grade MLC SSDs for data and indexes (since data and indexes aren't that write-intensive).

 

You can also use RAID 5 or 6 to spread the writes over several SSDs, though you lose some of the speed of RAID 1 or 10.

 

Whichever solution you choose, make sure you replace the SSDs (at least some of the drives) well ahead of their expected lifetime, and of course have a well-configured backup! Though I'm sure you are way ahead of me on that one! :)

 

Edit: I was thinking MS SQL, but it's pretty much the same with MySQL.
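On MySQL specifically, that log/data split maps onto a few InnoDB settings: the redo log and temporary files can sit on one array while the data files sit on another. A minimal my.cnf sketch, assuming hypothetical mount points /mnt/log-array and /mnt/data-array (adjust to your own layout):

```ini
[mysqld]
# Data files and indexes on the larger (cheaper) array
datadir = /mnt/data-array/mysql

# Redo log on the write-oriented array
innodb_log_group_home_dir = /mnt/log-array/mysql-redo

# Temporary tables on the write-oriented array as well
tmpdir = /mnt/log-array/mysql-tmp

# Worth setting on SSDs: flushing neighboring pages together
# only helps spinning disks
innodb_flush_neighbors = 0
```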


7 hours ago, MindServ said:

Well, I think you should use RAID, preferably RAID 1 or 10. But to make it easier and less expensive over time, a great way to implement it is to use one set of enterprise SLC SSDs for the log and tempdb stuff, and another set of perhaps cheaper, consumer-grade MLC SSDs for data and indexes (since data and indexes aren't that write-intensive).

You can also use RAID 5 or 6 to spread the writes over several SSDs, though you lose some of the speed of RAID 1 or 10.

Whichever solution you choose, make sure you replace the SSDs (at least some of the drives) well ahead of their expected lifetime, and of course have a well-configured backup! Though I'm sure you are way ahead of me on that one! :)

 

Edit: I was thinking MS SQL, but it's pretty much the same with MySQL.

Thanks for helping me out.


Depends on the load. NVMe drives have nearly the same QD1 4K performance as SATA. I would just mirror (or RAID 10) two Pro-level SATA drives if it is critical; you need a massive load to benefit from much more than that. I wouldn't mess with RAID 5/6 on SSDs, it adds complications. Generally, when we recommend servers with ("enterprise MLC") SSD drives, it's a mirror. You can get a 1.92TB enterprise Samsung drive for less than $500! Even Lenovo/IBM's outrageous prices for drives are comparable to their 10k/15k drives now.

 

BTW, *if* you needed much better than SATA QD1-4 performance, I'd suggest Optane.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


  • 4 years later...

For what it's worth: I've set up two identical Dell R650 servers, all OEM drives, both running ESXi 8.0 with database VMs of the same specs. The only difference is that Server A uses a single NVMe drive for storage, while Server B uses a PERC 755P card with four SAS SSDs. Server A is constantly experiencing app delays due to database lag, while Server B has had zero complaints.

