
SSD in a NAS ("Special Case") Questions

Takuan

Hi guys.

 

I need your thoughts on a matter that I cannot find anywhere else on the internet.

 

I have researched a lot about SSDs in a NAS, but I cannot really seem to find the answers I need. When having an SSD in a NAS, the consensus all over seems to be that this is a bad idea unless you actually put in data center drives specifically made for this use. What is "never" explained, or is simply assumed but never mentioned, is that this advice only really applies when the use scenario is some kind of RAID. In most cases, a NAS is populated with HDDs because of the reliability, lower cost, and higher capacity (bang for buck). I get all that, and I am actually following that route myself; I have only HDDs in my NASs at the moment. To get some speed and performance, the general idea is to put an SSD cache (read or read/write) into the NAS in order to speed up especially random r/w of often-used files. All this I am fully aware of, and I think that if I research these particular things any further, my eyes will turn square without getting any closer to the answers I need. Thus I hope you may be able to help.

 

What is never explained, and never discussed in any information or white papers I have been able to find over the past week, are the scenarios of having SSDs inside a NAS as basic disks (no RAID of any kind). In terms of performance, I cannot see the difference between an SSD cache and just a plain SSD, as both setups would perform exactly the same (as long as the file in question is already on the SSD cache). In my particular "special" scenario, I have been thinking about setting up an SSD as a basic disk inside the NAS and installing all apps/VMs and other often-used applications/files on it. Reading would be the main workload; writing would only happen when files change, new apps are installed, or already installed apps are updated. This use scenario would perhaps not be much, or any, different from having the same SSD installed in any "normal" non-NAS computer. Except one is called a NAS/server, and the other is called a PC.

 

The reason for having an SSD rated for NAS and data center use is Power Loss Protection and other technical features, and I get that, compared to a "normal" consumer SSD that, generally speaking, has none of these features.

 

All warnings against having a consumer SSD (even the best-quality one) in a NAS seem to share the consensus that it is just a bad idea in any and all scenarios. But is it really? I am of course thinking about the scenario mentioned above: having an SSD installed as a basic disk in the NAS, without any RAID whatsoever, simply using it to hold all apps/VMs, which would be installed on it and run from there. No SSD cache is needed, as everything on the SSD is already "cached" in terms of performance. Having an SSD cache installed in this scenario would just be a waste of SSDs, as I understand it.

 

What are your thoughts on, and experience with, such a setup? The motivation for this post is the fact that enterprise SSDs are very expensive, in many cases 50-80% more expensive than the consumer "equivalent". If the technical specs that make a drive a DC SSD or a NAS SSD are unnecessary in the above scenario and make no difference when the SSD is used as a basic disk only, then I would assume that the cost saving of a consumer SSD vs. a DC/NAS SSD would be an extremely good argument. In the case of the consumer Samsung SSD 860 and the Samsung 860 DCT models (the brand is just an example), I found that the only difference between the drives is the firmware. In that case, I cannot see why I should pay the higher price for the DCT model instead of saving some money with the consumer model. Given, of course, that the use is as a basic disk without any RAID.

 

I am not concerned with redundancy for the SSD in the NAS, as I am already planning to back up to an external SSD used solely for backing up the data from the SSD in the NAS. So SSD redundancy is of no concern in this scenario.

 

Any and all thoughts, ideas and warnings you may have about this specific scenario are most welcome.

Thank you very much.


I don't know much about it, but can't any power-loss issues be worked around by having a UPS that allows the NAS to shut down normally?

“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking


2 hours ago, Takuan said:

When having an SSD in a NAS, the consensus all over seems to be that this is a bad idea unless you actually put in data center drives specifically made for this use.

That depends on the scenario, which SSDs and which RAID configuration we're talking about.

As has been discussed here before, you should avoid using multiple SSDs on older hardware RAID due to the lack of TRIM. It also depends on your use case whether adding SSDs will make your transfers suffer. If the files you're transferring are larger than the SLC cache on the SSD, then performance can really suffer. Once that buffer has been exhausted, TLC/QLC drives (many of the cheaper consumer SSDs) are quite often slower than spinning hard drives.

 

2 hours ago, Takuan said:

What is "never" explained, or is simply assumed but never mentioned, is that this advice only really applies when the use scenario is some kind of RAID. In most cases, a NAS is populated with HDDs because of the reliability, lower cost, and higher capacity (bang for buck). I get all that, and I am actually following that route myself; I have only HDDs in my NASs at the moment. To get some speed and performance, the general idea is to put an SSD cache (read or read/write) into the NAS in order to speed up especially random r/w of often-used files. All this I am fully aware of, and I think that if I research these particular things any further, my eyes will turn square without getting any closer to the answers I need. Thus I hope you may be able to help.

Yup, you've got the right idea there: small random R/W will be much improved with an SSD cache.

 

2 hours ago, Takuan said:

What is never explained, and never discussed in any information or white papers I have been able to find over the past week, are the scenarios of having SSDs inside a NAS as basic disks (no RAID of any kind). In terms of performance, I cannot see the difference between an SSD cache and just a plain SSD, as both setups would perform exactly the same (as long as the file in question is already on the SSD cache).

This is for the most part true. As above though, have you tried transferring very large files (e.g. 20GB+) on SSDs? On many SSDs you will see a massive slowdown once you get past the 8-12GB file size, as the buffer is exhausted. Also, if we're talking about a hardware RAID configuration with SSDs on a controller that doesn't support TRIM, that buffer often may not clear properly, giving you continuously slow performance until the SSD is erased.

 

2 hours ago, Takuan said:

In my particular "special" scenario, I have been thinking about setting up an SSD as a basic disk inside the NAS and installing all apps/VMs and other often-used applications/files on it. Reading would be the main workload; writing would only happen when files change, new apps are installed, or already installed apps are updated. This use scenario would perhaps not be much, or any, different from having the same SSD installed in any "normal" non-NAS computer. Except one is called a NAS/server, and the other is called a PC.

That's a perfect use case for an SSD. SSDs really shine for applications, operating systems, and anything with lots of small random I/O.

 

2 hours ago, Takuan said:

The reason for having an SSD rated for NAS and data center use is Power Loss Protection and other technical features, and I get that, compared to a "normal" consumer SSD that, generally speaking, has none of these features.

It depends on the SSD. Yes, some have power loss protection; also, many are MLC or, at the very expensive end, SLC, which is considerably faster NAND. SLC is also the NAND used to create the small write buffer in most consumer SSDs. Ever wonder why the Samsung 970 Pro is so much more expensive than the 970 EVO? Because the Pro is an MLC NAND SSD, while the EVO is TLC NAND.

 

2 hours ago, Takuan said:

All warnings against having a consumer SSD (even the best-quality one) in a NAS seem to share the consensus that it is just a bad idea in any and all scenarios. But is it really? I am of course thinking about the scenario mentioned above: having an SSD installed as a basic disk in the NAS, without any RAID whatsoever, simply using it to hold all apps/VMs, which would be installed on it and run from there. No SSD cache is needed, as everything on the SSD is already "cached" in terms of performance. Having an SSD cache installed in this scenario would just be a waste of SSDs, as I understand it.

As I said above, SSDs for VMs/apps are a great idea. For many of us at home, SSDs can be a great idea because we often have small mixed data like TV shows, photos, video projects, music, etc. But when you start looking at businesses that deal with very large files, like science datasets that can be hundreds of GBs, large raw video files that can be 50GB+, database files that can easily be hundreds of GBs, etc., that's when you need to consider the type of SSDs you use. Personally, for home use, I don't see a problem using a consumer SSD for cache (though if we're talking ZFS, then SLOG/ZIL is a totally different story), but at the same time it's a waste of money for the 99% of people who don't have a 2Gbit+ network.

 

2 hours ago, Takuan said:

What are your thoughts on, and experience with, such a setup? The motivation for this post is the fact that enterprise SSDs are very expensive, in many cases 50-80% more expensive than the consumer "equivalent". If the technical specs that make a drive a DC SSD or a NAS SSD are unnecessary in the above scenario and make no difference when the SSD is used as a basic disk only, then I would assume that the cost saving of a consumer SSD vs. a DC/NAS SSD would be an extremely good argument. In the case of the consumer Samsung SSD 860 and the Samsung 860 DCT models (the brand is just an example), I found that the only difference between the drives is the firmware. In that case, I cannot see why I should pay the higher price for the DCT model instead of saving some money with the consumer model. Given, of course, that the use is as a basic disk without any RAID.

I think your setup idea sounds just fine. I use SSDs in a ZFS mirror for my VMs and it works fantastically, far better than spinning hard drives.

But if you were to set up an SSD cache for an actual array, then honestly, disregarding the interface (SATA or NVMe), I'd always recommend going with an MLC-based SSD. Otherwise, whenever you do those large transfers, you're quickly going to see drops to well below what your hard drives could do without a cache.

 

2 hours ago, Takuan said:

I am not concerned with redundancy of the SSD in the NAS, as I am already planning on doing backup to an external SSD used solely for backing up the data from the SSD in the NAS. So SSD redundancy is of no concern in this scenario.

Depending on your setup, you could possibly just automate a backup of your SSD to your array using something like rsync, which can run in archive mode to do incremental copies. It depends on whether it can access the files OK, though.

Spoiler

Desktop: Ryzen9 5950X | ASUS ROG Crosshair VIII Hero (Wifi) | EVGA RTX 3080Ti FTW3 | 32GB (2x16GB) Corsair Dominator Platinum RGB Pro 3600Mhz | EKWB EK-AIO 360D-RGB | EKWB EK-Vardar RGB Fans | 1TB Samsung 980 Pro, 4TB Samsung 980 Pro | Corsair 5000D Airflow | Corsair HX850 Platinum PSU | Asus ROG 42" OLED PG42UQ + LG 32" 32GK850G Monitor | Roccat Vulcan TKL Pro Keyboard | Logitech G Pro X Superlight  | MicroLab Solo 7C Speakers | Audio-Technica ATH-M50xBT2 LE Headphones | TC-Helicon GoXLR | Audio-Technica AT2035 | LTT Desk Mat | XBOX-X Controller | Windows 11 Pro

 

Spoiler

Server: Fractal Design Define R6 | Ryzen 3950x | ASRock X570 Taichi | EVGA GTX1070 FTW | 64GB (4x16GB) Corsair Vengeance LPX 3000Mhz | Corsair RM850v2 PSU | Fractal S36 Triple AIO | 12 x 8TB HGST Ultrastar He10 (WD Whitelabel) | 500GB Aorus Gen4 NVMe | 2 x 2TB Samsung 970 Evo Plus NVMe | LSI 9211-8i HBA

 


Taking into account current consumer-grade SSD prices, I would go with SSDs in RAID5, just because you get almost the same $/GB ratio compared to RAID10 of HDDs. Since SSDs do not tend to die immediately (unlike HDDs, which have failing mechanical parts) and their S.M.A.R.T. data is pretty predictive, with RAID5 you will always have enough time to get a spare drive and rebuild the array if needed. By RAID5 I mean either hardware RAID or software analogs like ZFS or mdadm.

 

I am comparing this setup with RAID10 of HDDs for one simple reason: with modern high-capacity HDDs, RAID5 becomes catastrophically unreliable, because the chance of a double failure becomes very real during the rebuild window: https://www.starwindsoftware.com/blog/raid-5-was-great-until-high-capacity-hdds-came-into-play-but-ssds-restored-its-former-glory-2.

 

The only issue you might have with an SSD-packed NAS is the network connectivity, which might bottleneck your storage performance, since even the slowest SSDs in a parity array will easily give you around 20k random write IOPS.
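As a rough sanity check of that claim (the link speed and per-drive numbers below are typical figures, not measurements from this thread):

```shell
# 1 Gbit/s Ethernet tops out around 125 MB/s of payload, while a single
# SATA SSD already reads sequentially at roughly 550 MB/s, so the link
# saturates long before the drives do.
gbe_mbs=$(( 1000 / 8 ))        # 1 GbE in MB/s -> 125
ssd_seq_mbs=550                # typical SATA SSD sequential read, MB/s

# 20k random writes at an assumed 4 KiB block size is about 81 MB/s,
# so even the worst-case small-block workload nearly fills the link.
rand_mbs=$(( 20000 * 4096 / 1000000 ))

echo "1GbE: ${gbe_mbs} MB/s, SSD seq: ${ssd_seq_mbs} MB/s, 20k IOPS @ 4KiB: ${rand_mbs} MB/s"
```

The exact numbers depend on block size and drive model, but the conclusion holds: on a 1 Gbit network the wire, not the SSDs, is the ceiling.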


Thank you all for your input.

 

I am particularly looking for input in regard to having an SSD installed in a NAS and set up as a basic disk where apps/VMs etc. can be installed, without any RAID on the SSD. The HDDs in the NAS will be set up in a RAID6 used for storage/data only (no apps or VMs running off the HDD RAID6 volume). With this setup I aim to make an SSD cache obsolete, as the SSD cache would not be able to perform better than the basic-disk SSD itself anyway. In this scenario, with the SSD installed as a basic disk with no RAID whatsoever, would a consumer-grade SSD be fine and perhaps even "preferred" (cost/performance)? Or is an enterprise/NAS or data center SSD always preferred/recommended in a NAS, no matter how the SSD is set up (as a basic disk or as part of a RAID), due to the extra features such as Power Loss Protection, or are those features irrelevant when the SSD is set up as a basic disk? I would assume that an SSD in a NAS set up as a basic disk would behave exactly the same as an SSD installed in any personal computer, making it no more and no less important which model/features the SSD has when installed in a NAS.


8 hours ago, Takuan said:

Thank you all for your input.

 

I am particularly looking for input in regard to having an SSD installed in a NAS and set up as a basic disk where apps/VMs etc. can be installed, without any RAID on the SSD. The HDDs in the NAS will be set up in a RAID6 used for storage/data only (no apps or VMs running off the HDD RAID6 volume). With this setup I aim to make an SSD cache obsolete, as the SSD cache would not be able to perform better than the basic-disk SSD itself anyway. In this scenario, with the SSD installed as a basic disk with no RAID whatsoever, would a consumer-grade SSD be fine and perhaps even "preferred" (cost/performance)? Or is an enterprise/NAS or data center SSD always preferred/recommended in a NAS, no matter how the SSD is set up (as a basic disk or as part of a RAID), due to the extra features such as Power Loss Protection, or are those features irrelevant when the SSD is set up as a basic disk? I would assume that an SSD in a NAS set up as a basic disk would behave exactly the same as an SSD installed in any personal computer, making it no more and no less important which model/features the SSD has when installed in a NAS.

I'll repeat what I said above but summarize it: yes, a standard consumer SSD is just fine for your case. Regardless of whether they're MLC, TLC or QLC, and whether the interface is SATA or NVMe, consumer SSDs still deliver high IO and low latency for random R/W, and they far outperform mechanical hard drives for application and operating system use.

 

As I already mentioned, enterprise SSDs really matter more when it comes to very large files (datasets), or for their power protection features when it comes to something like ZFS's ZIL. They also come in specialized variants like write-intensive and read-intensive models. But none of this matters for your application.

 

I just use cheap Crucial P1 1TB NVMe drives (QLC) for my VMs and they run quite happily. They're just general-purpose VMs, and all my VM disks live on them.

 

For my storage array I use an Aorus Gen4 NVMe, which is TLC; however, it has a DRAM buffer, and my largest files are about 28GB and rare, so for home use it works perfectly fine as a 500GB cache. In my case I like to have the cache because all my new incoming data goes there, so my hard drives stay spun down for most of the day unless I'm reading from them. That saves power, reduces heat, and avoids unnecessary spinning of the hard disks.


Thank you, Jarsky. That was exactly the information I was hoping for, and thank you very much for taking the time to elaborate on it. I very much appreciate it.

