I have used LVM successfully on the high-end Linux systems I've built from scratch, using SSDs (boot, OS, specific databases) and secondary-storage HDDs (large-capacity data storage, running programs). I've been studying SSD caching for my secondary HDDs, which are configured as RAID 10 (I always use RAID 10 or RAID 1 at the hardware level, not at the OS level).
I'm surprised how little up-to-date information (within the last 12 months) is available on lvmcache from people who have implemented SSD caching successfully, or on the read and write performance benefits they saw once their SSD cache was working properly (optimally).
Most of the setup/configuration information that people have shared publicly about SSD caching with lvmcache is old, and I want to do more research based on current information before setting it up in the lab.
The lack of information is also a bit worrying; it makes me wonder if I'm missing something (does the caching algorithm work well, etc.?). You'd think that with high-end 64GB to 250GB SSDs now so affordable (ideal price- and size-wise for SSD caching), more people with very large storage requirements for their Linux servers and/or workstations would be using SSD caching to speed up read and write access to their HDDs. lvmcache has been upstream in the Linux kernel for a while now, yet judging by the scarcity of recent public information, few seem interested in it.
I use M.2 NVMe SSDs alongside my SATA or SAS HDDs.
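For context, the sequence I was planning to test is the basic flow described in the lvmcache(7) man page; the VG name (vg0), LV names, cache size, and device paths below are placeholders for my own layout, not a tested recipe:

```shell
# Add the NVMe SSD as a physical volume and extend the existing VG
# (vg0 here is assumed to hold the HDD RAID 10 array; device names are examples)
pvcreate /dev/nvme0n1
vgextend vg0 /dev/nvme0n1

# Create a cache LV on the SSD, then attach it to the slow LV.
# --cachevol uses a single LV for both cache data and metadata (per lvmcache(7));
# writeback also caches writes, at the cost of data-loss risk if the SSD fails.
lvcreate -n fastcache -L 200G vg0 /dev/nvme0n1
lvconvert --type cache --cachevol fastcache --cachemode writeback vg0/data

# To detach the cache later (flushes dirty blocks back to the HDDs first):
#   lvconvert --splitcache vg0/data
```

If anyone's working configuration differs from this (e.g. the older --cachepool approach, or writethrough mode), I'd be interested in hearing why.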
If anyone has a recent set of instructions/configuration steps and performance data from implementing SSD caching with lvmcache for HDDs on a Linux system, I would greatly appreciate it if you could share the details.
Thanks for your help.