
og1

Member
  • Posts: 4
  • Joined
  • Last visited


  1. I have successfully used LVM on the high-end Linux systems I build from scratch, with SSDs for the boot drive, OS, and specific databases, and secondary HDDs for large-capacity data storage and running programs. I've been studying SSD caching for my secondary HDDs, which are configured in RAID 10 (I always use RAID 10 or RAID 1 at the hardware level, never at the OS level).

     I'm surprised how little up-to-date information (from within the last 12 months) is available on lvmcache from people who have implemented SSD caching successfully, or on the read and write performance gains they measured once the cache was working optimally. Most of the publicly shared lvmcache setup/configuration information is old, and I want to do more research based on current information before setting this up in the lab. The lack of information is also a bit worrying; am I missing something (does the caching algorithm work well, etc.)?

     You'd think that with today's prices for high-end 64 GB to 250 GB SSDs (ideal in both price and size for SSD caching), more people with very large storage requirements on their Linux servers and/or workstations would be using SSD caching to speed up read and write access to their HDDs. lvmcache has been upstream in the Linux kernel for a while now, yet the scarcity of recent public information suggests few people are interested in it. For reference, I use M.2 NVMe SSDs alongside my SATA or SAS HDDs.

     If anyone has a recent set of instructions/configuration steps and performance data for implementing SSD caching with lvmcache on a Linux system, I would greatly appreciate it if you could share the details. Thanks for your help.
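     For reference, here is a minimal lvmcache sketch using the cache-pool approach. It is not a tested recipe; the device, VG, and LV names (/dev/nvme0n1p1, vg_data, lv_data, lv_cache) and the 100G cache size are placeholder assumptions, not details from the post above:

        # Add the NVMe SSD (or a partition of it) to the volume group that
        # already holds the HDD-backed data LV.
        pvcreate /dev/nvme0n1p1
        vgextend vg_data /dev/nvme0n1p1

        # Create a cache pool on the SSD; LVM allocates the cache metadata
        # automatically in this single-step form.
        lvcreate --type cache-pool -L 100G -n lv_cache vg_data /dev/nvme0n1p1

        # Attach the cache pool to the slow HDD-backed LV. writethrough is the
        # safer mode; writeback also caches writes but risks data loss if the
        # single cache device fails.
        lvconvert --type cache --cachepool vg_data/lv_cache \
                  --cachemode writethrough vg_data/lv_data

        # Inspect hit/miss counters once the cache has warmed up.
        lvs -a -o name,size,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses vg_data

        # Detach later if needed (flushes dirty blocks, then removes the pool).
        lvconvert --uncache vg_data/lv_data

     Newer LVM releases also support attaching a plain cache volume with lvconvert --type cache --cachevol instead of a cache pool; the lvmcache(7) man page covers both forms.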
  2. Hi. I've gone through the forums in detail regarding the Intel Optane Memory (3D XPoint) modules (either the first-generation PCIe NVMe 3.0 x2 module or the newer second-generation M10) used as an accelerator (cache) for a standard hard disk drive. I already have the required hardware knowledge and the install/configuration procedure down for using Intel Optane Memory modules as a HDD accelerator on Windows 10; no problems there. That is what Intel designed the first-generation and M10 cache devices to do, so I don't need any more information about Optane on Windows 10 or any other Windows OS.

     My only interest in the Optane memory technology is as an accelerator (cache) for my secondary hard disk drive on Linux, to improve that drive's read and write performance. I know the Optane 3D XPoint accelerators (both first-generation and second-generation M10) can be used in other configurations on Linux, such as an alternative to an M.2 NVMe SSD for a boot drive, but I have no interest in those applications at this time.

     My question is straightforward: has anyone successfully installed and stress-tested either the first-generation 32 GB Optane accelerator and/or the second-generation Optane M10 32 GB or 64 GB accelerator in a host Linux system? (Note: the specs for the second-generation M10 32 GB and 64 GB 3D XPoint accelerators are not posted on the Intel site yet, but they are available for pre-order from numerous online retailers.) If so, do you have detailed instructions for getting the Optane memory working as a cache for a secondary HDD on Linux? That is all I'm interested in with regard to the Intel Optane memory technology (first-generation and/or second-generation M10). Thanks for your help.
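     For what it's worth, a short sketch of how this is usually approached: Intel's accelerator mode for the consumer Optane Memory modules is implemented by the RST driver on Windows, while under Linux the module simply enumerates as a small NVMe SSD, so the generic caching layers (lvmcache, bcache, dm-writecache) are the normal route. The device and VG/LV names below (/dev/nvme0n1, vg_data, lv_data) are placeholders, and the 27G figure assumes a 32 GB module:

        # Confirm the Optane module is visible as an ordinary NVMe device.
        lsblk -d -o NAME,MODEL,SIZE,TRAN
        nvme list                      # from the nvme-cli package

        # Use it as the fast device for lvmcache, exactly as in the sketch
        # under the previous post, just with the Optane namespace as the PV.
        pvcreate /dev/nvme0n1
        vgextend vg_data /dev/nvme0n1
        lvcreate --type cache-pool -L 27G -n lv_optane_cache vg_data /dev/nvme0n1
        lvconvert --type cache --cachepool vg_data/lv_optane_cache vg_data/lv_data

     bcache or dm-writecache would be the alternatives to evaluate if write latency on the HDD is the main concern.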