AcquaCow

Member
  • Posts: 3
  • Joined
  • Last visited

AcquaCow's Achievements

  1. This is the last thing I have in my notes from 2017, when I last dug into this: there are a few differences, but back then the tiering really didn't exist in the true sense of the term.
  2. The main issue with using Storage Spaces to "tier" your data and have it automatically do anything with it is that Storage Spaces does not actually do that. Yes, you can put SSDs and HDDs together in Storage Spaces, but in order to "tier" you essentially have to run a scheduled script that moves specific files up into the SSD "tier." That is fine for write-once, read-many workloads, assuming you can wait until the script runs to promote the files, but if you are doing constant write/read operations you have zero control over where those new writes land. There is no write cache that reads are then served from (currently, that I'm aware of) in Storage Spaces Direct. Example: https://richardjgreen.net/pin-a-file-or-vhd-to-a-storage-space-tier/ (a minimal pinning sketch follows after this list). I fought with this a lot, hoping to use all my leftover Fusion-io hardware with Storage Spaces to make a killer home NAS, but it performed worse than my FreeNAS VM, so I finally just settled on a large FreeNAS build instead and re-purposed the Fusion-io gear for hypervisor datastores. -- Dave
  3. I've done some PCI-e hot swapping when I worked at Fusion-io, but there are caveats...
     1) You have to unload the driver/kernel module before you pull the device.
     2) You have to have memory pre-allocated to be able to address any newly added cards.
     3) The driver has to support unload/reload while the OS is running, not just at boot time.
     4) BIOS support is also usually needed.
     The biggest issue is really just having support in the device drivers and having the driver notified of hot-plug events. Newer stuff does this automatically, which is nice. 6-7 years ago it was not as nice and required a lot of scripting in Linux to tear down mdadm RAID arrays spanned across 20-40TB of PCI-e flash (a teardown sketch follows after this list).
  4. You totally could have downloaded a reference schematic for the board, figured out which traces were next to that hole, figured out which components they connected to, and then soldered in some jumper wires to re-connect things instead of snapping the card in half =P

    1. thebinderclip_

       Words cannot describe how terribly addled this guy is. This is definitely the most cringy thing I have seen here.
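
A minimal sketch of the kind of scheduled "promotion" script post #2 is talking about, written here as Python driving the Set-FileStorageTier and Optimize-Volume cmdlets the linked article describes. The file paths, the "SSD_Tier" friendly name, and the D: drive letter are placeholders/assumptions, not details from the original post.

import subprocess

# Files we want pinned to the SSD tier (hypothetical paths).
HOT_FILES = [r"D:\VMs\fileserver.vhdx", r"D:\VMs\sql.vhdx"]

SSD_TIER = "SSD_Tier"   # assumption: friendly name given to the SSD tier at creation
DRIVE = "D"             # assumption: drive letter of the tiered volume


def powershell(command: str) -> None:
    """Run a single PowerShell command and fail loudly if it errors."""
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        check=True,
    )


def promote_hot_files() -> None:
    # Mark each file as desired on the SSD tier...
    for path in HOT_FILES:
        powershell(
            f'Set-FileStorageTier -FilePath "{path}" '
            f'-DesiredStorageTierFriendlyName "{SSD_TIER}"'
        )
    # ...then ask the tiering engine to move the data now,
    # instead of waiting for the nightly optimization task.
    powershell(f"Optimize-Volume -DriveLetter {DRIVE} -TierOptimize")


if __name__ == "__main__":
    promote_hot_files()

Run that from Task Scheduler and you get "tiering", but only for files you name ahead of time; new writes still land wherever the pool decides, which is the whole complaint above.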
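
And a rough sketch of the Linux teardown scripting post #3 mentions having to do before physically pulling a card, again as Python shelling out to umount, mdadm, and modprobe. The /mnt/flash mount point, /dev/md0 array, iomemory_vsl module name, and 0000:41:00.0 PCI address are all placeholders/assumptions for illustration.

import subprocess

MOUNTPOINT = "/mnt/flash"       # hypothetical mount point of the array
MD_DEVICE = "/dev/md0"          # hypothetical mdadm array spanning the PCI-e cards
DRIVER_MODULE = "iomemory_vsl"  # assumption: Fusion-io VSL kernel module name
PCI_ADDRESS = "0000:41:00.0"    # hypothetical PCI address of the card being pulled


def run(*cmd: str) -> None:
    """Run a command, echoing it first, and stop the script if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def tear_down_before_pull() -> None:
    # 1) Stop anything using the block device, then stop the array itself.
    run("umount", MOUNTPOINT)
    run("mdadm", "--stop", MD_DEVICE)

    # 2) Unload the driver so nothing holds a reference to the card.
    run("modprobe", "-r", DRIVER_MODULE)

    # 3) Tell the kernel to remove the device from the PCI bus; after this
    #    it is (software-wise) safe to pull the card.
    with open(f"/sys/bus/pci/devices/{PCI_ADDRESS}/remove", "w") as f:
        f.write("1")


def rescan_after_insert() -> None:
    # After seating the replacement card: rescan the bus, reload the driver,
    # then re-assemble the array.
    with open("/sys/bus/pci/rescan", "w") as f:
        f.write("1")
    run("modprobe", DRIVER_MODULE)
    run("mdadm", "--assemble", "--scan")


if __name__ == "__main__":
    tear_down_before_pull()

None of that is atomic or automatic, which is exactly why driver-level hot-plug support is so much nicer than scripting around it.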
