
Snoz

Member
  • Posts: 13
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    Snoz got a reaction from Issac Zachary in Thread for Linus Tech Tips Video Suggestions   
    Maybe do a video/refresher on what to look for in SSD storage. I was recently looking into purchasing M.2 storage for a laptop. I found a 4TB drive that seemed like a great deal, only to see reviews saying it was slow compared to others. It turned out that particular brand used QLC memory and didn't have a DRAM cache; it used the system's RAM for caching instead. I then looked into other M.2 drives and found that one should also check the specs for the total data written over the life of the drive and the mean time between failures.
    The QLC M.2 drive had a low rating for total data written over its life, whereas MLC and TLC drives had higher values. For instance, the 4TB drive I originally looked at was rated for 800 TBW, whereas a drive like the Kingston KC3000, which uses TLC, was not only faster but rated for 3.2 PBW, so it can effectively be overwritten with a lot more data before failing. I guess this is the difference between a cheaper and a more expensive drive. Given the option I would choose the drive with the longer life, such as the Kingston KC3000, but given the cost I would probably choose a lower-density 2TB drive, which at this point in time is a more affordable option.

    Anyway, I thought this might be a good topic for people wanting to upgrade their storage. If I hadn't seen the bad reviews on the 4TB M.2 drive I originally looked at and dug into why, I wouldn't really have thought much about it and would probably have gone with the cheaper one.
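    To put those endurance ratings in perspective, here's a minimal sketch in Python (the 80 GB/day write volume is an assumed workload for illustration, not a figure from any spec sheet) of how long 800 TBW versus 3.2 PBW would last:

    # Rough endurance comparison for the two 4TB drives discussed above.
    # The daily write volume is an assumed figure for illustration only.
    drives_tbw = {
        "4TB QLC drive": 800,          # rated 800 TB written
        "Kingston KC3000 4TB": 3200,   # rated 3.2 PB written
    }

    daily_writes_gb = 80  # assumed workload: 80 GB written per day

    for name, tbw in drives_tbw.items():
        years = (tbw * 1000) / daily_writes_gb / 365
        print(f"{name}: rated {tbw} TBW, ~{years:.0f} years at {daily_writes_gb} GB/day")

    Either rating outlasts a typical laptop at that workload, but the TLC drive has roughly four times the write headroom.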
  2. Agree
    Snoz reacted to Ejdesgaard in Our data is GONE... Again   
    Hello,
     
    Interesting video...
     
    You clearly need a storage solution that can be treated like cattle, not house-pets... Here is my suggestion for such a solution.
     
    ZFS might be that solution for a pure scale-up (staying in the same chassis) approach, but when you start to scale out, which is clearly the case here, you need to consider a storage solution that, in your case, is designed for both scale-out AND self-healing.
     
    It's my understanding, from the video, that you have 2 old servers, 2 newer servers, and plan to buy a few more new servers.
     
    At this scale, it starts to make sense to look into a scale-out-oriented storage solution named Ceph: https://ceph.io/en/
     
    Why Ceph?
    Glad you asked. It's a software-defined storage (SDS) solution, designed with these main goals in mind:
    • Data integrity
    • Self-healing
    • Scalability (many chassis, 1 cluster)
    Here's a promo video from back in 2017 that gives a quick intro to what Ceph is.
     
    What does Ceph run on?
    Ceph, being the truly software-defined storage solution it is, can run on any x86_64 hardware you decide to buy or have around (it can also run on newer 64-bit ARM).
     
    What does it cost?
    It's open source: https://github.com/ceph/ceph
    You can spend a bit of $ on a subscription from one of the companies that offer enterprise support for it, such as Red Hat, 45 Drives, and others.
     
    I hope you will consider looking into this before you make a final decision on your next step.
     
    My suggestion:
    From the info that was provided in the video, I would do something along the lines of:
    • Have a closer look at Ceph.
    • Assuming it's decided to go with Ceph: have a look at, at least, 3 new chassis, big enough to hold the content of "New Vault", preferably in a replica3 setup, or, if the pros/cons for erasure coding come out in EC's favour, set up the new pool on the cluster in a 4+2* EC configuration, with 2 chunks on each node (this must be changed to 1 chunk on each node when all servers are in the cluster). A rough capacity comparison is sketched after this list.
    • Verify hardware integrity of the "New Vault" chassis, followed by joining them to the new Ceph cluster. SMR drives are a no-go.
    • Move "Old Vault" data to the Ceph cluster.
    • Verify hardware integrity of the "Old Vault" chassis, followed by joining them to the new Ceph cluster. SMR drives are a no-go.
    • Buy another nice screen for the office that shows the live status of the cluster via Grafana (included in the Ceph deployment). Grafana dashboard JSONs for manual import: https://github.com/ceph/ceph/tree/master/monitoring/grafana/dashboards
    • Ensure that you always have enough free space to allow for 1 node failure.
    * 4+2 means that the data will be chopped up into 4 chunks, then 2 additional chunks will be calculated (similar to RAID6), and those 4+2=6 chunks will then be stored on the cluster in accordance with the failure-domain configuration (2 chunks per node to begin with; when all data is moved over and we have >= 4+2+1 nodes in the cluster, the failure-domain configuration can be changed to 1 chunk per node).
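    As a minimal sketch of the capacity trade-off mentioned above (the node count and per-node capacity below are assumed placeholder values, not the actual hardware from the video), this compares usable space under replica3 versus a 4+2 EC layout while reserving room for 1 node failure:

    # Rough usable-capacity comparison: replica3 vs 4+2 erasure coding.
    # Node count and per-node raw capacity are assumed values for illustration.
    nodes = 6                # assumed number of chassis in the cluster
    raw_per_node_tb = 180.0  # assumed raw capacity per node, in TB

    raw_total = nodes * raw_per_node_tb

    # Keep enough free space to re-balance after losing 1 node.
    usable_raw = raw_total - raw_per_node_tb

    replica3_usable = usable_raw / 3       # 3 full copies of every object
    k, m = 4, 2                            # 4 data chunks + 2 coding chunks (RAID6-like)
    ec_usable = usable_raw * k / (k + m)   # 1.5x overhead instead of 3x

    print(f"Raw capacity:      {raw_total:.0f} TB")
    print(f"Usable, replica3:  {replica3_usable:.0f} TB")
    print(f"Usable, 4+2 EC:    {ec_usable:.0f} TB")

    With the same hardware, 4+2 EC yields roughly twice the usable space of replica3, which is why the pros/cons comparison above is worth doing before the pool layout is decided.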
  3. Like
    Snoz got a reaction from X-System in Post your Cinebench R20+15+R11.5+2003 Scores **Don't Read The OP PLZ**   
    Running Cinebench via remote desktop to the new server before it gets reinstalled. The lower score highlighted was from the same machine the other day.
     
     
