alpha754293

Member

  1. alpha754293

    Need Help w/ Workstation Build

    What software(s) are you planning on using for CAD and CFD? What types of models and/or problems are you planning on studying/tackling? (I am asking because it will have an impact on the hardware.) To be fair though, I started learning CAD and CFD on an old eMachines laptop (AMD Athlon 64 3200+, 1.25 GB of RAM). But for the problems that I'm working on now, my cluster has 512 GB of RAM and 64 cores. So...if you have an idea of what you're looking to run, it will help.
  2. alpha754293

    HomeLab/HomeDatacentre and PowerUsers of the Forum

    For those who might be interested, here are my current testing results with GlusterFS. For those who might be following the saga, here's an update: I was unable to mount tmpfs using pNFS. Other people (here and elsewhere) suggested that I use GlusterFS, so I've deployed that and am testing it now.

    On my compute nodes, I created a 64 GB RAM drive on each node:

    # mount -t tmpfs -o size=64g tmpfs /bricks/brick1

    and edited my /etc/fstab likewise. *edit* -- I ended up removing this line from /etc/fstab, due in part to the bricks being on volatile memory, so I was recreating the brick mount points with each reboot, which helped to clean up the configuration stuff. (e.g. if I deleted a GlusterFS volume and then tried to create another one using the same brick mount points, it wouldn't let me.)

    I then created the mount points for the GlusterFS volume and then created said volume:

    # gluster volume create gv0 transport rdma node{1..4}:/bricks/brick1/gv0

    but that was a no-go when I tried to mount it, so I disabled SELinux (based on the error message that was being written to the log file), deleted the volume, and created it again with:

    # gluster volume create gv0 transport tcp,rdma node{1..4}:/bricks/brick1/gv0

    Started the volume up, and I was able to mount it now with:

    # mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0 /mnt/gv0

    Out of all of the test trials, here's the best result that I've been able to get so far. (The results are VERY sporadic and they're kind of all over the map. I haven't quite figured out why just yet.)

    [root@node1 gv0]# for i in `seq -w 1 4`; do dd if=/dev/zero of=10Gfile$i bs=1024k count=10240; done
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 5.47401 s, 2.0 GB/s
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 5.64206 s, 1.9 GB/s
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 5.70306 s, 1.9 GB/s
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 5.56882 s, 1.9 GB/s

    Interestingly enough, when I try to do the same thing on /dev/shm, I only max out at around 2.8 GB/s. So at best right now, with GlusterFS, I'm able to get about 16 Gbps throughput on four 64 GB RAM drives (for a total of 256 GB split across four nodes). Note that this IS with a distributed volume for the time being.

    Here are the results with the dispersed volume:

    [root@node1 gv1]# for i in `seq -w 1 4`; do dd if=/dev/zero of=10Gfile$i bs=1024k count=10240; done
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 19.7886 s, 543 MB/s
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 20.9642 s, 512 MB/s
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 20.6107 s, 521 MB/s
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 21.7163 s, 494 MB/s

    It's quite a lot slower.
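    (For anyone wanting to reproduce the dispersed-volume test, something along these lines should create it over the same bricks -- the gv1 name and the disperse/redundancy counts are just examples, and gluster may warn that a 3-data + 1-redundancy layout isn't considered optimal:)

    # gluster volume create gv1 disperse 4 redundancy 1 transport tcp,rdma node{1..4}:/bricks/brick1/gv1
    # gluster volume start gv1
    # mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv1 /mnt/gv1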
  3. alpha754293

    If I have four SATA 6 Gbps SSDs, what's faster?

    Thank you. So....GlusterFS (even over RDMA) isn't faster than XFS exported over NFS over RDMA? Interesting. Thank you for your feedback. (I was playing around with it/testing it with virtualised hosts, without the RDMA, so my results were inconclusive. I'm going to try it again tonight on my compute nodes with RDMA to see if the RAM drives would be able to work faster. The thing that I might have stumbled across though is that it looks like GlusterFS doesn't allow me to create a distributed dispersed volume across the RAM drives (albeit understandably so), which effectively and functionally reduces the test down to how fast GlusterFS can write a file to RAM vs. the local system.)

    # for i in `seq -w 1 4`; do dd if=/dev/zero of=10Gfile$i bs=1024k count=10240; done
  4. If I have four SATA 6 Gbps SSDs, what's faster:

    Option 1: Put all four SSDs into a striped zpool using ZFS on Linux/CentOS on a head node, and export that ZFS pool over NFS over RDMA?

    Option 2: Put all four SSDs into a striped RAID array using XFS on a head node, and export that RAID array over NFS over RDMA?

    Option 3: Put one SSD into each of my compute nodes (four nodes in total), format it as either ZFS or XFS (or ext4), and then use GlusterFS over RDMA?

    I am looking for maximum usable capacity and speed. Thank you.
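    (For reference, my rough sketch of what option 2 would look like on the head node -- the device names and export path below are just placeholders, and RAID0 has no redundancy, so this would purely be a scratch/speed play:)

    # mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # mkfs.xfs /dev/md0
    # mkdir -p /export/scratch
    # mount /dev/md0 /export/scratch
    # echo "/export/scratch *(rw,async,no_root_squash)" >> /etc/exports
    # exportfs -ra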
  5. alpha754293

    HomeLab/HomeDatacentre and PowerUsers of the Forum

    Sort of. The idea currently stems from this: I've got four nodes and I want to allocate half of the RAM on each node to a RAM drive (tmpfs), and then use that as the source of the space (the "data server") that will then serve the four nodes themselves as the clients. In other words, each node right now has 128 GB of RAM. If I only allocate half of the RAM to each local node, then each node will only get 64 GB. But if I can pool them together using GlusterFS, then all four nodes would be able to see and address a total of 256 GB (combined), which is more than any single node can address/provide. I'm not sure if that really means "hyperconverged", because I thought that converged meant something different.
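    (What I have in mind per node, roughly -- the mount points match what I used in my GlusterFS testing, and once a pure distributed volume spanning the four bricks is mounted, it should report the sum of the bricks, i.e. roughly 256 GB:)

    # mkdir -p /bricks/brick1
    # mount -t tmpfs -o size=64g tmpfs /bricks/brick1
    # df -h /mnt/gv0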
  6. alpha754293

    HomeLab/HomeDatacentre and PowerUsers of the Forum

    For Gluster, can the data servers also be the clients or is the assumed model that the data servers are separate from the clients? Thanks. (I was trying to play with pNFS over the weekend and I couldn't figure out how to make the data servers and the clients export to the same mount point.)
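    (Ideally, what I'm hoping would work is that each node is both a brick server and a client, i.e. each node simply mounts the volume from itself, something like:)

    # mount -t glusterfs localhost:/gv0 /mnt/gv0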
  7. alpha754293

    "Server room" build

    Cool. Let me know whether that works for you, and if there are any other questions you have that I might be able to help you with. Thanks.
  8. alpha754293

    HomeLab/HomeDatacentre and PowerUsers of the Forum

    Yeah, I was reading about the difference between the two, and at least one source that I found online said that GlusterFS is better for large, sequential transfers whereas Ceph works better for lots of smaller files or more random transfers. Yeah, I think that I've mentioned Lustre in my other thread as well. Still trying to decide whether I want to have a parallel/distributed RAM drive vs. enterprise SSDs, or whether I want to just get the SSDs and have a new head node that pretty much only does that (and presents the enterprise SSDs as a single RAID0 volume to the network as "standard/vanilla" NFSoRDMA).
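    (If I go the vanilla NFSoRDMA route, my understanding is that the server side just needs the RDMA transport added to nfsd once the NFS server is running, and the clients mount with the rdma option -- 20049 is the conventional NFS/RDMA port, and the export path below is a placeholder:)

    # modprobe svcrdma
    # echo "rdma 20049" > /proc/fs/nfsd/portlist

    and on each client:

    # modprobe xprtrdma
    # mount -t nfs -o rdma,port=20049 headnode:/export/scratch /mnt/scratch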
  9. alpha754293

    Server case ~30bay+

    Is there a specific reason why you're looking for a top loading chassis? (They make horizontally mounted drive chassis that can take that many drives now without significantly increasing the height of the rack mount chassis. Some of the newer chassis install the drives both at the front and rear of the chassis whereas before, it would have been only through the front of the chassis.)
  10. alpha754293

    Server case ~30bay+

    You might want to look at something like this: https://www.ebay.com/itm/Supermicro-4U-48-Bay-FREENAS-Storage-Server-2x-Xeon-Low-Power-L5640-6-Core-32GB/142950808625?hash=item2148883c31:g:hQcAAOSwheFasUjg For the price, it's pretty good.
  11. alpha754293

    "Server room" build

    It might be able to save you quite a bit of headache. You might end up spending more than $30 worth of your time trying to get into the switch using other means/methods vs. just spending the $30 and being done with it. It's too bad that you can't just "RENT" a cable like that, because if your switch is working (i.e. unlike my original managed Mellanox switch, which DID have an issue), then you only really need it once to do the initial setup, and everything else after that you can probably manage with just ssh.
  12. alpha754293

    HomeLab/HomeDatacentre and PowerUsers of the Forum

    Interesting. Thanks. Yeah, I'm trying to decide on what I want to do ever since I wore through the write endurance of all of my Intel SSDs. I'm trying to decide if I want to switch over to data center/enterprise grade SSDs (which support 3 DWPD) or if I want to create a tmpfs on each of my compute nodes and then export it to a parallel/distributed file system like GlusterFS or Ceph or pNFS (although the pNFS server isn't supported in CentOS 7.6.1810), so I'm not sure what's better in my usage scenario.

    The upside with tmpfs (a RAM drive) is that I won't have the write endurance issue that I am currently facing with SSDs (even enterprise grade SSDs). The downside with tmpfs is that it's volatile memory, which means that there is a potential risk of data loss, even with high availability (especially if the nodes are physically connected to the same power supply/source).

    On the other hand, using the new usage data that I have from the newly worn SSDs, IF my usage pattern persists, then I might actually be able to get away with replacing the SSDs with enterprise grade SSDs, and that will be sufficient over the life of the system. Not really sure yet, also because the enterprise grade SSDs are larger capacity, and therefore I might be inclined to use them more, especially if I DO end up deploying either GlusterFS or Ceph. Alternatively, all of the enterprise grade SSDs will go to a new head node for the cluster, and it will just be a "conventional" NFSoRDMA export, which will simplify things for me. (It also might have the fringe benefit that, if it is a new head node, I might be able to take advantage of NVMe.) Decisions, decisions, decisions (especially when, again, I'm trying to get the best bang for the buck, and working with a VERY limited budget).
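    (For a rough sense of what 3 DWPD buys: using a hypothetical 960 GB drive and a 5-year warranty window, the back-of-the-envelope rated-write budget works out like this:)

    # echo "960 * 3 * 365 * 5" | bc
    5256000

    (i.e. about 5,256,000 GB, or roughly 5.3 PB, of rated writes over the warranty period.)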
  13. alpha754293

    HomeLab/HomeDatacentre and PowerUsers of the Forum

    The other small news is that in the last month or two or so, I've managed to kill all of my Intel SSDs by burning through the write endurance limit on ALL of the drives. So now I'm looking to see what I can do about it, as all of the consumer grade drives are being pulled from the micro cluster. The latest round of SSD deaths occurred in a little over two years of ownership (out of a 5 year warranty), and based on the power-on hours data from SMART, it's actually even sooner than that -- about 1.65 years. So yeah...that happened. Anybody here ever played with gluster/pNFS/Ganesha before?
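    (For anyone who wants to check their own drives, the power-on hours and wear figures come from SMART; something along these lines works, where /dev/sda is whichever drive you're checking and the exact attribute names vary by vendor:)

    # smartctl -A /dev/sda | egrep -i "Power_On_Hours|Media_Wearout|Wear_Leveling|Total_LBAs_Written"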
  14. alpha754293

    Crazy NAS/data/file server build

    I've just had my second through fifth consumer grade SSD failures, so as a result, all SSDs, especially consumer grade ones, have been eliminated from my production environment compute cluster. (Four more drives just failed today - they wore through the endurance limit again, and this time I managed to accomplish that feat in a little over two years.) Enterprise grade SSDs are also ruled out due to the cost and the need for something that has a SUBSTANTIALLY higher write endurance, which again comes in at a price point that is not financially feasible for me. This project is now officially declared DOA.