Nick7

Member · Content Count: 186

Everything posted by Nick7

  1. Did you verify /proc/sys/dev/raid/speed_limit_max? It limits rebuild speed. Try setting it higher:
     # echo 200000 > /proc/sys/dev/raid/speed_limit_max
     You can also try the following:
     # echo 4096 > /sys/block/md0/md/stripe_cache_size
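     A small sketch of checking and raising both rebuild limits via sysctl (values are only examples, in KB/s, and are not persistent across reboots unless added to /etc/sysctl.conf):
     # sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
     # sysctl -w dev.raid.speed_limit_min=50000
     # sysctl -w dev.raid.speed_limit_max=200000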
  2. For pure speed, ZFS is not the winner. It has many useful and good features, but being the fastest isn't one of them. PS: As far as MTU goes, in my tests it did increase throughput quite a bit.
  3. Set the MTU to 9000; this will give you some extra speed.
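     A quick sketch for trying this on Linux, assuming the interface is eth0 (the switch and the other endpoint must support jumbo frames too, and the change does not survive a reboot):
     # ip link set dev eth0 mtu 9000
     # ip link show eth0 | grep mtu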
  4. Err.... the server mentioned is NOT a rack server! On HPE's site it says: 'HPE ProLiant MicroServer Gen8 is a small, quiet, and stylishly designed server'.
  5. TBH, I'd get a used HP DL360 Gen8 and just load it with RAM... or a Dell/IBM/other server with an LGA 2011 socket and DDR3 RDIMM support.
  6. If you need lots of RAM, go with a server motherboard that uses a Xeon and registered RAM. RDIMMs are quite cheap on eBay, since you can only use them in servers.
  7. Ah, cool... that actually does work, tested it. I was always working with RAIDZ, and for that it did not work (attaching a mirror vdev). export/import also works for switching to UUIDs, but it's important to actually do it. ashift=12 is for 4K disks. Often it's detected by default, but on some occasions detection can fail - so it's safer to specify it while creating the pool. Good luck, and be careful not to overwrite the wrong disks!
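     A rough sketch of both points, assuming a pool named 'tank' (the pool name and device paths are placeholders):
     # zpool export tank
     # zpool import -d /dev/disk/by-id tank
     and, when creating a new pool, forcing 4K alignment explicitly:
     # zpool create -o ashift=12 tank mirror /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2>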
  8. There are several things you are doing wrong. First - you cannot modify disks/vdevs inside a zpool. What you need to do is create a zfs pool with mirrored devices. To achieve this in zfs you create a mirror between the disk and a sparse file, and then fail that file - so later you can replace it with the real device. Second, it's always recommended to use partitions. Third, never use /dev/sdX names in a zfs pool! Use the partition UUID. From the top of my head, you should do something like: 1) Create 2TB and 1TB sparse files: # dd if=/dev/zero of=/tmp/sparse_file1 bs=1 count=0 seek=2T
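     A rough sketch of the whole procedure, assuming a pool name of 'tank' and placeholder partition UUIDs (double-check every device name before running anything - zpool may also ask for -f when mixing a real device and a file in one mirror):
     # dd if=/dev/zero of=/tmp/sparse_file1 bs=1 count=0 seek=2T
     # zpool create -o ashift=12 tank mirror /dev/disk/by-partuuid/<real-2TB-partition> /tmp/sparse_file1
     # zpool offline tank /tmp/sparse_file1
     Later, once the real second disk is in place:
     # zpool replace tank /tmp/sparse_file1 /dev/disk/by-partuuid/<new-2TB-partition>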
  9. You can do that also by using FTP or SFTP. No need to use a web interface. But there are also web-based solutions for this; just google a bit for it.
  10. .. and what do you plan to use the 'web interface' for? Which functionality do you need?
  11. Check: https://forum.odroid.com/viewtopic.php?f=168&t=40598 Basically, on anything but the latest kernels you need to build the driver yourself. There are also other guides. No need to buy a new motherboard or another network card.
  12. Did you check the disk metrics during the I/O, to see what they show?
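     One simple way to watch that, assuming the sysstat package is installed (shows per-device throughput, queue size and latency, refreshed every second):
     # iostat -x 1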
  13. Windows also does not write immediately to disk, unless required by the app (sync write). But there's one thing about PrimoCache that is quite dangerous - deferred writes can write data out of order, and in case of a crash it *will* lead to corruption, possibly silent (the worst kind) - chkdsk passes, but the data is corrupt. So, PrimoCache is nice for speeding up reads, but quite dangerous for writes. On Linux, bcache is a much better solution. It is more configurable and can handle crash situations, so no data is lost.
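     A minimal bcache sketch on Linux, assuming /dev/sdb is the backing HDD and /dev/sdc is the caching SSD (both get wiped - the device names are just placeholders):
     # make-bcache -B /dev/sdb -C /dev/sdc
     # mkfs.ext4 /dev/bcache0
     # echo writeback > /sys/block/bcache0/bcache/cache_mode
     The default cache_mode is writethrough; writeback is the mode where the deferred-write behaviour discussed above comes into play.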
  14. Silly question, but why do you need so much SSD-only capacity on a gaming PC?
  15. You didn't mention one important thing - what capacity do you intend to have?
  16. I don't understand what you'd gain this way? You still have a single point of failure - the SAS RAID card. SAS drives support 2 paths. The proper way is either to have two SAS RAID cards which support HA, or to use two cards in IT mode and let the host handle multipathing.
  17. That does not mean much. Is there constant I/O there? Or does it just sporadically check some config file? It's a huge difference whether you have just some small file/config being accessed or really high I/O. You could have 100 PCs connected and they'd do less I/O than one PC doing video editing or something similar. Nope, that's a problem with the PC. If it were on the NAS, it wouldn't matter whether there are snapshots or not. Is the Qnap properly configured? Qnap is quite configurable, and other solutions are not really much better in that regard, if at all.
  18. If there's no identify LED but there is an activity LED for the drives, you can log in via SSH and use 'dd' from the drive to /dev/null and check which disk has constant I/O activity. If that is not an option, the next step is to trace cables by port. If that is not an option, check the serial numbers of the drives, power off the system, unplug the drives and check the S/N on them (yes, not a nice way.. but if no other option is available..).
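     A sketch of the dd trick, assuming the drive you want to locate is /dev/sdb (it only reads, so it's harmless, but confirm the device name first with e.g. lsblk):
     # dd if=/dev/sdb of=/dev/null bs=1M
     Let it run, watch which activity LED stays lit, then Ctrl+C once you've found the drive.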
  19. Agree. Tried CacheCade, and it's not really that great. Of the software caching solutions, I liked bcache under Linux the most. For Windows, PrimoCache is decent, although it has some disadvantages of its own - but generally it's quite decent software.
  20. A SAS connector can be connected to both SAS and SATA drives. A SATA connector only connects to a SATA drive. It's the notches, and also SAS controllers support SATA drives, while it doesn't work the other way round. You can get a SAS breakout cable from 1 mini-SAS to 4 SAS connectors, which will have a place on them to connect power, or you can get a 1 mini-SAS to 4 SATA cable, but in that case you can connect that cable only to SATA drives and not to SAS.
  21. What does 'simultaneously 5 PCs' mean in your case? How much I/O are we talking about? Is it constant I/O from all 5 PCs? In case of power loss on a PC, a snapshot will not help you. A snapshot is just the 'state of the disk at time XYZ'. It does help in case you erase something by mistake or similar. It does not help with power loss situations. Can you specify a bit more what the issue with the Qnap is? Missing functionality? Performance? Something else?
  22. You can even use drives that are not the same size, but the RAID size (logical volume) will be the size of the smaller one. Same goes for using different RPM, etc... However, performance/size will always be that of the smaller/slower drive.
  23. Increasing the upload size for PHP in Docker is a bit more complicated. Even if you edit it inside the container, it's lost after updating the image. Check: https://github.com/docker-library/wordpress/issues/10 Look for the 'uploads.ini' solution.
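     The gist of that solution is to mount your own ini file into the container rather than editing the one inside it. A sketch assuming the official wordpress image (the limits are just example values):
     # cat uploads.ini
     upload_max_filesize = 256M
     post_max_size = 256M
     Then mount it in docker-compose.yml under the wordpress service:
     volumes:
       - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini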
  24. Big paperweight? It's old. Old CPU, old technology, uses more power, loud (1U server).... not much use for it, IMO.