
Nick7

Member
  • Posts

    228
  • Joined

  • Last visited

Reputation Activity

  1. Agree
    Nick7 got a reaction from Needfuldoer in 40Gbps NAS/Switch what OS should I use and is my CPU enough.   
    I'm sorry... but you are talking about 40Gbit/s, which is a LOT. Hardly anyone needs such speeds. Nice to brag about, yes. Needing it? Well... different story.
     
    Feeding 40Gbit/s from drives, any drives (including NVMe), is a lot.
  2. Agree
    Nick7 got a reaction from LIGISTX in 40Gbps NAS/Switch what OS should I use and is my CPU enough.   
    I'm sorry... but you are talking about 40Gbit/s, which is a LOT. Hardly anyone needs such speeds. Nice to brag about, yes. Needing it? Well... different story.
     
    Feeding 40Gbit/s from drives, any drives (including NVMe), is a lot.
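    To put some very rough numbers on it, here is a quick back-of-the-envelope sketch in Python; the per-drive speeds are typical sequential figures I am assuming for illustration, not measurements:

```python
# How much raw drive bandwidth does it take to fill a 40Gbit/s link?
LINK_GBIT = 40
link_gbytes = LINK_GBIT / 8          # ~5 GB/s of payload, ignoring protocol overhead

# Assumed best-case sequential read speeds in GB/s (example figures)
drives = {
    "7200rpm HDD (~0.25 GB/s)": 0.25,
    "SATA SSD (~0.55 GB/s)": 0.55,
    "NVMe SSD (~3.5 GB/s)": 3.5,
}

for name, speed in drives.items():
    print(f"{name}: ~{link_gbytes / speed:.1f} drives reading flat-out to saturate the link")
```

    And that is pure sequential reads with zero RAID, filesystem or network overhead; random I/O makes it far worse.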
  3. Agree
    Nick7 got a reaction from Chriexpe in Server upgrade, is the Xeon E5 2666v3 + x99 motherboard still a good kit?   
    A newer-gen 'non-server' CPU will blow away that config for less $$$, with fewer cores and much less power draw.
    Basically, the only reason to use the X99 platform these days is if you need lots of PCIe lanes and lots of PCIe slots for I/O.
     
  4. Informative
    Nick7 got a reaction from OhioYJ in SFP+ Cables?   
    Depends on what equipment you are connecting.
    Not every device supports every SFP module, so check compatibility on both ends.
  5. Informative
    Nick7 got a reaction from Blue4130 in Poor RAID-5 performance.   
    RAID5 with Storage Spaces is known for horrid write performance.
    However, there is light at the end of the tunnel!
     
    Check this page: https://wasteofserver.com/storage-spaces-with-parity-very-slow-writes-solved/
     
    It explains how to set up RAID5 in Storage Spaces properly, and with that you can get decent write performance.
    I have tested this with a 5-drive setup, and it was almost as fast as a SAS RAID controller in RAID5.
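    For reference, the gist of that fix as I understand it from the linked page: make one full parity stripe of data line up with the NTFS allocation unit size, so every write is a full-stripe write instead of a read-modify-write. A small sanity-check sketch; the column, interleave and AUS numbers are just example values, not a prescription:

```python
# Parity Storage Spaces: one full stripe = (columns - 1) data chunks of 'interleave' bytes
# plus one parity chunk. Writes stay fast when the filesystem allocation unit equals
# exactly one full stripe of data.
KIB = 1024

columns = 5               # e.g. a 5-drive parity space (example value)
interleave = 16 * KIB     # bytes written per column per stripe (example value)
ntfs_aus = 64 * KIB       # NTFS allocation unit size chosen when formatting (example value)

data_per_stripe = (columns - 1) * interleave
print(f"Data per full stripe: {data_per_stripe // KIB} KiB, NTFS AUS: {ntfs_aus // KIB} KiB")

if data_per_stripe == ntfs_aus:
    print("Aligned: each allocation unit maps to one full stripe -> decent write speed")
else:
    print("Misaligned: partial-stripe writes -> the usual horrid parity performance")
```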
  6. Agree
    Nick7 got a reaction from Needfuldoer in How long to learn how to run and maintain a windows server?   
    In 3 days you may learn how to click through some things, perform some tasks, and google for solutions.
    Will that make you an adequate sysadmin for that role? No.
     
    The well-known Dunning-Kruger effect:

     
    After 3 days you are just climbing the 'peak of Mt. Stupid'.
  7. Agree
    Nick7 got a reaction from jde3 in Bit rot on mirrored folders over LAN   
    While bit rot is possible, it's quite uncommon. There was quite a hype about it many years ago, and it's still misquoted as if it happens often.
     
    ZFS is good because it has checksum checking, and for home use it's one of the best options.
     
    In the enterprise segment (where LMG should be), you should use proper equipment. While jerry-rigged systems can work well, they require attention, which is what Linus forgot to give. That was the reason for the issues he faced: not bit rot, but simply HDDs dying. For each HDD it's not a question of *if* it will die, but *when* it will die.
     
    On the enterprise side of things, this is the reason 520-byte sector formatting is often used on HDDs: to store a CRC alongside the data, so it can be easily verified. There's also T10 PI, etc.
     
    IMO, Linus should use a *proper* storage system for critical data, or have people who spend more time on the current systems.
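    Just to illustrate the idea of per-sector protection info (purely a conceptual sketch; real T10 PI uses a specific 8-byte field with a CRC16, this only shows the principle):

```python
import zlib

SECTOR_DATA = 512   # user data per sector
PI_BYTES = 8        # extra bytes per sector in a 520-byte format

def write_sector(data: bytes) -> bytes:
    """Store 512 bytes of data plus a checksum, like a 520-byte formatted sector."""
    assert len(data) == SECTOR_DATA
    crc = zlib.crc32(data).to_bytes(4, "big")
    return data + crc + b"\x00" * (PI_BYTES - 4)   # pad to 8 bytes of 'protection info'

def read_sector(sector: bytes) -> bytes:
    """Verify the stored checksum before trusting the data."""
    data, pi = sector[:SECTOR_DATA], sector[SECTOR_DATA:]
    if zlib.crc32(data).to_bytes(4, "big") != pi[:4]:
        raise IOError("checksum mismatch: sector corrupt, recover from redundancy")
    return data

sector = write_sector(b"A" * SECTOR_DATA)
assert read_sector(sector) == b"A" * SECTOR_DATA

damaged = bytearray(sector)
damaged[100] ^= 0x01               # simulate a silently flipped bit
try:
    read_sector(bytes(damaged))
except IOError as err:
    print("Detected:", err)
```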
  8. Like
    Nick7 got a reaction from Electronics Wizardy in Looking for a 3-node cluster setup, custom build, please review my choices.   
    In the vast majority of cases on virtualization platforms, RAM is the constraint, not the CPU.
    So it's quite often better to invest more in RAM than to buy extra servers (extra CPU power).
  9. Agree
    Nick7 got a reaction from NumLock21 in How to setup JBOD   
    I'd suggest using each drive as a separate volume.
    With JBOD, if any drive dies, you lose ALL data.
  10. Like
    Nick7 got a reaction from Levent in Would upgrading SAS HDDs to SATA SSDs for MSSQL help in this case?   
    First, you need to verify it's an I/O bottleneck.
    What is the response time from the drives? What I/O numbers do you get (and how many spindles)?
    How big is your data set? Is it possible it's already mostly in the database buffer, so it's not even read from the HDDs during queries?
    Do you have multiple concurrent queries with locking on the DB? That can also cause longer times.
    As leadeater said, did you check that you are not missing any indexes?
     
    Once you are certain it's an I/O bottleneck and there is no other way to reduce I/O, then SSDs will help.
  11. Agree
    Nick7 got a reaction from Haraikomono in NAS with web interface setup   
    You can also do that using FTP or SFTP. No need for a web interface.
    But there are web-based solutions too; just google a bit for them.
  12. Like
    Nick7 got a reaction from Pietro95 in Building a Proxmox Virtualization Server   
    The main difference is price.
    Read Intensive drives are usually the equivalent of consumer 'TLC' drives, and can endure fewer writes.
    The next tier up can be written more (3 drive writes per day vs 1), and will be more expensive.
    For the majority of users, RI drives are good enough. They are used in low/mid/high-end storage systems too.
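    The DWPD rating is really just endurance arithmetic; a quick sketch with example capacity and warranty numbers (assumptions for illustration, not the specs of any particular drive):

```python
# Total data you may write over the warranty: capacity * DWPD * warranty days.
capacity_tb = 1.92        # example drive capacity in TB
warranty_years = 5        # example warranty length

for label, dwpd in [("Read Intensive (1 DWPD)", 1), ("Mixed Use (3 DWPD)", 3)]:
    tbw = capacity_tb * dwpd * warranty_years * 365
    print(f"{label}: ~{tbw:,.0f} TB written over {warranty_years} years")
```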
  13. Like
    Nick7 got a reaction from Pietro95 in Building a Proxmox Virtualization Server   
    Few things:
    1) A 24-core CPU with 128GB RAM is overkill in 90% of cases. In the vast majority of cases, RAM is the issue, not the CPU. You only need 24 cores with just 128GB RAM if you really are running CPU-intensive tasks simultaneously.
    2) @NelizMastr 'Yes, but RAID1 means half the write performance and it's not scalable. You can't expand a RAID1.' -> this is false. RAID1 has about the same write performance as a single drive, and about 2x the read performance.
    3) Proxmox works with HW RAID. Since this config has only 2 HDDs, you should use the onboard RAID controller for RAID1 (mirror) on those drives. Install the OS and the VMs on them. The SSD should be used for caching and the few VMs that require fast disk access. Keep in mind that the SSD has no redundancy at all.
    4) If you do need HDDs (for price reasons), SAS drives, especially 10k RPM ones, are decent. SSDs are of course faster, but 10k SAS drives are quite fast; way faster than the usual 7200 RPM SATA.
     
    Overall, if you ask me, I'd just go with a CPU of up to 12 cores and use the extra cash for more drives (preferably SSDs), possibly even more RAM, depending on what you intend to run.
  14. Like
    Nick7 got a reaction from Electronics Wizardy in Building a Proxmox Virtualization Server   
    Few things:
    1) A 24-core CPU with 128GB RAM is overkill in 90% of cases. In the vast majority of cases, RAM is the issue, not the CPU. You only need 24 cores with just 128GB RAM if you really are running CPU-intensive tasks simultaneously.
    2) @NelizMastr 'Yes, but RAID1 means half the write performance and it's not scalable. You can't expand a RAID1.' -> this is false. RAID1 has about the same write performance as a single drive, and about 2x the read performance.
    3) Proxmox works with HW RAID. Since this config has only 2 HDDs, you should use the onboard RAID controller for RAID1 (mirror) on those drives. Install the OS and the VMs on them. The SSD should be used for caching and the few VMs that require fast disk access. Keep in mind that the SSD has no redundancy at all.
    4) If you do need HDDs (for price reasons), SAS drives, especially 10k RPM ones, are decent. SSDs are of course faster, but 10k SAS drives are quite fast; way faster than the usual 7200 RPM SATA.
     
    Overall, if you ask me, I'd just go with a CPU of up to 12 cores and use the extra cash for more drives (preferably SSDs), possibly even more RAM, depending on what you intend to run.
  15. Agree
    Nick7 got a reaction from leadeater in UnRaid - All in 1 Storage solution - Raid 10 vs zfs   
    I did not say ZFS is *slow*, but it is slower than other, non-COW filesystems.
    If you have a low drive count, like 3-4 drives, and you want max performance, ZFS is a poor choice.
    ZFS definitely has its reasons and its place (I use it in my own NAS with 3 drives in RAIDZ1), but performance is not its strong suit.
    ZFS does get 'better' with lots of RAM, when a lot of what you need is in the ARC, but the same can be said for other filesystems when what you use is in the RAM cache.
    Don't get me wrong, I love ZFS, and really prefer it myself. But if performance is 1st, and reliability/features 2nd, ZFS is not the way to go.
  16. Like
    Nick7 got a reaction from Lord Mirdalan in RAID controller question   
    There are quite a few misconceptions about ZFS and 'bare metal' access to drives.
    ZFS does NOT have the ability to read things like SMART or similar, so bare-metal access is NOT required, nor does it have any advantage.
    Using a RAID controller in front of ZFS is not a problem at all, nor does it increase the chance of catastrophic failures, as long as you do not make major mistakes.
     
    ZFS has an advantage over standard RAID controllers due to checksum checking in case there is a fault in the RAID. With checksums it can detect which drive still has proper working data if there's an error on a disk. RAID controllers may silently pass on read errors (like the so-often-quoted bit rot issue).
    In enterprise storage systems this is again combated with checksums, which is one of the reasons some drives are formatted with 520-byte sectors, or even larger.
     
    Back to ZFS: having 'bare metal' access will not make the setup more reliable or safer compared to a ZFS RAID where you create one logical vdisk on each physical disk.
    What you do want to avoid is creating RAID0/1/5/6 on the RAID controller itself. You want to pass each drive as-is (one vdev over one whole drive) to ZFS, so ZFS does the checksumming on those disks instead of the RAID controller.
     
    Also, you *can* use write-back cache on the RAID card in this case, as it *will* improve overall ZFS performance, but you must have a battery too. The same applies to other filesystems.
     
    Again, even if you do make some mistakes, ZFS's scrub system is quite good and can detect any 'silent' errors too.
     
    Bottom line: it's easiest to use 'bare metal' disks. Using disks through a RAID controller is fine too. Using RAID0/1/5/6 on the RAID controller and passing that to ZFS is not recommended; yes, it will work, but again, WHY make your life harder?
     
    This issue is often exaggerated, same as the 'required ECC memory' and the so-often-quoted '1GB RAM per 1TB of HDD'.
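    A tiny sketch of the checksum point (purely conceptual; this is not how ZFS actually stores its checksum tree): with a checksum kept per block, a mirror read can tell which copy is still good, which a plain RAID1 without checksums cannot do.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

block = b"important data"
expected = checksum(block)        # checksum stored separately from the data itself

# Two mirror copies; one has silently gone bad.
copies = {"disk A": b"important data", "disk B": b"important dXta"}

for disk, data in copies.items():
    ok = checksum(data) == expected
    print(f"{disk}: {'OK' if ok else 'corrupt -> repair from the other copy'}")
```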
     
  17. Like
    Nick7 got a reaction from gdb123456 in Gaming from a NAS storage   
    It's possible.
    But you need iSCSI, and each client needs its own 'virtual disk'.
    What would help you is ZFS with deduplication, as most of the data would be deduplicated (rough numbers below).
    You do want 10G for the NAS itself, while 1G is enough for the clients, though more would be preferable.
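    Rough arithmetic on why dedup matters in that setup (all the sizes are made-up examples):

```python
# If every client's virtual disk holds largely the same game library,
# dedup stores the shared blocks once instead of once per client.
clients = 6
library_tb = 2.0     # shared game install data per client (example)
unique_tb = 0.1      # saves, configs, OS differences per client (example)

without_dedup = clients * (library_tb + unique_tb)
with_dedup = library_tb + clients * unique_tb
print(f"Without dedup: ~{without_dedup:.1f} TB, with dedup: ~{with_dedup:.1f} TB")
```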
  18. Agree
    Nick7 got a reaction from Trikoo in Gaming from a NAS storage   
    It's possible.
    But you need iSCSI, and each client needs its own 'virtual disk'.
    What would help you is ZFS with deduplication, as most of the data would be deduplicated.
    You do want 10G for the NAS itself, while 1G is enough for the clients, though more would be preferable.
  19. Agree
    Nick7 got a reaction from flo_306 in Raid setup question for 8tb drives   
    Do *not* use Windows RAID5 (in Storage Spaces).
    Performance for writes is abysmal.
    In this case, even motherboard RAID5 is better.
    But given the price of 8TB drives, it would be better to get a proper RAID card from eBay.
    Performance-wise, and in every other way, that's the best solution for you.
  20. Agree
    Nick7 got a reaction from LIGISTX in RAID controller question   
    There are quite a few misconceptions about ZFS and 'bare metal' access to drives.
    ZFS does NOT have the ability to read things like SMART or similar, so bare-metal access is NOT required, nor does it have any advantage.
    Using a RAID controller in front of ZFS is not a problem at all, nor does it increase the chance of catastrophic failures, as long as you do not make major mistakes.
     
    ZFS has an advantage over standard RAID controllers due to checksum checking in case there is a fault in the RAID. With checksums it can detect which drive still has proper working data if there's an error on a disk. RAID controllers may silently pass on read errors (like the so-often-quoted bit rot issue).
    In enterprise storage systems this is again combated with checksums, which is one of the reasons some drives are formatted with 520-byte sectors, or even larger.
     
    Back to ZFS: having 'bare metal' access will not make the setup more reliable or safer compared to a ZFS RAID where you create one logical vdisk on each physical disk.
    What you do want to avoid is creating RAID0/1/5/6 on the RAID controller itself. You want to pass each drive as-is (one vdev over one whole drive) to ZFS, so ZFS does the checksumming on those disks instead of the RAID controller.
     
    Also, you *can* use write-back cache on the RAID card in this case, as it *will* improve overall ZFS performance, but you must have a battery too. The same applies to other filesystems.
     
    Again, even if you do make some mistakes, ZFS's scrub system is quite good and can detect any 'silent' errors too.
     
    Bottom line: it's easiest to use 'bare metal' disks. Using disks through a RAID controller is fine too. Using RAID0/1/5/6 on the RAID controller and passing that to ZFS is not recommended; yes, it will work, but again, WHY make your life harder?
     
    This issue is often exaggerated, same as the 'required ECC memory' and the so-often-quoted '1GB RAM per 1TB of HDD'.
     
  21. Informative
    Nick7 got a reaction from Lord Mirdalan in RAID controller question   
    That's not really true.
    ZFS and Unraid can use drives behind a proper RAID card. But:
    1) there's no need for that, as both use their own redundancy
    2) it's better that Unraid/ZFS use their own checks to fix any possible errors
     
    Basically, there's no benefit at all to using a proper RAID card with ZFS/Unraid.
    So make your life simpler and get an HBA for that.
     
    From what I gather, you just need SATA ports?
    Why not get a simple SATA PCIe card instead?
  22. Agree
    Nick7 reacted to Maticks in What Server for Me   
    So that's what you were talking about from the homelab Discord. That is some serious hate.
     
    It is not an enterprise disk solution, and it is not a RAID solution where performance stacks with the size of your array.
    Unraid is always limited to single-disk performance, because you read and write to your data drive and a copy of that goes to your parity drives (simple description).
    Folders of data can be spread across drives, but the drives are individual XFS-formatted disks.
     
    As for the comment about Dockers and VMs crashing: there was a bug once in 2018 where an Unraid release had Dockers and VMs hanging; it was patched and fixed within 24 hours. It was actually caused by the Slackware Linux kernel when they jumped a full kernel revision.
    Since then, Unraid has Stable and Next branches in the update system. Next is used for testing Unraid releases, and once they go RC they get pushed to Stable, so everyone gets them mostly bug-free. I only run Stable.
     
    The other thing that causes VMs to hang and Dockers to crash is a full filesystem; that's a Unix thing, not an Unraid or Linux thing. Even on Macs, which are Unix-based, if you fill up your hard drive you have a hell of a time trying to boot it to clear the drive.
    The default folder for Dockers is the cache pool; it's the answer to the performance that an array of spinning drives doesn't have. If you need fast VMs and Docker apps, you'd run them on the cache pool. Most people only throw in a 256GB SSD, which doesn't take much to fill.
     
    Two parity drives isn't a new thing for Unraid either; it's been there since 2014. The processor overhead is minimal, about 150MHz when you're doing a rebuild.
    The CPU overhead for ZFS is there all the time, and it's on par with the Unraid overhead, which only applies while rebuilding; in normal operation it's hardly noticeable.
     
    The VM implementation on Unraid isn't the best; it's not like ESXi. I would say it's more than enough if you want to pass through hardware, etc.
    If you want advanced functions like snapshot mirroring, they're not there.
    That is the only thing in that whole Discord post of his I kind of agree with: the web UI for VMs is garbage.
     
    The Docker implementation is really good and the app community has a lot of content in it.
    I've tried a few Docker managers on Linux and other solutions; Unraid's is simple and easy to update and roll back on. The web UI for their Docker system is very easy to use.
     
    The point of Unraid is that you take your old gaming system and make it your NAS; you throw different-sized drives in it and get some redundancy, but you don't get speed.
    If you are doing Plex streaming and running VMs for home testing to gather knowledge, it's perfect. And if you take those disks out, shove them into another computer, unplug the USB drive with the boot data on it and throw it into another system, it will just work.
     
    If you are running a business and you're video editing in Premiere off a shared drive, it's just not going to have the performance.
     
    I am no newbie; I've been a sysadmin for 3 ISPs over 12 years, and I'm a network engineer nowadays.
    I could argue about what's better than Unraid (there's plenty that is), but there is flexibility with Unraid you don't get from other solutions.
    The guy who wrote that piece on Discord has no idea what he is talking about; there are pieces of information in there that have some truth, but the rest of it is baseless rubbish.
     
    I do have a Dell R720 server, and because it's a server I run RAID6 on its hardware RAID controller and have ESXi installed.
    I have Unraid running as a VM on top of ESXi, because I find Dockers are easier to run in Unraid.
    Dell makes their own free ESXi image which includes the drivers for the OS, so screaming fans aren't an issue.
    With HP, I know you need to install drivers in Windows to get the fans to go quiet; HP might have an ESXi image as well.
     
    I think really at the end of the day you can try a few solutions and see what works for you.
    If, say, I really didn't like Apple and started writing rants of part-truths, it would be no different to this.
     
    As a comparison, I could say console gaming isn't real gaming, 40FPS is rubbish, and PCs at 100+ FPS are the only true gaming systems; 40FPS is not playable, it's garbage.
    But consoles still have 500 million people who think they are fine, and for them 40FPS is plenty fast enough.
     
  23. Like
    Nick7 got a reaction from Trdamus in What is FASTER Bios raid 0 or windows raid 0?   
    For pure speed, about the same.
    For simplicity, I'd go with BIOS RAID.
  24. Like
    Nick7 got a reaction from Hold-Ma-Beer in Backing up using RAID? Please help!   
    First, having your backup on the same single computer is a bad idea. If you get a cryptolocker, you will lose all your data AND your backup.
    RAID helps with redundancy, be it in the main computer or the backup box: it helps if one hard drive fails. It does not help with any other type of failure.
     
    As for RAID: you can use a 1TB and a 500GB drive in RAID1 (mirror), but you will only use 500GB of the larger disk and the rest of the space will be unusable. Also, performance will most likely be lower, as I suspect the 500GB drive is slower than your desktop one.
     
    Suggestion?
    Get a cheap NAS and use that for backup, plus a third option (a USB drive you connect occasionally, or some cloud storage) for the family pics.
    It's really easy to lose pics you value otherwise.
  25. Like
    Nick7 got a reaction from FloRolf in RAID options   
    I have to disagree.
    RAID6 is the safest 'standard' RAID type. Safer than RAID10, where with bad luck a double disk failure means game over.
    While RAID6 rebuild times are longer, it's still safer than RAID10.
    Performance-wise, reads are excellent on RAID6, but writes suffer.
    Nonetheless, on any new storage system RAID6 is nowadays considered the default option, for a reason.
    There's also the capacity question, where RAID6 again wins (see the quick comparison below).
    RAID10 is basically only used if you are on HDDs and require decent write speed.
     
    For home use and a storage-type NAS, look no further than RAID6.
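    A quick usable-capacity comparison for the same drive count (simple arithmetic, ignoring filesystem overhead):

```python
# Usable capacity: RAID6 gives up two drives to parity, RAID10 gives up half to mirroring.
drive_tb = 8
for n in (4, 6, 8):
    raid6 = (n - 2) * drive_tb
    raid10 = (n // 2) * drive_tb
    print(f"{n} x {drive_tb}TB drives: RAID6 = {raid6}TB usable, RAID10 = {raid10}TB usable")
```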