
Edelrat

Member
  • Posts

    371
  • Joined

  • Last visited

Reputation Activity

  1. Like
    Edelrat got a reaction from KENZY9 in 165hz vs 144hz for FPS gaming   
    Hello everyone,
     
    I want to upgrade my monitor. I am looking at two monitors:
     
    Asus PG278Q
    Asus PG279QR
     
    Yes, I know, they are (mostly) the same monitor. The one feature I am really not sure about is 165 Hz.
    I play FPS games at a highly competitive level, so I am not sure whether I can benefit from the extra 21 Hz. I have already done some research, but there isn't really a clear answer: some say there is no difference at all, others say there is a big one.
    My plan is to buy the monitor used (so I can save some money), but the QR is still 100-200€ more expensive.
     
    What is your opinion on this, guys?
     
    Thanks in advance!
  2. Like
    Edelrat got a reaction from 3pwood in First time (onboard) RAID setup help required.   
    You only need 3 drives for RAID 5; I don't know where the 4-drive claim comes from. Maybe people confuse it with RAID 10 or RAID 6.
    Regarding your SATA-port question: check your manual for which SATA port shares bandwidth with the M.2 slot and don't use that one.
    Just make sure to plug all of the drives into the same controller (preferably the Intel one), and that the controller is set to RAID mode in the BIOS.
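    As a back-of-the-envelope check on the drive counts, here is a minimal sketch of the usable-capacity math; the drive count and size below are hypothetical example values:

```shell
#!/bin/sh
# RAID 5 stores one drive's worth of parity, which is why 3 drives is the
# minimum; RAID 6 stores two, so it needs at least 4. Values are examples.
n=3          # number of drives (hypothetical)
size_tb=4    # capacity per drive in TB (hypothetical)
raw=$(( n * size_tb ))
raid5=$(( (n - 1) * size_tb ))
echo "RAID 5 with ${n} x ${size_tb} TB drives: ${raid5} TB usable of ${raw} TB raw"
# prints: RAID 5 with 3 x 4 TB drives: 8 TB usable of 12 TB raw
```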
  3. Like
    Edelrat got a reaction from leadeater in Windows Server 2016 and FC disk shelf   
    @leadeater
    With the other HBA, the drives didn't show up at all. I will probably just use FreeNAS as my OS and pass the drives through to a VM; that will probably be the easiest thing to do.
    Thanks for your help anyway!
  4. Like
    Edelrat got a reaction from leadeater in Windows Server 2016 and FC disk shelf   
    My best guess is initiator mode too. I doubt it's anything EMC-specific, since I currently have the same model of shelf working (even with the same HBA, but with FreeNAS). I really doubt it's the HDDs either, because at the moment they are just random drives I had lying around, and most of them never saw an EMC controller. I will try another HBA next (with the one I am about to try, I was at least able to use an iSCSI LUN from FreeNAS over FC without any problems under Windows), so maybe that will work.
    I will let you know what my results are.
  5. Agree
    Edelrat got a reaction from Pudgestool in Hard Disk connectivity   
    The HBAs Linus uses in his new Storinators are HighPoint Rocket 750s (if I remember correctly); he has two in each new Storinator.
    Each port on this card supports 4 SAS or SATA drives (there are either backplanes which do the "splitting", or you can get cables that split into 4 SATA connectors).
    In general, those cards are either called HBAs (host bus adapters), which basically pass each individual drive through to the OS, or RAID cards, which create RAID arrays over multiple drives and present them to the OS as one big drive.
  6. Informative
    Edelrat got a reaction from PorkishPig in Laptop to Desktop File Sharing   
    Transferring files between a PC connected via Ethernet and a notebook connected via Wi-Fi is possible, yes.
    I think your issue with pings is the Windows firewall. Try disabling it on both clients and see if pinging works then.
    I don't have an English version of Windows available at the moment, so I can't look up the specific rule (at least not in English).
    You can find it pretty easily by sorting the incoming rules by protocol. There should be one or two rules that mention "Echo Request" and the ICMPv4 protocol (I assume you don't use IPv6?).
    The firewall you are talking about here is the one between LAN and WAN; it has nothing to do with clients inside the LAN communicating with each other. I would highly recommend keeping that one enabled.
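    For reference, on English versions of Windows the built-in rule is usually named as below; a hedged sketch (run from an elevated command prompt, and note the exact rule name can differ between Windows versions):

```shell
:: Enable the inbound ICMPv4 echo-request rule so pings get through.
:: Rule name as it appears on an English Windows 10 install.
netsh advfirewall firewall set rule name="File and Printer Sharing (Echo Request - ICMPv4-In)" new enable=yes
```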
  7. Agree
    Edelrat reacted to leadeater in Raid Noob questions.   
    No, RAID 5 & 6 are actually faster; if you have an LSI hardware RAID card, this has been true for a long time. RAID 10 also scales badly with the number of disks: in a 24-HDD server it'll be a lot slower for anything other than very heavy random writes. Also, in a 24-HDD server RAID 10 is pushing the bounds of long-term reliability concerns; only one mirror set needs to fail and the array is lost.
     
    It's simply an active usable spindle count issue: RAID 10 gets worse as the number of disks increases, and it tips over fairly quickly in favor of parity RAID.
     
    RAID 6 has faster sequential reads and writes, very similar read IOPS, and somewhat lower write IOPS, while also being able to add more disks to an existing array. RAID 10 cannot be expanded and is only better at write IOPS (and not always), so it shouldn't be the default choice and should only be used if you need it.
     
    If you see really bad parity performance, you either don't have the BBU installed or it has failed; without it, parity performance is bad.
     
    Hardware-based storage arrays from Dell, IBM/Lenovo, NetApp, etc. moved to pure double-parity disk pooling many years ago; incidentally, they all use LSI hardware internally and are just re-badges.
     
    Software RAID is a completely different story.
  8. Like
    Edelrat reacted to leadeater in Raid Noob questions.   
    This, much easier.
     
    Hardware RAID 5 & 6 is faster and safer than 10. RAID 10 is only good with low disk counts or in very write-heavy continuous usage, with the major downsides being 50% usable capacity and the inability to expand the array with more disks. RAID 5 & 6 can be expanded online with more disks and almost always perform better than RAID 10.
     
    RAID 10 was faster back in the '90s and early 2000s, but modern RAID cards have gotten much faster processors to do the parity calculations and a lot more cache to accelerate write performance.
     
     
    If Windows, then Storage Spaces; unless you must have the boot volume as part of a RAID array, in which case hardware is your only choice. In a lot of ways hardware RAID is nicer, and it's what I'm more used to, but I also use Storage Spaces. On a simple desktop, hardware RAID will be faster.
  9. Like
    Edelrat got a reaction from leadeater in Raid Noob questions.   
    Buying a single HDD will be easier and potentially save you some headache; it's up to you to decide whether it's worth it or not.
    Also, I don't think hybrid drives are really worth it; I usually stick to the normal SSD + HDD combo.
     
    Anyway, back to your questions:
    I would use hardware RAID myself, unless you are using ZFS (which won't be the case if you are installing Windows).
    You can install Windows directly onto it, though you may need to load the driver during the install (again, I would recommend an SSD for the OS and some games or other stuff, plus an HDD for everything else).
    I have never done RAID with hybrid drives, but I don't think it will make a huge difference (if any at all).
    For your use case you won't need an extra RAID controller; the onboard Intel controller on your motherboard will do just fine.
     
    Also, if all you want is speed, just use RAID 0; it will save you two drives. If you also want redundancy, use RAID 5. For "normal" home use it should be fast enough.
  10. Agree
    Edelrat got a reaction from GDRRiley in Fibre Channel hard drive   
    The only way I can think of is a Fibre Channel disk shelf.
    One example would be a NetApp DS14MK4.
    You put the HDDs in the shelf, then connect the shelf via an HBA to your PC.
  11. Informative
    Edelrat got a reaction from Flexedbmw80 in 10g NAS through desktop   
    You could pass it through, I think, but I wouldn't do that.
    It is only one more Ethernet cable; why is it a "real pain"?
     
    If your desktop is the only thing that should be connected at 10G, I would do it like this:
     
    - add a 10G card to your NAS and desktop
    - connect those two directly (with SFP+ or RJ45; I would do RJ45 though)
    - connect your NAS via its onboard LAN to the rest of your network, same with your desktop
  12. Like
    Edelrat got a reaction from Brozz in Filer server OS   
    I can recommend FreeNAS too.
    Once you get it set up correctly, it works like a charm.
    When using FreeNAS (or any OS using ZFS, for that matter), make sure that ZFS has direct access to the drives. So either connect each drive directly to your mainboard's SATA ports, or use an HBA. If you were planning on using a RAID controller, flash it to IT mode.
  13. Agree
    Edelrat got a reaction from Brozz in Filer server OS   
    Not really, since ZFS needs direct access to the drives; running it in a VM adds at least one layer between ZFS and the drives.
    That said, FreeNAS can run in a VM, but running it directly on hardware is better.
    You could also install FreeNAS and run VMs on it.
  14. Like
    Edelrat got a reaction from leadeater in ZFS RAID-Z2 + Cache or 2x RAID-Z striped   
    Writes were never good, and replacing the bad disk didn't make any difference.
    Since the 335 Series SSD arrived today, I decided to throw that one in to see if it would change anything.
    And look at that: read speeds using dd are still around 5.5 GB/s, while write speeds skyrocketed to 3.2 GB/s. I only did one test for now; I don't think much will change with more runs.
     
    After that, I decided to test speeds with one of my VMs on ESXi (I didn't use iperf for now). Instead, I moved one VM over to the FreeNAS box and ran CrystalDiskMark on it. Speeds were 410 MB/s read and write, with the 4 Gb FC link limiting the speeds here.
     
    Overall I am super happy with the results now. I have ordered a 160 GB Intel S3500 SSD, since I found one pretty cheap (40€) and it has data-loss protection (Intel's term for integrated capacitors that write the contents of the drive's cache to NVRAM in case of a power loss), so that will be the SLOG SSD for my box.
    Now I just have to wait for the SSD and some PCIe risers to arrive, and I can finally go ahead and properly deploy both of my FreeNAS boxes.
     
    Thank you all for your help and patience!
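    For anyone wanting to reproduce this kind of number, a minimal dd sketch; the path and sizes are example values, and on ZFS you should test against a dataset with compression disabled, or the /dev/zero figures will be meaningless:

```shell
#!/bin/sh
# Sequential write then read of a test file. conv=fdatasync makes dd flush
# to disk before reporting, so the write figure isn't just cache speed.
testfile=/tmp/dd_testfile        # point this at the pool you want to test
dd if=/dev/zero of="$testfile" bs=1M count=256 conv=fdatasync
dd if="$testfile" of=/dev/null bs=1M
rm -f "$testfile"
```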
  15. Like
    Edelrat got a reaction from Mikensan in ZFS RAID-Z2 + Cache or 2x RAID-Z striped   
  16. Like
    Edelrat got a reaction from dalekphalm in ZFS RAID-Z2 + Cache or 2x RAID-Z striped   
  17. Agree
    Edelrat got a reaction from GDRRiley in Filer server OS   
  18. Agree
    Edelrat got a reaction from Oxydoreduction in Filer server OS   
  19. Informative
    Edelrat reacted to dalekphalm in ZFS RAID-Z2 + Cache or 2x RAID-Z striped   
    @MEOOOOOOOOOOOOW
     
    I have two things to add here:
     
    1. JBOD vs IT mode: the reason IT mode is so important for ZFS is that the filesystem's bitrot detection and prevention mechanisms rely on having full SMART access to the drive. In JBOD mode, the card might pass some info through, but it won't give FreeNAS full hardware access to the drive. The fact that you can't see serial numbers is a sign that FreeNAS doesn't have the hardware access it should; plus, if a drive fails, not being able to see the SN can make it a pain in the ass to figure out which physical drive it is. The ada0, ada1, ada2, etc. designators in FreeNAS can change arbitrarily, so they aren't a reliable way to tell which drive is which.
     
    2. RAM: FreeNAS/ZFS needing lots of RAM is actually a myth/urban legend (and, for the most part, an incorrect one). If you are using deduplication, which can be very useful, you will need as much RAM as you can give it (this is where the typical "1 GB of RAM per 1 TB of storage" rule comes from).
     
    If you are not using deduplication or encryption, then you don't need crazy amounts of RAM; usually 8 GB is totally fine.
     
    This has been tested by the current FreeNAS devs.
  20. Like
    Edelrat got a reaction from leadeater in ZFS RAID-Z2 + Cache or 2x RAID-Z striped   
    My VMs will be running on ESXi hosts, and those will be connected to the FreeNAS box with 4 Gb/s FC, meaning the 16 GB of RAM will be for ZFS only.
     
    I already have a second FreeNAS box I am working on; this one will mainly store backups I create of the VMs, and also some games (I have FC running to my main PC too). This one will "only" have 8 GB of RAM, which I was a little concerned about at first, but those concerns have gone away now.
     
    I also just flashed the controller to IT mode; works like a charm. Serial numbers now all show correctly.
    To help me identify drives, I simply wrote their serial numbers into an Excel sheet, arranged the way the drives sit when looking at the chassis. That should get the job done.
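    One way to pull the serial numbers for such a sheet on Linux is lsblk; on FreeNAS/FreeBSD the rough equivalents would be `camcontrol devlist -v` or `smartctl -i /dev/adaX` per drive:

```shell
# List each physical disk with its model and serial number, ready to paste
# into a spreadsheet (-d skips partitions and shows whole disks only).
lsblk -d -o NAME,MODEL,SERIAL,SIZE
```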
  21. Like
    Edelrat got a reaction from dalekphalm in ZFS RAID-Z2 + Cache or 2x RAID-Z striped   
  22. Like
    Edelrat reacted to leadeater in ZFS RAID-Z2 + Cache or 2x RAID-Z striped   
    A bit of yes and a bit of no. It comes down to how many VMs you are going to be running, how much I/O load each one will generate at peak and sustained, and how much RAM FreeNAS has.
     
    It's always something you can add later if required.
  23. Agree
    Edelrat reacted to leadeater in Base ESXi 6.5 Install vs ESXi + vCenter VM?   
    And I still hate the web client compared to the GUI app lol.
  24. Informative
    Edelrat got a reaction from leadeater in Backup-Software for ESXi 6.0 free   
    This is probably what I am going to do if the free version doesn't offer all the things I need.
     
    I don't have the capacity (HDD and RAM) to move all my VMs to one host. I am working on getting one NAS/storage box for both, but at the moment I only have the HDDs in the servers.
  25. Agree
    Edelrat reacted to Mira Yurizaki in Order all at once or part by part   
    If shipping is free, then whatever works for you. If shipping is not free, then you should buy as many as you can at the same time.