
Lubi97

Member
  • Posts

    143
  • Joined

  • Last visited

Reputation Activity

  1. Informative
    Lubi97 reacted to Mikensan in FreeNas/unRAID? Need Plex/VMs/Webserver   
    I went to look it up, since my memory was something along the lines of single parity with an even number of disks causing an issue with the block sizes. I couldn't remember exactly, but I found the link below, which says older ZFS documentation treated it as an issue but that it is no longer the case. So it seems to be outdated knowledge.
    https://doc.freenas.org/9.3/zfsprimer.html#zfs-primer
     
    Also, RaidZ1 isn't the same as Raid5 when calculating space, but it's pretty close. Using http://wintelguy.com/zfs-calc.pl, his raw space is about 58 TiB and drops to 47 TiB after RaidZ1: about 10 TiB lost versus the assumed 7.2 TiB (one disk) loss. The theory, from what I read originally, is that you take an even greater hit by using an even number of disks in RaidZ1. Using this calculator, 8 disks require 9.15 TiB for parity and padding, whereas Raid5 with 8 disks would lose only 7.2 TiB (one disk); 6 disks require 8.7 TiB, while 5 require 7.25 TiB and 7 require 7.9 TiB.
     
    I've only used this calculator for my two volumes and it's pretty accurate for me, I can't speak on this particular use case however.
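    The parity arithmetic above can be sanity-checked with a rough sketch (assumption: the simple textbook model where Raid5 loses exactly one disk to parity; real RaidZ1 overhead adds padding that varies with pool geometry, which is why the calculator's figures come out higher):

```python
# Rough usable-space model: classic RAID 5 loses exactly one disk's worth
# of capacity to parity, regardless of disk count. RaidZ1 loses at least
# that much, plus padding that depends on disk count and recordsize.
def raid5_usable_tib(num_disks, disk_tib):
    return (num_disks - 1) * disk_tib

raw = 8 * 7.2                       # ~57.6 TiB raw, matching the ~58 TiB above
usable = raid5_usable_tib(8, 7.2)   # RAID 5 keeps ~50.4 TiB
print(raw, usable)                  # the RaidZ1 figure above is only ~47 TiB
```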
  2. Informative
    Lubi97 reacted to Mikensan in FreeNas/unRAID? Need Plex/VMs/Webserver   
    What you will use each volume for dictates which RAID you go with. If you're not going to store VMs on the array, then RaidZ2. If you are going to store VMs on it, then I'd suggest Raid10. However, with SSDs, IOPS aren't much of a concern, so even VMs you could run on RaidZ2 and get more-than-decent speeds. Sounds like you already have the Xeon; I'm surprised you're even doing 20% lol.
     
    It's good you have that motherboard, since according to Google it supports up to 128 GB of RAM; you may want to consider throwing more RAM at it at some point. RAM is usually the first thing you run out of with VMs.
     
    You can get a good HBA card off eBay, like the LSI 9211, LSI 9220, or IBM M1015. You should be able to get them for around $60 US, and lately I've seen most of them pre-flashed to IT mode (meaning no RAID options, just pure passthrough).
  3. Informative
    Lubi97 reacted to leadeater in Wd blacks in raid for a nas   
    Not for FreeNAS; onboard SATA is fine, and if you need more ports an HBA is what you need, not a RAID card. Some RAID cards can be flashed to what's known as IT mode, which is HBA firmware.
  4. Informative
    Lubi97 reacted to scottyseng in LSI MegaRaid controller with 10TB drives?   
    Yep, SAS2 (6Gb/s) works fine with SATA3 (6Gb/s). Yeah, it's very confusing with the terminology. 
  5. Informative
    Lubi97 reacted to leadeater in LSI MegaRaid controller with 10TB drives?   
    For sequential reading and writing RAID 6 is faster because there are more active disk spindles that can be used for I/O operations, RAID 10 you lose half of them. Modern RAID cards such as yours have much faster processors on them so can do parity calculations very quickly and with write-back cache the overall achievable performance is very good, can be much higher than RAID 10.
     
    RAID 10 is still recommended for high random write I/O tasks where latency consistency is important, such as database servers. However, this is becoming less true; for quite a few years now all enterprise disk systems have pooled disks in a RAID 6-like manner.
     
    As for redundancy and reliability of RAID 6 vs RAID 10 in practical real world terms both are effectively as safe as each other. RAID 10 can become less safe than RAID 6 if there is a very large number of disks in the array because the chances of a 2 disk failure in the same mirror happening increases, this usually happens after a disk has failed and you put in a new one and start the rebuild.
     
    The most common time for disks to fail is during a rebuild, as disks get stressed much more than usual for a long period of time. If a disk fails during the rebuild of a RAID 10 array, the whole array is dead only if it's the partner disk in the mirror pair being rebuilt, but that is the most likely disk to fail. RAID 5/6 have their own downside for array rebuilds too, as every disk in the array gets stressed during a rebuild, and if all the disks are old you can get a cascading disk failure and lose the array.
     
    Basically the most risky time for a RAID array is a rebuild after a failed disk has been replaced, rather ironic if you think about why RAID exists in the first place.
     
    For some actual performance differences of RAID 10 vs RAID 6 for the type of use case you have @scottyseng should be able to give you those, he converted over to RAID 6 a while ago.
  6. Funny
    Lubi97 reacted to kjrocker in Motherboards for 12-bay 3U Chassis   
    Hi! Do I have to apologize for necromancy if I'm OP? Not sure.

    After a great many shenanigans involving the CPU cooling, the CPU itself, and the BIOS chip, I've finally been able to both boot unRAID and close the case. Now only the first row of hot swap bays is registering.

    So I bought the following cables:
    2 of these:  http://a.co/7LX74v9
    1 of these: http://a.co/4wqwEpT

    The bottom row (the working one) is using the 4-to-1 cable, the other two rows (the non-working ones), are using the 1-to-1 cables. I suspect this is not a coincidence, but can't tell where I went wrong.
     
  7. Informative
    Lubi97 reacted to leadeater in Motherboards for 12-bay 3U Chassis   
    A SAS connector actually has 4 channels in it, which is why a SAS to SATA cable has 4 SATA connectors on it. Each channel runs at the rated speed of the SAS controller, so if it's a SAS 6Gb controller there are 4x 6Gb channels in the SAS connector, and the 4 SATA connections that come off the conversion cable will all be 6Gb.
     
    So a dual-port 6Gb SAS card has 2 x 4 x 6 = 48Gbps of bandwidth in total, which is the same bandwidth as eight 6Gb SATA ports. Math.
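    That bandwidth math can be written out as a quick sketch (the 4-lanes-per-port and 6Gb-per-lane figures are from the post above):

```python
# Total bandwidth of a SAS card: ports x lanes-per-port x per-lane rate.
# A SAS port (e.g. SFF-8087) carries 4 lanes; SAS2 runs each at 6 Gb/s.
def sas_total_gbps(ports, lanes_per_port=4, gbps_per_lane=6):
    return ports * lanes_per_port * gbps_per_lane

print(sas_total_gbps(2))   # 48 Gbps, same as eight 6Gb SATA ports (8 * 6)
```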
  8. Like
    Lubi97 reacted to Jarsky in Motherboards for 12-bay 3U Chassis   
    Just to clarify, each port is 4 channels, so 2 ports would be 8 channels @ 6Gbps each.
    Just keep in mind that SAS is expandable and can be chained similar to USB as well.
     
    If you have a 2-port controller, you could have a 6-port SAS expander on each: 1 port becomes the uplink/aggregate, and the remaining 5 ports can connect 20 SATA drives. Keep in mind that each set of those 20 drives is sharing a 24Gbps uplink to the controller.
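    The sharing described above works out like this (a minimal sketch using the post's numbers: a 4-lane, 6Gb uplink shared by 20 drives at full contention):

```python
# Each expander's uplink is one 4-lane SAS2 port: 4 x 6 = 24 Gbps.
# If all 20 drives behind it stream simultaneously, each one gets an
# equal share of that uplink.
def per_drive_gbps(drives, uplink_lanes=4, gbps_per_lane=6):
    return uplink_lanes * gbps_per_lane / drives

print(per_drive_gbps(20))   # 1.2 Gbps per drive when all 20 are streaming
```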
  9. Agree
    Lubi97 reacted to Lurick in RAID Card recommendations   
    For me it's been WD Reds. Had 24 running for almost 2.5 years and never had a single issue  
  10. Informative
    Lubi97 reacted to leadeater in Motherboards for 12-bay 3U Chassis   
    That board only has 2 SAS ports, so you'll have to use a reverse breakout to 4 SATA ports on the motherboard for the third backplane connector. Not actually a problem, just letting you know so you don't forget to get that cable along with 2 SFF-8087 to SFF-8643 cables. I prefer actual SAS connections too, since the cables are much nicer than SATA or SATA breakouts; pity that board doesn't have 4 SAS ports.
  11. Informative
    Lubi97 reacted to leadeater in Motherboards for 12-bay 3U Chassis   
    You can use reverse breakout cables to go from SAS to 4 SATA. It needs to be reverse when going from a backplane to motherboard SATA. The backplane will have 3 SAS SFF-8087 ports, so 3 cables will do the job.
  12. Informative
    Lubi97 reacted to Jarsky in Motherboards for 12-bay 3U Chassis   
    The speeds are correct, but the logic is wrong. SAS works on channels: a single SAS port delivers 4 SATA channels regardless of revision.
    You can connect SAS expanders to the controller to gain more ports, but you cannot split channels amongst drives like that.
  13. Funny
    Lubi97 reacted to Nath_davies in Hard Disk almost always at 100% usage in windows 10   
    Find out next time on Dragon Ball Z!!!
  14. Agree
    Lubi97 reacted to PCGuy_5960 in Ryzen 5 1600x Vs Upgrade to i7-7700   
    Get a Ryzen 5 1600 non X with an AsRock or ASUS board, don't get MSI for Ryzen.
  15. Funny
    Lubi97 reacted to RadiatingLight in Ryzen 5 1600x Vs Upgrade to i7-7700   
    Get the 1600 instead of the 1600X; both OC to the same clocks.
     
    Second, if you are gaming, then the 7700 is better, but any streaming gives a huge advantage to the 1600.
    It depends on what you want. I'd personally get the 1600, especially since the AM4 socket will be alive until 2020, so Zen+ and Zen 2 will launch on AM4. Intel, however, will be killing the 1151 socket with covfefe lake.
  16. Informative
    Lubi97 reacted to hl2master in RAID Card recommendations   
    HBA means host bus adapter. As far as I know, cards advertised as HBA cards don't have any RAID functionality, unless you're going to use software RAID through your OS (I don't recommend it, especially in Windows, since carrying over a RAID array to another hardware combo can be troublesome). If I'm correct, software RAID is what @LinusTech used to use on the Whonnock server before the NVMe SSD upgrade. Whonnock used to have three separate hardware RAID 5 arrays on three separate RAID cards, and those three arrays were striped (so RAID 0) in Windows using software RAID, meaning if any one of the three RAID 5 arrays dropped out, data was lost.
     
    Also, for my personal machine, I run eight 120 GB SSDs in RAID 0 on an LSI 9260-8i. It's actually very similar to what @LinusTech used to run in Personal Rig Update 2012, but with SATA III drives (it doesn't matter anyway, since the 9260-8i is only capable of SATA II).
     
    As for the 9207-8i, it doesn't seem to support RAID at all, since it's just an HBA card (check this link out: https://www.broadcom.com/products/storage/host-bus-adapters/sas-9207-8i#specifications). Getting something like a used 9260-8i on eBay for a bit less than $100 seems like the best option (look for ones with battery backups included, since they can save your butt in striped RAID levels with Always Read Ahead enabled).
     
    As for RAID levels, RAID 1 would be fine if you don't mind the lack of speed improvements and need the utmost redundancy (RAID 1 can lose all but one drive without any data loss). RAID 5 can be a good option if you want both fault tolerance (up to one drive) and read speed improvements (parity levels like RAID 5 and 6 can impact write speeds, since the parity calculation requires more work from the RAID controller). Also consider RAID 6 if you want two-drive fault tolerance at the expense of slower speeds.
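    The trade-offs above can be summarised in a small sketch (standard textbook capacity figures for n identical disks; vendor implementations may differ slightly):

```python
# Usable capacity fraction and drive-failure tolerance for the RAID
# levels discussed above, given n identical disks.
def raid_summary(level, n):
    if level == "RAID1":    # n-way mirror: survives n-1 failures
        return {"usable": 1 / n, "tolerates": n - 1}
    if level == "RAID5":    # one disk of parity
        return {"usable": (n - 1) / n, "tolerates": 1}
    if level == "RAID6":    # two disks of parity
        return {"usable": (n - 2) / n, "tolerates": 2}
    raise ValueError(f"unknown level: {level}")

for level in ("RAID1", "RAID5", "RAID6"):
    print(level, raid_summary(level, 8))
```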
     
    Also, don't forget to buy cables. Most used RAID cards on eBay don't include any. Look for Mini-SAS (also called SFF-8087) to SATA cables on Amazon or somewhere else.
  17. Informative
    Lubi97 got a reaction from vadumis in RAID Card recommendations   
    Not having any idea about the RAID cards, there is a little flaw in your statement:
     
    In RAID 10 it is not guaranteed that your data is still intact when 2 drives fail. The moment 1 drive fails, you are at risk of losing data if the wrong drive fails next (a 33% chance with 4 drives).
     
    Explanation: In RAID 10 you stripe (RAID 0) across mirrored pairs (RAID 1).
    So essentially in RAID 0 you would have:
    A1 and B1
    If one fails, all is lost.
    In RAID 10 you have:
    A1 and B1 - and - A2 and B2
    If now A1 fails, there is no problem, because you still have a copy -> A2.
    But if after that A2 fails, B1 and B2 will not be able to help you in any way.
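    The 33% figure works out like this (a minimal sketch, assuming the second failure hits a random surviving disk of a 4-disk RAID 10):

```python
from fractions import Fraction

# After one disk dies in RAID 10, the array is lost only if the NEXT
# failure hits that disk's mirror partner: 1 of the surviving disks.
def raid10_second_failure_risk(total_disks):
    return Fraction(1, total_disks - 1)

print(raid10_second_failure_risk(4))   # 1/3, i.e. the ~33% chance above
```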
     
     
    For just a storage system I would very much consider something like RAID 6 (where 2 different drives can fail), as rebuilds of a big RAID array may take days, during which all the drives are active and the risk of another one failing is considerably higher. Also, since you seem to have a rather big budget (talking about Threadripper and the like), you might want to consider FreeNAS, as it is both fast and reliable. It's just a tad more expensive to implement, since ZFS (the filesystem used) relies on ECC memory. The RAID 6 equivalent in FreeNAS is called RAIDZ2.
    I am currently going for a RAIDZ2 with 4x 6TB and will upgrade, if and when I need it (or if I get a good deal) to 7x 6TB with RAIDZ3 (so 3 drives can fail).
     
    Don't get me wrong, RAID 10 is great; I have a RAID 10 array of four 3TB WD Reds in my main rig as well. It delivers faster performance than RAID 5 and is way easier to rebuild, since the only thing happening during a rebuild is copying the data from A2 to the new A1. But as a storage system for very important data, I would not use it.
     
    Again, since you don't seem to have a budget problem, I would also very much encourage you to get a separate storage server (or NAS) rather than putting all of that in your main rig. (Maybe even better: do both for your most important data.) But above all else: a backup is key to prevent a single point of failure.
     
    All of this was kinda stitched together while watching TV, but I hope I could help you at least a little bit.
     
    BTW: Why do you need a RAID card after all? I wouldn't necessarily recommend a motherboard RAID like Intel's, as they can be very fragile when it comes to BIOS updates. So why not go for software RAID, like the FreeNAS I mentioned, or, if speed is not that important, UnRAID (which I haven't used yet but hear great things about, though I believe it has licensing costs)?
  18. Informative
    Lubi97 reacted to scottyseng in RAID Card recommendations   
    1. Technically FreeNAS is a little more reliable, as it has the ability to check files for corruption (it relies on memory for this, hence the ECC requirement). Hardware RAID is very rock solid; it also does error checking, though not as well as ZFS (FreeNAS) and BTRFS. It's still good, otherwise it wouldn't still be used in the enterprise.
    2. You can use a HBA card (basically a RAID card without the RAID) for FreeNAS or any other software RAID.
    3. These days the CPU is fast enough to tolerate the overhead of RAID.
    4. You can connect non-RAID drives, but only by putting each one in a single-drive RAID 0. You can't just connect a non-RAID drive and run it on hardware RAID; that's what HBAs are for (they are direct pass-through).
     
    However, hardware RAID is good if you have a Windows based system and don't want a separate NAS to run the RAID. Also, hardware RAID is really easy to move between systems and is pretty much plug and play. I've moved my RAID array between my PC and server a few times while migrating data and it did not skip a beat at all.
  19. Like
    Lubi97 got a reaction from scottyseng in RAID Card recommendations   
    I was worried that nobody had answered the questions he had after my post, but I see you handled that very comprehensively! Great post!
     
    Without ECC memory, FreeNAS (and ZFS) cannot deliver the definitive data integrity they advertise; ZFS heavily relies on ECC memory. I don't know if UnRAID does too, but maybe that would be a better option. Somebody else will have to help you with whether your old PC is usable for something like that, as I don't know.
     
  20. Informative
    Lubi97 reacted to NZLaurence in Virus spreading through NAT in virtualbox? Possible?   
    NAT only provides inbound protection.
     
    This means that if you set your VirtualBox to a NAT-type connection, your VM can access the IPs of your PCs, but the PCs cannot get back to the VM.
     
    Please note, the default setting in VirtualBox is bridged.
     
    Also, it would be trivial for the virus to do a traceroute out to the internet and then scan all private subnets between it and the internet, allowing it to infect all your PCs.
     
    If you want to do sandbox testing, either use no internet or use VLANs and ACLs on your router with fully segregated networks. This gets interesting with port mirroring and Wireshark, to see the raw virus traffic.
  21. Informative
    Lubi97 reacted to Electronics Wizardy in Windows RAID 0 vs BIOS RAID 0   
    RAID in Storage Spaces in Windows is much better than mobo RAID.
     
    They both use the cpu, so there isn't a cpu usage advantage there.
     
    Software RAID in Storage Spaces supports checksumming against bitrot, easy expansion, migration across systems, and better caching and compression, and it is more reliable.
  22. Agree
    Lubi97 reacted to Spuriae in Copying reddit   
    The standard forum layout is better for linear conversations; Reddit's system is better for nonlinear discussions. Things like general advice threads are great on Reddit since quality responses will rise to the top (and off-topic discussions can be easily hidden), while things like build logs and guides work better on a forum.
     
    You can definitely build community here that you can't on Reddit, but the signal to noise ratio is going to be worse as well. 
  23. Agree
    Lubi97 reacted to Revan654 in [Build Log] Project Frost - Case-Labs THW10 | X99 Watercooled | i7 6950X | Titan X | Borosilicate Glass Tubing   
    Wow, why so defensive? Time to move on from this discussion.
  24. Informative
    Lubi97 reacted to LegendOfZirkle in Blue Yeti Help (FIXED)   
    I edited my original post; I had to reinstall my audio drivers.
  25. Like
    Lubi97 got a reaction from App4that in GTX 980 Ti Thermal Pads   
    So, what thickness do you think is appropriate?
    https://www.caseking.de/en/search?sSearch=wärmeleitpads
    https://www.caseking.de/en/search?sSearch=thermal+pads
     
    So you are missing the metal part that distributes the heat so it can be transferred away by the fans?
    Haha  Sorry!
    Well, you have cheaper tech... But then again... Healthcare 