Lubi97

Member
  • Posts: 143
  • Joined

  • Last visited

Profile Information

  • Location
    Germany

  1. Sorry, I might have written that a little too fast and confusingly; I just meant the ECC part. Multi-bit errors should be less likely than single-bit flips, right? And isn't something affecting the drives exactly why one would use RAID in the first place? All in all, I just thought OP should read up on that topic.
  2. So does that mean that if you use a 1× SAS to 4× SATA cable, each of those SATA ports only has a bandwidth of 3 Gb/s?
  3. I am sorry, but that is a bit wrong. If you want to run FreeNAS, or more specifically ZFS, you need to use ECC RAM! ZFS relies heavily on ECC RAM. I don't know enough about it to explain why, but it is very important for data integrity. BTW: to use ECC RAM, your CPU as well as your motherboard need to support it; I don't know if that's the case for your CPU. Also, what kind of data do you want to put on there? Is it important? Does the NAS have to be fast? Otherwise you don't really need FreeNAS, as the price overhead is pretty high (considering the ECC RAM and compatible hardware).
  4. Then how can there be 1× SAS to 4× SATA cables? Is that just SATA 2 then?
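For what it's worth, here is a small sketch of the arithmetic behind this question. The lane counts are my understanding of mini-SAS breakout cables (e.g. SFF-8087), not something stated in the thread: the connector carries four independent lanes, and a 1-to-4 breakout cable routes one lane to each SATA drive, so the link is not divided four ways.

```python
# Nominal line rates; lane layout is my understanding of mini-SAS
# breakout cables, not confirmed anywhere in this thread.
SAS2_LANE_GBPS = 6.0    # per lane, SAS 6 Gb/s generation
SATA2_GBPS = 3.0        # SATA II ("SATA 2") line rate
LANES = 4               # independent lanes in one mini-SAS connector

# If one single lane really were shared across all four ports:
shared_guess = SAS2_LANE_GBPS / LANES    # 1.5 Gb/s per port

# With one full lane routed to each drive (the breakout-cable case):
per_drive = SAS2_LANE_GBPS               # 6.0 Gb/s per port

print(shared_guess)   # 1.5
print(per_drive)      # 6.0
```

Either way, each port would not be limited to SATA 2 speeds by the cable itself under this assumption.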
  5. I was worried that nobody had answered the questions he had after my post, but I see you handled that very comprehensively! Great post! Without ECC memory, FreeNAS (and ZFS) cannot deliver the definite data integrity they advertise; ZFS relies heavily on ECC memory. I don't know if UnRAID does too, but maybe that would be a better option. Somebody else should help you with whether your old PC is usable for something like that, as I don't know.
  6. Not having any idea about the RAID cards, there is a little flaw in your statement: in RAID 10 it is not guaranteed that your data is still intact when 2 drives fail. The moment 1 drive fails, you are at risk of losing data if the wrong second drive fails (a 33% chance with 4 drives).
Explanation: in RAID 10 you have 2 mirrored pairs (RAID 1) combined into one striped array (RAID 0). So essentially, in RAID 0 you would have A1 and B1; if one fails, all is lost. In RAID 10 you have A1 and B1, mirrored as A2 and B2. If A1 now fails, there is no problem, because you still have a copy: A2. But if A2 fails after that, B1 and B2 will not be able to help you in any way.
For just a storage system I would very much consider something like RAID 6 (where any 2 drives can fail), as rebuilds of a big RAID array may take days, during which all the drives are active and the risk of another one failing is considerably higher. Also, since you seem to have a rather big budget (talking about Threadripper and the like), you might want to consider FreeNAS, as it is both fast and reliable. It's just a tad more expensive to implement, since ZFS (the filesystem it uses) relies on ECC memory. The RAID 6 equivalent in FreeNAS is called RAIDZ2. I am currently going for RAIDZ2 with 4× 6 TB drives and will upgrade, if and when I need it (or if I get a good deal), to 7× 6 TB with RAIDZ3 (so 3 drives can fail).
Don't get me wrong, RAID 10 is great; I have a RAID 10 array of 4× 3 TB WD Reds in my main rig as well. It delivers faster performance than RAID 5 and is far easier to rebuild, since the only thing happening during a rebuild is copying the data from A2 to the new A1. But as a storage system for very important data, I would not use it.
Again, since you don't seem to have a budget problem, I would also very much encourage you to get a separate storage server (or NAS) rather than putting all of that in your main rig. (Maybe even better: do both for your most important data.)
But above all else: a backup is key, so the array is not a single point of failure. All of this was kinda stitched together while watching TV, but I hope I could help you at least a little bit.
BTW: why do you need a RAID card at all? I wouldn't necessarily recommend a motherboard RAID like Intel's, as they can be very fragile when it comes to BIOS updates. Why not go for software RAID instead, like FreeNAS (as mentioned), or, if speed is not that important, UnRAID (which I haven't used yet, but I hear great things about it; I believe it also has licensing costs)?
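The 33% figure in the post above can be checked with a short sketch. The drive names A1/A2/B1/B2 follow the post's hypothetical 4-drive RAID 10 (two mirrored pairs striped together):

```python
from fractions import Fraction

# Two mirrored pairs (RAID 1) striped together (RAID 10).
pairs = [("A1", "A2"), ("B1", "B2")]
drives = [d for pair in pairs for d in pair]

def survives(failed):
    """Data survives as long as no mirror pair has lost both members."""
    return all(not (a in failed and b in failed) for a, b in pairs)

# After one drive (say A1) fails, which second failure loses data?
first = "A1"
remaining = [d for d in drives if d != first]
fatal = [d for d in remaining if not survives({first, d})]

p_loss = Fraction(len(fatal), len(remaining))
print(fatal)   # ['A2'] -- only the mirror partner of A1 is fatal
print(p_loss)  # 1/3
```

So with 4 drives, exactly 1 of the 3 surviving drives is the dangerous one, which is where the 33% comes from; with more mirrored pairs the odds improve, but a second failure is never guaranteed to be safe.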
  7. @newgeneral10 I don't have a lot of time, and also not a lot of experience with this stuff, but just to make you aware: I read somewhere that you should not virtualize FreeNAS because, as you probably know, it runs ZFS as its filesystem, and if I remember correctly, ZFS has to have total control over the hard drives it is using, so virtual drives might not be the best idea. I would highly recommend you read up on ZFS (or on virtualizing a system that uses ZFS)! See this comment, for example: https://www.reddit.com/r/homelab/comments/3un70f/home_server_virtualization_esxi_or_unraid/d3557dk/
  8. Alright, thanks for the confirmation! (That second part worries me just a little...)
  9. Ah, OK. If they work, I'll make sure. What do you think about trying to POST with the water blocks still attached? If I only go into the BIOS, it really should be OK, right? Wouldn't make sense to get screws and pads if they are fried anyway.
  10. Thanks! Will do that! Ah, that's unfortunate... Did you contact him again?
  11. So, what thickness do you think is appropriate? https://www.caseking.de/en/search?sSearch=wärmeleitpads https://www.caseking.de/en/search?sSearch=thermal+pads So you are missing the metal part that distributes the heat so it can be transferred away by the fans? Haha, sorry! Well, you have cheaper tech... But then again... Healthcare
  12. Ah, thanks, I will look into that. Does anyone else know where I could get thermal pads that fit the reference-design coolers of a GTX 980 Ti?
  13. So, do you know where I could get those pads? (And maybe some screws that are missing, too?)