Nick7

Member
  • Content Count: 85
  • Joined
  • Last visited

About Nick7

  • Title: Member

  1. Beware of enterprise disks for storage systems. Some time ago I had a chance to play a bit with several disk enclosures full of disks that had been used in an HP EVA8400 storage system. I had them connected to an x86 server running Linux, got them working with ZFS, and all was good. But there was one interesting thing: when a disk encountered a read error, it would just send a soft error warning and actually return WRONG data to the OS! I guess the EVA8400 firmware knows how to handle this properly, but the same disk connected to an x86 server behaved like that. Luckily, with RAID6 and ZFS, ZFS itself noticed the checksum errors (yay for checksums!) and corrected them.
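    If you run into something similar, a scrub makes ZFS re-read and verify every block, and with redundancy it repairs whatever fails the checksum. A minimal sketch; the pool name 'tank' is just a placeholder:

        # re-read and verify every block in the pool
        zpool scrub tank
        # the CKSUM column counts corrupted reads that ZFS caught (and fixed)
        zpool status -v tank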
  2. Did you check the status from the OS using hpacucli?
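    Something along these lines; the slot number is just an example, adjust it to your controller:

        # controller/array/logical-drive layout
        hpacucli ctrl all show config
        # per-physical-drive status for the controller in slot 0
        hpacucli ctrl slot=0 pd all show status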
  3. 1st: ditch any disk below 1TB in size. Now you have something to work with. However, Windows isn't as flexible as Linux or other *NIX systems. With the 4x1TB drives you can make a RAID5. Even though they are not the same size or the same RPM, they will work fine for home use. However, I do see one possible issue, and that's the 1TB Seagate laptop drives: it's highly possible they are SMR drives, which are unsuitable for RAID, especially with ZFS. Do check that (one way is sketched below). Last, the 3TB drive: use it as a second copy (backup) of the RAID5 (of 4 drives).
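    One way to check, assuming you can boot the box from a Linux live USB: smartctl prints the exact model number, which you can then look up against the vendor's SMR/CMR lists (the device path is a placeholder):

        # identity info: model, serial, firmware
        smartctl -i /dev/sdb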
  4. Going by your requirements, you need dual boot, not virtualization. It's much simpler, and it'll work much better.
  5. First, your CPU does not support ECC memory. Second, I suppose you've read this whole thread; many people say ECC is almost 'required' if you value your data. Well, ECC is generally a good idea. Yes, it can help in those odd situations where you might get corruption. But honestly, how often does that happen? I've worked at a telco company with about 1000 servers. In my 20 years there, we had just a few situations where RAM sticks went bad, or rather, reported ECC errors. For the vast majority of home users there is no real worry of such an issue happening. It's something like a 1 in 10000 chance you'll be hit by it, and honestly, the probability is much higher that you'll lose data to other causes: primarily user error, then some other HW or SW issue. So again: ECC is a good thing. Is it required so your data doesn't miraculously disappear within a week? No.
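    If you want to verify whether a given box is actually running ECC, one way from Linux (needs root; 'None' means no ECC in use):

        # report the error-correction type of the installed memory
        dmidecode -t memory | grep -i 'error correction'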
  6. Did you check in the RAID utility for the RAID box that everything is working fine on that side? Is the FC AL loop up? Maybe you have set your FC card to fabric mode only (not AL)?
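    On Linux you can also read the HBA's view of the link straight from sysfs; this works for the common FC drivers (qla2xxx, lpfc, ...):

        # link state of each FC port; expect "Online"
        cat /sys/class/fc_host/host*/port_state
        # negotiated topology, e.g. "NPort (fabric via point-to-point)" vs "LPort (private loop)"
        cat /sys/class/fc_host/host*/port_type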
  7. I'm currently using such a setup. Most of my games are over iSCSI on a 1Gbit network. The NAS side is Linux with 3x3TB drives in RAIDZ1 with ZFS, and the iSCSI device is a file on the ZFS filesystem. I did put a bcache layer under ZFS for some SSD speedup on the NAS side, and that helps a lot. Overall it works quite well. It's faster than an HDD but, of course, a bit slower than an SSD in the laptop/computer. I'm quite happy with how it works, though a 10Gbit network would be nice. With a 1Gbit network you'll get up to ~100 MB/s transfer speeds.
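    For reference, a rough sketch of exposing a file on ZFS as an iSCSI LUN with targetcli (Linux LIO); the path, size and IQN are placeholders, and portal/ACL setup is omitted:

        # file-backed backstore living on the ZFS filesystem
        targetcli /backstores/fileio create name=games file_or_dev=/tank/iscsi/games.img size=500G
        # iSCSI target plus a LUN pointing at that backstore
        targetcli /iscsi create iqn.2020-01.net.example:games
        targetcli /iscsi/iqn.2020-01.net.example:games/tpg1/luns create /backstores/fileio/games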
  8. Yes, with mdadm it's doable. However, I'd maybe suggest concatenating the smaller drives first (LINEAR in mdadm), and then RAID0.
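    Roughly like this, with the device names as placeholders (two small drives concatenated, then striped with a larger one):

        # concatenate the two small drives into one linear device
        mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
        # stripe the concatenation with the large drive
        mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/md0 /dev/sdd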
  9. SSD Caching

     For Windows, an easy-to-use option is PrimoCache. You can download a free trial to test it out.
  10. One of the main reasons I stayed on Windows on my main PC is RDP/Terminal Server; i.e., I can work on something, connect remotely and continue, while my wife can at the same time log on to the same PC and do her work. So far I know of no real alternative for this on Linux, but I might be missing something... so, does anyone know of something like this that actually exists, works, and is not a hassle? (Hint: using virtual servers, connecting to them, and the like, even from the local desktop, is not a proper replacement.)
  11. RAID6 >>>> RAID1 (10) for protection. All the storage vendors (IBM, Hitachi, EMC, Fujitsu...) recommend RAID6 on their storage systems for SATA drives, or for drives bigger than 1TB. The performance penalty vs RAID5 is about 10%, but it's alleviated by other means (tiering, cache, SSD cache, or similar). For best performance RAID10 is the king, and that is a fact; but it is NOT the most reliable and safe RAID level. So: if you want the best performance, use RAID10. If you want your data to be safe, use RAID6.
  12. Just a small side-note:
      z1 = 1 parity (aka RAID5)
      z2 = 2 parity (aka RAID6)
      z3 = 3 parity (no standard RAID level has 3 parity)
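      In zpool terms, with the pool name and disk names as placeholders (each shown with its minimum disk count):

          # one, two and three parity drives per vdev, respectively
          zpool create tank raidz1 sda sdb sdc
          zpool create tank raidz2 sda sdb sdc sdd
          zpool create tank raidz3 sda sdb sdc sdd sde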
  13. The main difference is that what you did is tiering with SS, and that's not caching.
      Tiering: all capacity (SSD+HDD) is usable, and data is moved to/from the SSD based on a 'heatmap'. It's a post-process job. Losing the SSD means losing all data from that pool.
      Caching: only the HDD capacity is usable; the SSD is used as a cache, be it read-only or read/write. It's instant. With a read-only cache, losing the SSD causes no data loss; with a read/write cache, the data in the write cache is lost.
  14. FreeNAS will work fine under ESXi. You just won't have the extra error checking per device, and the ability to correct errors if any arise. That happens really rarely, so it's nothing much to worry about. If I were you, I'd put all the available space into one datastore, and give FreeNAS as much space from it as needed.