Eric1024

Member
  • Content Count

    675
  • Joined

  • Last visited

Awards

This user doesn't have any awards

4 Followers

About Eric1024

  • Title
    Storage Fanatic
  • Birthday Oct 28, 1994

Profile Information

  • Location
    San Diego, CA
  • Gender
    Male
  • Interests
    Electrical engineering, CS, IT, digital media, and of course computer hardware
  • Occupation
    Student at Harvey Mudd College

  1. 1. You really don't need a raid card. Software raid is plenty fast assuming you have a reasonably recent CPU. 2. For sequential performance, you can expect up to 2x the read and write speeds of one drive, since the data is striped across two disks at all times (the third stripe holds parity, which doesn't count toward data bandwidth). Given that you only have 3 disks (as opposed to more), scaling should be pretty good; see the throughput sketch after this list. Random performance will be about that of one disk, possibly a little more depending on the IO size. 3. There's nothing wrong with raid in a personal rig. (I did it
  2. TL;DR: waste of money; you wouldn't be able to generate that much disk bandwidth if you wanted to. This is just math (worked through in the sketch after this list). Even if you're using absurdly high quality audio sources, you'll probably never have anything with a bitrate higher than 10Mb/s. That means even if your software were reading 100 audio source files at the same time, it would only be pulling 1000 Mb/s == 1Gb/s == 125MB/s off the disk. Realistically, with 30-40 tracks, still assuming an outrageous bitrate of 10Mb/s, that's 38-50MB/s. Even a sata2 SSD can read at ~250MB/s, which is 5x faster. Also, you're forgetting that
  3. No one has mentioned the reason why it's actually useless to defrag an ssd: you can't. There's a layer of abstraction between the block addresses that the file system sees and the actual location of each block in the flash. The ssd controller keeps a mapping of this translation in order to be able to read the same logical blocks as the physical blocks move (there's a toy example of this mapping after this list). SSDs need this abstraction layer so that they can speed up read-modify-write operations. They redirect the write to a clean page somewhere instead of waiting for the old page to erase, since erasing is slow. In essence, the flash contro
  4. I would highly encourage you to do it more often than that
  5. @alpenwasser, yeah I'm still here. I've been crazy busy with my first year of college, but I'll have more free time over the summer and I'll hopefully be more active on the forums. I'm actually spending my summer doing filesystems research with a professor, so that should be fun. On topic though... I haven't read through this whole thread carefully so excuse me if I miss something, but if all OP wants to do is store a ton of files for personal use (i.e. not under heavy random read/write load all the time) then he shouldn't need a ton of ram. ZFS uses ram for read caching (both data and metadata
  6. Ubuntu server supports ZFS (see the link in my sig about ZFS; there's a part in that tutorial about installing on Ubuntu), and ZFS is native on FreeNAS
  7. Sorry for the confusion about filesystems vs storage arrays. My point was that file systems like ZFS and btrfs perform a lot of the functionality that enterprise NAS/SANs do and they do it a lot better than any raid card that's available to consumers. A lot of people fail to understand that with the huge hard drives we use today, you have to worry about the consistency of the data on the disks just as much as you have to worry about having redundant disks. That's to say that bit rot and hardware failure together are what cause data loss. If you only protect against one or the other you mig
  8. Unless you're using a controller that's capable of scrubbing, then no, filesystem level raid is better for data safety, and the speeds are comparable if not the same in most situations.
  9. Basically all hardware raid controllers currently available to consumers don't have features that the robust FS's I mentioned have.
  10. If you're going to put 14 or 15 consumer drives in raid, PLEASE do not use hardware raid. Use software raid with a beefy FS like XFS, ZFS, or btrfs.
  11. It depends on the NAS and your home network, but best case you're looking at gigabit speeds (limited by the network), which means about 125MB/s. Remote access, though, will be limited by your upload speed to your ISP, which you can find by running a speed test. For access, yes. You can configure the NAS so only one person can access it at a time, but that's generally not the default. And as far as the NAS is concerned, accessing from home and remotely aren't any different. Two people can read from the same file at the same time, but not write. The files can live purely on the NAS.
  12. ZFS will protect your data a lot better than hardware raid, and performance won't be an issue. See my guide in my signature if you want to know more.
  13. Yes but CPUs are so fast these days that for a simple mirror it won't matter at all.
  14. If you can spare the money, I would definitely get a 240GB ssd. They tend to be faster than 120GB drives and the extra headroom is always nice. That being said, if you're only using 50GB of space right now, you'd probably be fine with a 120GB drive. All the drives you mentioned in your original post are good drives; personally I like the 840 EVO, but again they're all good.
  15. Short answer: yes. As the drive fills up, it will get slower. This is mostly evident in write speeds and not reads, but it can affect read speeds to a lesser degree. Long answer: For an SSD to delete or modify a 4k page of data on the drive, it can't just overwrite it. It has to erase the block that the page is in (I believe a block is composed of 64 pages, but I would check that number) and then write the newly modified page, along with all the untouched pages, out to a freshly erased block (there's a small sketch of this cycle after this list). Let's dive into that in a bit more detail: Say you have a word document that's 64KB in size.
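
A rough sketch of the RAID-5 throughput estimate from post 1. The ~150 MB/s per-drive figure is an assumption for illustration, not a measurement; the (n - 1) factor is just the general form of the "about 2x one drive" claim for a 3-disk array.

```python
# Rough estimate of RAID-5 sequential throughput (hypothetical numbers).
# With n disks, each full stripe carries (n - 1) data chunks plus 1 parity
# chunk, so sequential reads/writes see roughly (n - 1) disks' worth of
# data bandwidth.

def raid5_sequential_mb_s(disk_mb_s: float, n_disks: int) -> float:
    """Ideal sequential throughput of an n-disk RAID-5 array in MB/s."""
    assert n_disks >= 3, "RAID-5 needs at least three disks"
    return disk_mb_s * (n_disks - 1)

if __name__ == "__main__":
    single = 150.0  # assumed ~150 MB/s for one consumer HDD
    print(raid5_sequential_mb_s(single, 3))  # 300.0 -> about 2x one drive
```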
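The bandwidth math from post 2, worked through as a tiny script. The 10 Mb/s per-track bitrate and ~250 MB/s SATA2 SSD figure are the same rough assumptions the post uses.

```python
# Back-of-the-envelope math from post 2: even an absurd number of
# high-bitrate audio tracks barely touches a single SSD's bandwidth.

TRACK_BITRATE_MBIT = 10   # generous bitrate per audio source, Mb/s
SATA2_SSD_MB_S = 250      # rough sequential read of a SATA2 SSD, MB/s

def tracks_to_mb_s(n_tracks: int, bitrate_mbit: float = TRACK_BITRATE_MBIT) -> float:
    """Total disk read bandwidth in MB/s for n simultaneous tracks."""
    return n_tracks * bitrate_mbit / 8  # 8 bits per byte

for n in (30, 40, 100):
    need = tracks_to_mb_s(n)
    print(f"{n} tracks -> {need:.0f} MB/s "
          f"({SATA2_SSD_MB_S / need:.1f}x headroom on a SATA2 SSD)")
```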
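A toy illustration of the logical-to-physical mapping described in post 3. This is not how any real controller is implemented; it's only a sketch of the idea that the host's block addresses go through an indirection map that the controller updates on every write, which is why host-side defragging can't control physical placement.

```python
# Toy flash translation layer (FTL) in the spirit of post 3: the host keeps
# writing the same logical block address, but the controller redirects each
# write to a fresh physical page and just updates its map. Purely
# illustrative; real FTLs also track erase blocks, wear leveling, etc.

class ToyFTL:
    def __init__(self, n_pages: int):
        self.free_pages = list(range(n_pages))  # clean physical pages
        self.l2p = {}                           # logical -> physical map
        self.flash = {}                         # physical page -> data

    def write(self, lba: int, data: bytes) -> None:
        page = self.free_pages.pop(0)  # pick a clean page instead of erasing
        self.flash[page] = data
        self.l2p[lba] = page           # old physical page becomes stale

    def read(self, lba: int) -> bytes:
        return self.flash[self.l2p[lba]]

ftl = ToyFTL(n_pages=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")          # same LBA lands on a different physical page
print(ftl.read(0), ftl.l2p)  # b'v2' {0: 1}
```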
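And a small sketch of the read-modify-write cycle from post 15, assuming 64 pages of 4 KB per block (as the post itself hedges, check the real figures for your drive). The point is the write amplification: changing one page means rewriting the whole block.

```python
# Sketch of the read-modify-write cycle from post 15: to change one page,
# the drive copies the untouched pages of the block and writes them all
# back out to a freshly erased block. Page/block sizes are assumptions.

PAGES_PER_BLOCK = 64   # commonly cited figure; check your drive's datasheet
PAGE_SIZE_KB = 4

def modify_page(block: list, page_index: int, new_data: str) -> list:
    """Return the new block written after modifying a single page."""
    cached = list(block)           # 1. read every page of the old block
    cached[page_index] = new_data  # 2. modify the one page we care about
    return cached                  # 3. erase a block (slow), write all pages back

old_block = [f"page{i}" for i in range(PAGES_PER_BLOCK)]
new_block = modify_page(old_block, 3, "edited")
print(f"changed {PAGE_SIZE_KB} KB, rewrote {PAGES_PER_BLOCK * PAGE_SIZE_KB} KB")
```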