
Eric1024

Member
  • Posts

    675
  • Joined

  • Last visited

Awards

This user doesn't have any awards

4 Followers

About Eric1024

  • Birthday Oct 28, 1994

Profile Information

  • Gender
    Male
  • Location
    San Diego, CA
  • Interests
    Electrical engineering, CS, IT, digital media, and of course computer hardware
  • Occupation
    Student at Harvey Mudd College
  • Member title
    Storage Fanatic


  1. 1. You really don't need a raid card. Software raid is plenty fast assuming you have a reasonably recent CPU.
     2. For sequential performance, you can expect up to 2x the read and write speed of a single drive, since the data is striped across two disks at all times (in each stripe, the third stripe unit holds parity, which doesn't count toward data bandwidth). With only 3 disks (as opposed to more), scaling should be pretty good. Random performance will be about that of one disk, possibly a little more depending on the IO size.
     3. There's nothing wrong with raid in a personal rig. (I did it for a while, until I got a NAS/SAN.)
     4. Yes, software raid, for the low low price of $0.
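     To put rough numbers on the scaling claim in point 2, here's a quick back-of-the-envelope sketch in Python (the 150 MB/s per-disk figure is just an assumed example, not a measurement):

         # Rough sequential-throughput estimate for single-parity raid (raid 5).
         # Data is striped across (n_disks - 1) drives; the remaining stripe unit
         # holds parity, so it doesn't add user-data bandwidth.
         def raid5_seq_throughput(n_disks, per_disk_mb_s):
             assert n_disks >= 3, "raid 5 needs at least 3 disks"
             return (n_disks - 1) * per_disk_mb_s

         # Example: 3 disks that each do ~150 MB/s sequentially (assumed figure)
         print(raid5_seq_throughput(3, 150))  # -> 300, i.e. roughly 2x one disk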
  2. TL;DR: waste of money, you wouldn't be able to generate that much disk bandwidth if you wanted to. This is just math. Even if you're using absurdly high quality audio sources, you'll probably never have anything with a bitrate higher than 10 Mb/s. That means even if your software were reading 100 audio source files at the same time, it would only be pulling 1000 Mb/s == 1 Gb/s == 125 MB/s off the disk. Realistically, with 30-40 tracks, still assuming an outrageous bitrate of 10 Mb/s, that's 38-50 MB/s. Even a SATA II SSD can read at ~250 MB/s, which is 5x faster. Also, you're forgetting that the filesystem's cache sits between you and your disk, so frequently used files get read once from disk and subsequent reads come from main memory. There's a good chance that your disk will be doing nothing at any given time.
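     The arithmetic above, written out as a quick sanity check (the bitrates and track counts are the same example values used in the post, not measurements):

         # Sanity check: how much disk bandwidth do N audio tracks actually need?
         def tracks_bandwidth_mb_s(n_tracks, bitrate_mbit_s):
             # Mb/s (megabits) -> MB/s (megabytes): divide by 8
             return n_tracks * bitrate_mbit_s / 8

         print(tracks_bandwidth_mb_s(100, 10))  # 125.0 MB/s, the worst case above
         print(tracks_bandwidth_mb_s(40, 10))   # 50.0 MB/s for 40 tracks
         # Even a SATA II SSD (~250 MB/s sequential reads) has headroom to spare.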
  3. No one has mentioned the reason why it's actually useless to defrag an SSD: you can't. There's a layer of abstraction between the block addresses that the filesystem sees and the actual location of each block in the flash. The SSD controller keeps a mapping of this translation so that it can keep serving the same logical blocks as the physical blocks move. SSDs need this abstraction layer to speed up read-modify-write operations: they redirect the write to a clean page somewhere else instead of waiting for the old page to erase, since erasing is slow. In essence, the flash controller moves the data to a physically different page on the drive, but from the perspective of the filesystem it hasn't moved. Since the filesystem doesn't know where blocks actually are on the disk, it can't re-order them in any useful way, so it can't defrag the disk. This really isn't a problem though, because SSDs don't have seek latency, so it doesn't matter whether blocks are contiguous or not.
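     A toy sketch of that mapping layer (grossly simplified, just to show why the filesystem can never see, let alone rearrange, physical placement; the class and method names are made up for illustration):

         # Toy flash translation layer (FTL): the filesystem addresses logical
         # blocks; the controller is free to move the physical page underneath.
         class ToyFTL:
             def __init__(self):
                 self.l2p = {}        # logical block -> physical page
                 self.next_free = 0   # pretend clean pages are handed out in order

             def write(self, logical_block, data):
                 # Instead of erasing the old page in place, redirect to a clean page.
                 self.l2p[logical_block] = self.next_free
                 self.next_free += 1

             def physical_location(self, logical_block):
                 return self.l2p[logical_block]

         ftl = ToyFTL()
         ftl.write(7, b"old")
         print(ftl.physical_location(7))  # e.g. page 0
         ftl.write(7, b"new")             # rewrite the same logical block
         print(ftl.physical_location(7))  # different physical page, but the
                                          # filesystem still just sees block 7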
  4. I would highly encourage you to do it more often than that
  5. @alpenwasser, yeah I'm still here. I've been crazy busy with my first year of college, but I'll have more free time over the summer and I'll hopefully be more active on the forums. I'm actually spending my summer doing filesystems research with a professor, so that should be fun. On topic though... I haven't read through this whole thread carefully, so excuse me if I miss something, but if all OP wants to do is store a ton of files for personal use (i.e. not under heavy random read/write load all the time), then he shouldn't need a ton of RAM. ZFS uses RAM for read caching (both data and metadata), for amalgamating write operations (transaction groups), and for holding the deduplication table (if you're using dedup). ZFS will scale the size of all these data structures up or down based on the amount of memory you have. Long story short, your RAM needs depend on your use case. If you're just storing lots of media for personal use, then you'd be fine with 4-8GB of RAM. If you're using this to host a dozen VMs with dedup'ed images, then you're going to need a lot more (200GB+). Edit: Also, your data is far safer under ZFS than under most other consumer-available data redundancy solutions because of its hierarchical checksumming.
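     If you want to ballpark the dedup-table side of that yourself, here's a rough sketch. The ~320 bytes per unique block is a commonly quoted rule of thumb rather than anything from this thread, and the block sizes are assumed examples, so treat the output as an order-of-magnitude estimate only:

         # Rough dedup table (DDT) RAM estimate: entries scale with the number of
         # unique blocks in the pool. Figures are rule-of-thumb, not exact.
         def ddt_ram_gib(pool_tib, avg_block_kib=64, bytes_per_entry=320):
             unique_blocks = pool_tib * 1024**3 / avg_block_kib  # pool KiB / block KiB
             return unique_blocks * bytes_per_entry / 1024**3    # bytes -> GiB

         print(round(ddt_ram_gib(10), 1))      # 10 TiB of unique 64 KiB blocks -> ~50 GiB
         print(round(ddt_ram_gib(1, 128), 1))  # 1 TiB of 128 KiB blocks -> ~2.5 GiB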
  6. Ubuntu Server supports ZFS (see the link in my sig about ZFS, there's a part in that tutorial about installing on Ubuntu), and ZFS is native on FreeNAS
  7. Sorry for the confusion about filesystems vs storage arrays. My point was that filesystems like ZFS and btrfs perform a lot of the functionality that enterprise NAS/SANs do, and they do it a lot better than any raid card that's available to consumers. A lot of people fail to understand that with the huge hard drives we use today, you have to worry about the consistency of the data on the disks just as much as you have to worry about having redundant disks. That is to say, bit rot and hardware failure together are what cause data loss; if you only protect against one or the other, you might as well be protecting against neither. Pure hardware raid 6 is still viable, but in a few years that will go the way of raid 5. To OP: even if you do use hardware raid (which I don't recommend), I HIGHLY recommend you use a checksumming filesystem like ZFS / btrfs if you have any kind of important files. Edit: Also, ZFS isn't limited to OpenSolaris. ZFSonLinux is alive, well, and has long been production ready.
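     For anyone wondering what checksumming actually buys you, here's a minimal sketch of the idea (Merkle-style checksums stored in the parent, which is the concept behind ZFS/btrfs bit-rot detection; this is not their actual on-disk format):

         import hashlib

         def checksum(data: bytes) -> str:
             return hashlib.sha256(data).hexdigest()

         # "Write": store blocks and keep each block's checksum in its parent.
         blocks = [b"block-0 data", b"block-1 data"]
         parent = {"child_checksums": [checksum(b) for b in blocks]}

         # Simulate bit rot: one byte silently flips in block 1 on disk.
         blocks[1] = b"block-1 dat\x00"

         # "Read": verify each block against the checksum held by its parent.
         for i, blk in enumerate(blocks):
             ok = checksum(blk) == parent["child_checksums"][i]
             print(f"block {i}: {'ok' if ok else 'MISMATCH - repair from redundancy'}")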
  8. Unless you're using a controller that's capable of scrubbing, then no: filesystem-level raid is better for data safety, and the speeds are comparable if not identical in most situations.
  9. Basically all hardware raid controllers currently available to consumers lack the features that the robust filesystems I mentioned have.
  10. If you're going to put 14 or 15 consumer drives in raid, PLEASE do not use hardware raid. Use software raid with a beefy FS like XFS, ZFS, or btrfs.
  11. It depends on the NAS and your home network, but best case you're looking at gigabit speeds (limited by the network), which means about 125 MB/s on the LAN. Remote access will instead be limited by your upload speed to your ISP, which you can find by running a speed test. For access, yes: you can configure the NAS so only one person can access it at a time, but that's generally not the default, and as far as the NAS is concerned, accessing from home and accessing remotely aren't any different. Two people can read the same file at the same time, but not write to it. The files can live purely on the NAS.
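      The 125 MB/s figure is just the unit conversion for gigabit Ethernet, ignoring protocol overhead:

          # Why a gigabit link tops out around 125 MB/s (before protocol overhead).
          link_mbit_s = 1000        # gigabit Ethernet
          print(link_mbit_s / 8)    # 125.0 MB/s best case on the LAN
          # Remote access is instead capped by your connection's upload speed,
          # which is usually far lower.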
  12. ZFS will protect your data a lot better than hardware raid, and performance won't be an issue. See my guide in my signature if you want to know more.
  13. Yes, but CPUs are so fast these days that for a simple mirror it won't matter at all.
  14. If you can spare the money, I would definitely get a 240GB SSD. They tend to be faster than 120GB drives, and the extra headroom is always nice. That being said, if you're only using 50GB of space right now, you'd probably be fine with a 120GB drive. All the drives you mentioned in your original post are good drives; personally I like the 840 EVO, but again, they're all good.
  15. Short answer: yes. As the drive fills up, it will get slower. This is mostly evident in write speeds rather than reads, but it can affect read speeds to a lesser degree.

      Long answer: For an SSD to delete or modify a 4KB page of data on the drive, it can't just overwrite it. It has to erase the block that the page is in (I believe a block is composed of 64 pages, but I would check that number) and then write out the newly modified page, along with all the untouched pages, to a freshly erased block. Let's dive into that in a bit more detail.

      Say you have a word document that's 64KB in size. Since most NAND flash is divided up into 4KB pages on a physical level, that means your word document takes up 16 x 4KB pages on the drive. Say you open that document, modify a few lines in the middle, and then save it. What's happened is you've modified one of the pages in the middle of the file (not pages in the pieces-of-paper sense, pages in the NAND flash / filesystem sense. Confusing nomenclature, eh?). Say the data you modified happens to land in the 12th page on the NAND flash (remember, our 64KB file takes up 16 x 4KB pages on the NAND flash in your SSD).

      First of all, even though you might have only modified 1KB of data, the entire 4KB page has to be re-written to the drive. This is because NAND flash can't overwrite a page that has data in it; it can only write to a 'clean' (i.e. freshly erased) page. This is just a trade-off that was made in the design of NAND. However, it gets even more complicated, because SSDs can't actually erase individual 4KB pages, they can only erase chunks of 64 pages (i.e. 256KB of data, called a block) at a time. So, when you go to write out 1KB of modified data to your word document, the SSD has to copy the contents of a 256KB block into its memory, erase that 256KB on the NAND, then re-write everything that was in that block, including the mere 1KB of data that you originally modified. However (again), erasing blocks is a slow operation, so after the SSD reads the 256KB block into its memory it writes it to an empty block somewhere else (i.e. a pre-erased block). The problem with this is that now the SSD controller has to go back and erase the block that it originally read from, because the data stored on it is no longer good, i.e. it has been modified and the newest version is stored somewhere else on the drive.

      This means that if an SSD has 20GB of free space from the user's perspective, it may actually have only 500MB of 'clean' blocks that can be immediately written to, and any kind of heavy write or modify activity may require that the drive stop everything it's doing to erase dirty blocks.

      This is a somewhat simplified answer, but hopefully it helps explain why SSDs slow down as they get full. The reason hard drives slow down as they get full is actually completely different, and I can get into that if anyone is curious.

      If you're curious about how SSDs work, start here:
      http://en.wikipedia.org/wiki/Flash_memory
      http://en.wikipedia.org/wiki/Write_amplification
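      Here's a small simulation of that read-modify-write dance, using the same 4KB page / 64-pages-per-block geometry described above (real controllers are far smarter than this; the point is just to count the extra NAND work a tiny edit can trigger):

          import math

          PAGE_KB = 4
          PAGES_PER_BLOCK = 64            # one erase block = 256KB

          def rewrite_cost_kb(modified_kb):
              # Worst case with no clean pages handy: the whole surrounding block
              # gets read into controller RAM, rewritten elsewhere, and the old
              # block is queued for a (slow) erase.
              pages = math.ceil(modified_kb / PAGE_KB)
              blocks = math.ceil(pages / PAGES_PER_BLOCK)
              block_kb = PAGES_PER_BLOCK * PAGE_KB
              nand_traffic = blocks * block_kb * 2    # read + rewrite of each block
              return nand_traffic, blocks             # KB moved, erases queued

          print(rewrite_cost_kb(1))   # (512, 1): a 1KB edit moves 512KB of NAND
                                      # data and leaves one block to erase later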