
Eric1024

Member
  • Posts

    675
  • Joined

  • Last visited

Reputation Activity

  1. Informative
    Eric1024 got a reaction from Master Sonic Wisebeard in Hard Drives for Home Servers   
    Hold up a second. What are you going to be using this home server for? If it's just for media serving and backups then getting enterprise drives is a huge waste of money.
     
    A single WD Red (which is a 5,400 RPM, consumer, NAS-grade drive) can saturate a gigabit network without a problem, so unless you have 10GbE or InfiniBand running through your entire house, you're just going to be wasting money. For reference, I have 6x WD Reds in raid-z2, which is basically software RAID 6, and the array can read at 340MB/s and write at 390MB/s, which is leaps and bounds ahead of gigabit.
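     
    Just to show how simple a pool like that is to stand up, here's a rough sketch; "tank" and the /dev/sd* names are placeholders for your own pool name and whatever your six drives show up as:
      # 6-disk raid-z2: any two drives can fail without data loss
      zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
      zpool status tank   # check layout and health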
     
    Now, that all being said, if you DO have a need for that kind of speed and you DO have a high-speed home network, then the drives you should be looking at are WD SE drives. They're enterprise SATA drives, and the 3TB version is going to run you a little under $300 each. For comparison, WD Reds are $150 for the same capacity.
  2. Like
    Eric1024 got a reaction from Ginz in ZFS from A to Z (a ZFS tutorial)   
    Edit: There's enough info here for this post to start being useful, but it's still not anywhere close to done. Any chance this can get stickied? 
     
    @wpirobotbuilder @looney @alpenwasser here it is!   Thought you guys would be interested in this.
     
    If you notice any errors in my information, find important information that is missing, or have any questions, please ask! I will be maintaining this thread for as long as I am a forum member.
     
    1 What is the purpose of this tutorial?

    This tutorial attempts to thoroughly cover the basics of ZFS: What it is, why you would want to use it, how to set it up, and how to use it.
     
    This tutorial is meant to be understandable by someone with no prior knowledge of ZFS, although a basic knowledge of *nix operating systems is expected for the installation section.

     
    2 What is ZFS?
     
    3 Why should I use ZFS?
     
    3.2 Cost

    Cost is always a limiting factor. Hardware RAID requires expensive controllers, some software RAID solutions require you to pay for the software (FlexRAID), and most solutions encourage the use of enterprise-class (read: SAS) disks because of their higher speeds and lower read/write error rates.
     
    ZFS, by contrast, is free, encourages the use of cheap disks (read: SATA), and solves the issue of read/write errors in software instead of hardware, making it incredibly cheap. The only potential for extra cost in a ZFS implementation arises from ZFS's desire for lots of RAM and a fast cache drive (usually an SSD). See the sample configurations in section 10 for a better look at cost analysis.

     
    3.3 Speed

    "So if ZFS is redundant AND cheep, it can't be fast, can it? There's no way you can have all three," you must be thinking.
     
    Well I'm here to tell you that you thought wrong! Implemented properly, ZFS can be faster than just about all forms of hardware and software RAID.
     
    ZFS achieves its speed through caching, logging, deduplication, and compression. 
     
    3.3.1 Caching & Logging
    By default, ZFS will use up to 7/8 of your system memory as a kind of level 1 cache and can be configured to use a fast disk (read: an SSD) as a kind of level 2 cache. In ZFS terms the memory-based cache is known as the ARC, or adaptive replacement cache, and the fast-disk-based cache is known as the L2ARC, or level 2 ARC. ZFS also has an optional ZIL, or ZFS Intent Log, a dedicated partition of a fast disk that ZFS uses to log incoming write transactions before they are flushed to the main pool in bursts.
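     
    To give a concrete picture, attaching those two devices to an existing pool looks roughly like this (a sketch only; "tank" is a placeholder pool name and the by-id paths stand in for your actual SSD partitions):
      zpool add tank cache /dev/disk/by-id/ata-YourSSD-part1   # becomes the L2ARC
      zpool add tank log   /dev/disk/by-id/ata-YourSSD-part2   # becomes the ZIL/SLOG device
      zpool status tank    # both devices now show up under the pool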
     
    3.3.2 Deduplication & Compression
    ZFS natively supports both deduplication and compression. Deduplication, for those who don't know, allows copies or near-copies of data to be stored as a reference to the original data instead of as a straight copy, which saves space at the cost of a bit of CPU time and a lot of memory. Deduplication and compression can be enabled for an entire ZFS pool, or just a few datasets. I'll get into datasets later, but for now just think of them as folders.
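     
    Both are simple per-dataset switches. A rough sketch (the dataset names here are made up for illustration):
      zfs set compression=lz4 tank/media      # cheap on CPU, often a net speed win
      zfs set dedup=on tank/vm-images         # only worth it if you have plenty of RAM
      zfs get compression,dedup tank/media tank/vm-images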


     
    4: Why shouldn't I use ZFS?

    There are a few important caveats that one should be aware of before use:
     
    4.1 License Incompatibility
     
    4.2 ZOL is Incomplete
     
    4.3 No Defragmentation
     
    4.4 Hardware

     
    5 What do I need to use ZFS?

    I alluded to this topic a bit in the last section, but here I will explicitly define ZFS's requirements.
     
    5.1 Operating System
    5.2 Drives
    5.3 Memory
    5.4 Other Considerations

    5.4.1 Motherboard

    For a low-end ZFS server, any cheap consumer motherboard will do; however, for mid-range or high-end builds, a server motherboard should be used for its ECC memory and buffered DIMM support. See section 10 for more info.

    5.4.2 CPU

    ZFS is generally not too computationally intensive, but if you want to use some of its advanced features (compression, deduplication, L2ARC, etc.) it will need more horsepower to ensure ideal performance. Low-end and lower-mid-range systems will work well on an Intel Pentium or i3 or an AMD APU. Upper-mid-range systems will need quad-core chips such as a 4C Xeon E3, a 4C Xeon E5 (the E5 series supports buffered DIMMs, needed for >32GB of memory), or 4C Opterons. High-end systems will need one or more higher-core-count Xeon E5s or Opterons.



     
    6 How do I get started?

    6.1 Installation

    6.1.1 Debian 7
    6.1.2 Ubuntu 12.04
     
    6.1.3 CentOS 6.x
    6.1.4 Solaris

    1. do nothing
    2. ZFS is native
    3. ?????
    4. profit

     
    6.1.5 FreeBSD

    1. do nothing
    2. ZFS is native
    3. ?????
    4. profit

    6.2 Basic Setup

     
    7 The Zpool

    7.1 Datasets
    7.2 Snapshots
    7.3 Scrubbing
    7.4 Resilvering
    7.5 Pool Monitoring
    7.6 Expanding a Zpool
    7.7 ZFS Raid Levels

     
    8 Performance Analysis

    <nothing here yet>

     
    9 Further Reading

    9.1 ZFS Administration Best Practices
    9.2 ZFS Primer
    9.3 ZOL Website
    9.4 Anandtech SSD Reviews
    9.5 ZFSonLinux Setup Guide

     
    10 Sample Setups

    Please note that these are only sample setups. If you notice hardware/brand bias in these builds, it's not because I don't like brand X or think brand Y performs better than brand Z; it's just because these are brands that I happen to use. These are just sample builds.
     
    To be perfectly clear, THESE ARE JUST EXAMPLES. I don't want anyone in the comments telling me "If you swap x for y in the build it will be $10 cheaper." That's not the point of this. These are just ballparks.
     
    Also note that if anyone has specific system requirements/brand preferences, tell me what you're looking for in the comments and I will gladly throw together some system specs for you. (e.g.: "I want a 16TB box and I prefer Seagate, AMD, and Kingston. I also want it to be fast so I can run VMs on it.")
     
    Also, if you have specific questions like "how should I partition my SSD for maximum performance?" please ask.
    10.1 Entry Level: $600, 3TB

    CPU: Pentium G3220 - $60
    Mobo: ASUS H81M-K mATX board - $60
    Memory: 4GB of cheap DDR3 - $40
    Storage: 2x 3TB WD Reds - $260
    Boot drive: 60GB Kingston SSDNow V300 SSD - $70
    Case: NZXT Source 210 - $40
    PSU: Corsair CX 430 - $45
     
    Throw the 2x 3TB drives in a mirror and you've got yourself a nice, low-power box with 3TB of available storage.

    10.2 Lower-Mid range: $1200, 9TB

    CPU: Core i3 4130 - $130
    Mobo: Supermicro MBD-X10SSL-F-O - $170
    Memory: 2x 4GB of unbuffered ECC memory - $105
    Storage: 4x 3TB WD Reds - $535
    Boot drive: 240GB Crucial M500 - $150
    Case: NZXT Source 210 - $40
    PSU: Corsair CX 430 - $45
     
    Put the 4x 3TB drives in raidz (RAID 5). Use 100GB of the M500 for boot, 20GB for swap, 60GB for the L2ARC, and 1GB for the ZIL. Leave the rest of the drive empty so that it will still get high IOPS when the L2ARC is full.
     
    Note: I went with an M500 specifically because it has power-loss data protection (aka capacitor banks to flush the cache), which I felt was important for a server. It's also very cheap given its size.
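     
    For anyone who wants to see what that carve-up of the SSD might look like on Linux, here's a rough sketch. Everything below is a placeholder: /dev/sdX is the SSD, /dev/sdb-/dev/sde are the four Reds, and "tank" is just an example pool name.
      zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde   # the 4x 3TB raidz
      sgdisk -n 1:0:+100G /dev/sdX    # boot / root
      sgdisk -n 2:0:+20G  /dev/sdX    # swap
      sgdisk -n 3:0:+60G  /dev/sdX    # L2ARC
      sgdisk -n 4:0:+1G   /dev/sdX    # ZIL
      zpool add tank cache /dev/sdX3
      zpool add tank log   /dev/sdX4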

    10.4 Mid range: $1800


    10.5 Upper-Mid range: $2800, 24TB

  3. Like
    Eric1024 got a reaction from MooseCheese in Is running a RAID 5 array in my personal rig viable?   
    1. You really don't need a raid card. Software raid is plenty fast assuming you have a reasonably recent CPU.
    2. For sequential performance, you can expect up to 2x the read and write speeds of one drive, since data is striped across two of the three disks at all times (the third block in each stripe is parity, which doesn't count toward data bandwidth). Given that you only have 3 disks (as opposed to more), scaling should be pretty good. Random performance will be about that of one disk, possibly a little more depending on the IO size.
    3. There's nothing wrong with raid in a personal rig. (I did it for a while, until I got a NAS/SAN.)
    4. Yes, software raid, for the low low price of $0. (A rough mdadm sketch follows below.)
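     
    A minimal sketch of what that looks like with Linux's md raid, assuming three drives at /dev/sdb, /dev/sdc, and /dev/sdd (stand-ins for your actual devices):
      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
      mkfs.ext4 /dev/md0        # or whatever filesystem you prefer
      cat /proc/mdstat          # watch the initial parity build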
  4. Like
    Eric1024 got a reaction from Vitalius in Looking for light servergrade memory and motherboards   
    @alpenwasser, yeah I'm still here. I've been crazy busy with my first year of college, but I'll have more free time over the summer and I'll hopefully be more active on the forums. I'm actually spending my summer doing filesystems research with a professor, so should be fun
     
    On topic though... I haven't read through this whole thread carefully so excuse me if I miss something, but if all OP wants to do is store a ton of files for personal use (i.e. not under heavy random read/write load all the time) then he shouldn't need a ton of RAM. ZFS uses RAM for read caching (both data and metadata), write operation amalgamation (transaction groups), and for holding the deduplication table (if you're using dedup). ZFS will scale the size of all these data structures up or down based on the amount of memory you have.
     
    Long story short, your RAM needs depend on your use case. If you're just storing lots of media for personal use, then you'd be fine with 4-8GB of RAM. If you're using this to host a dozen VMs with dedup'ed images, then you're going to need a lot more (200GB+).
     
     
    edit: Also, your data is far safer under ZFS than under most other consumer-available data redundancy solutions because of its hierarchical checksumming.
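     
    If you ever want to see where the memory is actually going, ZFS on Linux exposes the ARC counters and zdb can summarize the dedup table. A rough sketch ("tank" is a placeholder pool name):
      awk '$1=="size" || $1=="c_max"' /proc/spl/kstat/zfs/arcstats   # current and max ARC size, in bytes
      zdb -D tank                                                    # DDT summary, a rough idea of dedup's RAM footprint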
  5. Like
    Eric1024 got a reaction from alpenwasser in Looking for light servergrade memory and motherboards   
    @alpenwasser, yeah I'm still here. I've been crazy busy with my first year of college, but I'll have more free time over the summer and I'll hopefully be more active on the forums. I'm actually spending my summer doing filesystems research with a professor, so should be fun
     
    On topic though... I haven't read through this whole thread carefully so excuse me if I miss something, but if all OP wants to do is store a ton of files for personal use (i.e. not under heavy random read/write load all the time) then he shouldn't need a ton of RAM. ZFS uses RAM for read caching (both data and metadata), write operation amalgamation (transaction groups), and for holding the deduplication table (if you're using dedup). ZFS will scale the size of all these data structures up or down based on the amount of memory you have.
     
    Long story short, your RAM needs depend on your use case. If you're just storing lots of media for personal use, then you'd be fine with 4-8GB of RAM. If you're using this to host a dozen VMs with dedup'ed images, then you're going to need a lot more (200GB+).
     
     
    edit: Also, your data is far safer under ZFS than under most other consumer-available data redundancy solutions because of its hierarchical checksumming.
  6. Like
    Eric1024 got a reaction from Eyal in 15 3tb or 4tb raid drives   
    Ubuntu Server supports ZFS (see the link in my sig about ZFS, there's a part in that tutorial about installing on Ubuntu), and ZFS is native on FreeNAS.
  7. Like
    Eric1024 got a reaction from Eyal in 15 3tb or 4tb raid drives   
    If you're going to put 14 or 15 consumer drives in raid, PLEASE do not use hardware raid. Use software raid with a beefy FS like XFS, ZFS, or btrfs.
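     
    For a sense of scale, with ZFS a 14-drive box would usually be split into two raidz2 vdevs rather than one giant array. A rough sketch (the pool name and device names are placeholders):
      zpool create bigtank \
        raidz2 sdb sdc sdd sde sdf sdg sdh \
        raidz2 sdi sdj sdk sdl sdm sdn sdo
      zpool status bigtank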
  8. Like
    Eric1024 got a reaction from flibberdipper in Whaler's GPU Folder Build Log - Folding3   
    Sweet project, but curiosity gets the best of me... what in the world is in the bottom left of this picture?
  9. Like
    Eric1024 reacted to brownninja97 in Prepare yourselves, the three new behemoths of nvidia are here   
    *cough* a follow on from this *cough*
     
    This is really great news, and it's not a repost, goml @KakaoDj .
     
    So this wasn't expected. I'm surprised by the specs, the architecture is crazy, they barely use any power.
     
    When I saw the number of CUDA cores, memory and clocks, I lost it. 
     
    But they are here, brace yourselves. 
     
    Put on some dramatic music. 
     
    Give it up for the
     
    GT 705
     
    GT 710
     
    and lastly
     
    the 
    GT 720
     
    Boosting the uncanny arcs of Fermi and Kepler
     
    Relying on the joys of rebadging.
     
    This is a great moment indeed.
     
    This day should become an international holiday (even though it's the Easter holiday already for some people)
     
    Well then, what the **** are we waiting for? Better dive in.
     
    First up, the GT 705
     

     
    This sexy thing is a rebadge of the GT 610, which was a rebadge of the GT 520. Spicy. 
     
    Let me throw some numbers at you
    48 CUDA cores
    A 64-bit memory bus
    A Jesus tier of only 29 watts under full load
    An awesome process size of 40nm
     
    If I were you I'd wait for the Classified or Lightning versions of this card; you could maybe push it up by 1000MHz.
     
    Next up the...
     
    GT 710
     

     
    This lady killer isn't a rebadge of the GT 620, what's this madness, it's a rebadge of the GT 630... actually no it's not, this is a cut-down GT 630. Madness, is it? Whatever.
     
    Let's get down to the stats
    Uses the genetically superior PCIe gen 2.0 lanes
    Supports DX11
    Has 512MB of memory... wait a sec, that's less than the 705. I guess the AMD marketers went to NVIDIA for a day.
     
    That's all I can say about that card. I recommend you wait for a Sapphire version; even though this is NVIDIA, I'm sure an 8GB VRAM version will be arriving soon after launch.
     
    LASTLY
     
    the new god('s servant),
     
    the new flagship(mascot),
     
    Here it is the GT 720
     

     
    This, you might notice, is the same as the above card, but it's superior.
     
    It's not a half rebrand, IT'S A FULL REBRAND.
     
    This card has a full 1GB of VRAM. If you convert that to bytes it's 1.0737 x 10^9, which is a big number; if you tried to count to that number it would take... A LONG TIME.
     
    Unlike the ferocity of the above cards, this one is on a whole new level: those cards have only 1 SMX unit whereas this dragonslayer has 2 SMX units. THAT'S 200% THE PERFORMANCE, I think.
     
    It also has the DNA of the GTX Titan, the 680, 770, Titan Black, Titan Brown, every Voodoo card, and that crazy tall guy from Fractal.
     
    Well then, it's been great, but that's everything I've got today, lads. Follow EVGA's Step-Up plans, because I'm sure you Titan wielders will be gunning for one of these beauties soon.
     
    Sources:
     
    http://www.techpowerup.com/gpudb/2578/geforce-gt-705.html
     
    http://www.techpowerup.com/gpudb/1990/geforce-gt-710.html
     
    http://www.techpowerup.com/gpudb/1989/geforce-gt-720.html
     
    @LinusTech  You know you love these cards.
  10. Like
    Eric1024 got a reaction from Mbarton in 15 3tb or 4tb raid drives   
    Sorry for the confusion about filesystems vs storage arrays. My point was that file systems like ZFS and btrfs perform a lot of the functionality that enterprise NAS/SANs do and they do it a lot better than any raid card that's available to consumers.
     
    A lot of people fail to understand that with the huge hard drives we use today, you have to worry about the consistency of the data on the disks just as much as you have to worry about having redundant disks. That's to say that bit rot and hardware failure together are what cause data loss; if you only protect against one or the other, you might as well be protecting against neither. Purely hardware RAID 6 is still viable, but in a few years that will go the way of RAID 5.
     
    To OP, even if you do use hardware raid (which I don't recommend) I HIGHLY recommend you use a checksumming filesystem like ZFS / btrfs if you have any kind of important files.
     
    Edit: Also, ZFS isn't limited to Open Solaris. ZFSonLinux is alive, well, and has long been production ready.
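     
    The checksums only pay off if you scrub regularly, so here's a rough sketch of what that looks like ("tank" is a placeholder pool name):
      zpool scrub tank        # read every block and verify it against its checksum
      zpool status -v tank    # scrub progress, plus any repaired or unrecoverable files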
  11. Like
    Eric1024 got a reaction from IdeaStormer in 15 3tb or 4tb raid drives   
    Sorry for the confusion about filesystems vs storage arrays. My point was that file systems like ZFS and btrfs perform a lot of the functionality that enterprise NAS/SANs do and they do it a lot better than any raid card that's available to consumers.
     
    A lot of people fail to understand that with the huge hard drives we use today, you have to worry about the consistency of the data on the disks just as much as you have to worry about having redundant disks. That's to say that bit rot and hardware failure together are what cause data loss; if you only protect against one or the other, you might as well be protecting against neither. Purely hardware RAID 6 is still viable, but in a few years that will go the way of RAID 5.
     
    To OP, even if you do use hardware raid (which I don't recommend) I HIGHLY recommend you use a checksumming filesystem like ZFS / btrfs if you have any kind of important files.
     
    Edit: Also, ZFS isn't limited to Open Solaris. ZFSonLinux is alive, well, and has long been production ready.
  12. Like
    Eric1024 got a reaction from wpirobotbuilder in 15 3tb or 4tb raid drives   
    If you're going to put 14 or 15 consumer drives in raid, PLEASE do not use hardware raid. Use software raid with a beefy FS like XFS, ZFS, or btrfs.
  13. Like
    Eric1024 got a reaction from wpirobotbuilder in The perfect RAID setup   
    Yes but CPUs are so fast these days that for a simple mirror it won't matter at all.
  14. Like
    Eric1024 got a reaction from suyspeedy in freeNas build. (new user)   
    This is a good point. I would definitely advise using ECC memory for a system this high-end. It means you'll have to get a server board, but you should be able to keep the processor (Yes, i3's support ECC memory with the right board, and server boards will usually support i3's)
     
    I wouldn't go with a 2011 build because frankly you won't need that much memory and you definitely won't need that much CPU horsepower. The extra memory will only be useful if you're doing lots of random reads and writes. If this is just for a home file/media server, you shouldn't need more than 16GB of memory. For the same reason, you really don't need an L2ARC (SSD) either.
     
    Also, if you're curious about ZFS, check out the guide I wrote on it in my sig, and if you have questions about ZFS or your server in general, feel free to ask me here, there, or PM me.
  15. Like
    Eric1024 got a reaction from Vitalius in freeNas build. (new user)   
    This is a good point. I would definitely advise using ECC memory for a system this high-end. It means you'll have to get a server board, but you should be able to keep the processor (Yes, i3's support ECC memory with the right board, and server boards will usually support i3's)
     
    I wouldn't go with a 2011 build because frankly you won't need that much memory and you definitely won't need that much CPU horsepower. The extra memory will only be useful if you're doing lots of random reads and writes. If this is just for a home file/media server, you shouldn't need more than 16GB of memory. For the same reason, you really don't need an L2ARC (SSD) either.
     
    Also, if you're curious about ZFS, check out the guide I wrote on it in my sig, and if you have questions about ZFS or your server in general, feel free to ask me here, there, or PM me.
  16. Like
    Eric1024 reacted to wpirobotbuilder in freeNas build. (new user)   
    You don't need a switch; you can directly connect two computers with one NIC each.
     
    If you end up wanting all computers to have 10GbE, Netgear has a phenomenally cheap switch for around $850.
     
     
    To be fair, you also have internet speeds that legitimize you having an awesome home network
  17. Like
    Eric1024 reacted to Oshino Shinobu in Dont buy cases from corsair!   
    Sorry to hear you had a bad experience, but Corsair is a very good and reliable company. 
  18. Like
    Eric1024 reacted to looney in LTT Storage Rankings   
    oops was a bit busy today with this:
     

     
    Will add you to the list now
  19. Like
    Eric1024 got a reaction from Vitalius in Not getting the expected performance out of my FreeNAS server and I don't know why.   
    That makes sense, but if all he's using is gigabit, even if it's a few aggregated links, then it's definitely not the CPU. Opterons may not be the fastest single-threaded performers out there, but they're more than capable. For reference, I can push 5Gb/s using Samba on an Ivy Bridge i7 with plenty of breathing room, so he should be fine, but it's worth looking into.
     
    Edit: also, it's worth mentioning that ZFS is well threaded, so on the off chance that it is a CPU issue, I doubt it would be with ZFS.
  20. Like
    Eric1024 reacted to -ism in Good US Computer tech universities   
    My math teacher went there
  21. Like
    Eric1024 reacted to wpirobotbuilder in An Introduction to SAN and Storage Networking   
    An Introduction to Storage Networking
    SAN, NAS, networking, oh my!
     
    I see lots of talk about NAS setups in the storage section. Much of it revolves around increasing storage performance through caching and complex RAID arrays, but there's less discussion about the networks they're connected to. If your storage is high-performance, then your network is usually the next bottleneck in the system, especially when multiple users are connected to a NAS.

    There are a couple of concepts that I'll mention, starting with the idea of a Storage Area Network:
     
    A SAN is a network which provides access to storage. Traditional implementations in the enterprise environment will use a dedicated network, and the storage is consolidated into a SAN appliance. Depending on the implementation, a SAN might be sharing individual hard drives, or volumes spread across multiple hard drives.
     
    The network medium also doesn't matter much, but it does require hardware and software support. Two of the more common protocols are Fibre Channel and iSCSI. Fibre Channel uses fibre cables, which provide lower latency than iSCSI, which is typically run over copper cabling using SFP+ or RJ45 connectors. In addition, data is transferred in block form rather than in file form, and the storage devices appear as local devices.
     
    Can you tell which of these are physical hard drives and which ones are SAN storage?

    [spoiler=Answer]
    Here, the Media drive and the VRAID drive are iSCSI targets. These ones are coming from two separate SAN appliances in our lab.

    This allows for all sorts of neat functionality. Firstly, it allows you to perform Windows Backups to the machine over a network without owning Windows 7 Professional. It also allows you to install programs to remote storage. That remote device, if it's part of a RAID-protected volume, will be much safer than a single hard drive. Also, since the data is stored in block form, you can format an iSCSI target with whatever filesystem your OS supports.
     
    Next is networking protocols. I won't talk about Fibre Channel since setting up an infrastructure is very expensive and specialized, and really only useful for the enterprise environment. However, I will discuss iSCSI, which has risen in popularity over the years and is usable in consumer environments.
     
    SCSI is an interface that allows one to directly attach storage appliances to servers, where they appear like local storage. iSCSI takes it one step further, making the protocol talk over a network rather than through a dedicated cable. You can use iSCSI on almost any computer in existence. If you're running Windows 2000 or later, your OS comes with an iSCSI initiator by default. Linux has an iSCSI driver, FreeBSD has iscontrol, and you can get one for OS X (doesn't come with one by default).
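     
    As a rough sketch of what logging into a target looks like with open-iscsi on Linux (the IP address and IQN below are made-up placeholders):
      iscsiadm -m discovery -t sendtargets -p 192.168.1.50
      iscsiadm -m node -T iqn.2014-04.lab.example:target0 -p 192.168.1.50 --login
      lsblk    # the target now appears as an ordinary local block device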
     

  22. Like
    Eric1024 got a reaction from Vitalius in Not getting the expected performance out of my FreeNAS server and I don't know why.   
    Sounds like this issue isn't necessarily an issue with ZFS. Might be an issue with your network FS as well. 
     
    If copy performance from one dataset to another (I'm assuming these are on the same array?) is giving you 90MB/s of throughput, then you're actually getting a combined 90MB/s read and 90MB/s write out of the same array, which is 180MB/s of total throughput, which is about what I would expect. There's the large file/small file issue still, but we can work that out later. Right now we need to figure out where the bottleneck actually is.
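     
    One easy way to see where the time is going is to watch the pool itself while a copy is running. A rough sketch ("tank" is a placeholder pool name):
      zpool iostat -v tank 5    # per-vdev read/write bandwidth, refreshed every 5 seconds
    If the disks are loafing while the network copy is slow, the bottleneck is the network or the sharing protocol, not ZFS.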
  23. Like
    Eric1024 reacted to Vitalius in Not getting the expected performance out of my FreeNAS server and I don't know why.   
    Will do. Thanks for checking back. 
    The Hard Drives are at 10% of the test remaining (each test takes 9-10 hours). 2 have already passed. The other 2 look like they aren't going to be having issues either (SMART data says so anyway).
    After reading the FreeNAS guide again, it mentions that Samba is single-threaded, so my having an Opteron isn't going to help it, and I was copying the files between the server and my computer (and back) through a switch, on a Windows machine using a CIFS share. 
    So the network/Samba use could have been the limiting factor. I'm going to try using the terminal to copy files and see if that changes things after I try copying from my computer (to test if it was just RAID 10 not being set up correctly). According to the moderator on the FreeNAS forums, that is the correct way to setup RAID 10 using the GUI (the screenshot I posted). 
    I'm still going to try the terminal first and see if that fixes things. However, it is heavily suggested that you don't use the terminal for pool creation because it goes "behind the GUI's back", meaning that if you then use the GUI to mess with pools you made in the terminal, bad things can happen. 
    I'll rebuild the system today, but I'm not going to be doing the testing today. I have 15 minutes until I get off, and I will be getting overtime this weekend when I move everyone from our old file server to this new one, so I have to be careful about getting too much. 
    Expect an update 17 hours from this posting. 
  25. Like
    Eric1024 got a reaction from Vitalius in Not getting the expected performance out of my FreeNAS server and I don't know why.   
    Just go into the terminal and create the pool from there. Then you will know exactly how it is set up. If performance issues persist, PM me. I've been working with a research team doing ZFS benchmarks, so I know a good deal about its inner workings. I'm more of a ZFS on Linux guy and not FreeNAS, but they're more or less the same filesystem.
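     
    For a four-disk RAID 10 equivalent, the pool is just two mirrors striped together. A rough sketch (da0-da3 are placeholder FreeBSD device names; substitute your own):
      zpool create tank mirror da0 da1 mirror da2 da3
      zpool status tank    # should show two mirror vdevs striped together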