
This is, by far, the most expensive upgrade I have ever made. FULL SOLID STATE SERVER (technically not...there's a mechanical array for backup), 12TB of solid state storage using Intel DC S4500s.

[Attached image: IMAG0278.jpg]

My family will never hear the price tag for all of these...I plan to get them sled-mounted and installed tonight. Benchmarks will happen tomorrow. Plans are to test RAID0, 5, 6, & raidz3 ("7"). This is like Linus's old SSD server but at 1/2 scale and with much more modern drives...and HBAs instead of RAID cards...and no plan to do RAID50...or use Windows Server...OK, it's not very similar. I am using the same chassis though, the Norco RPC-4224. Nice case...except those damn SFF-8087 cables, Leadeater, you know what I'm talking about. I swear the day that fan fails is the day I'll need brand new cables. Anyways, benchmark results will be posted some time tomorrow for those interested.

  1. 8uhbbhu8

    $6000 of just drives... damn....

     

    Or are you filling the entire case? Then that's $12000 of just drives.... Holy Shit indeed!

  2. Windows7ge

    @8uhbbhu8 1/2 scale, so only the 12, and I found them cheaper than that, about 28% cheaper (relatively speaking). The top two rows are the mechanical back-up, and for the 3rd row down I have other plans. I'm considering four 12TB drives, breaking them into smaller zvols, and using them as large storage in a Windows Server 2016 VM.
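
    If I go that route, my understanding is the zvols would just be carved out of the pool, something like this (pool and dataset names here are only placeholders, nothing is decided yet):

    zfs create -V 10T tank/ws2016-disk0    # a hypothetical 10TB zvol to hand to the Windows Server 2016 VM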

  3. 8uhbbhu8

    Pretty nice setup. My eventual plan is a custom HDD array with an SSD cache on both the server and my PC, so as to eliminate storing anything on my main rig permanently. Everything will be stored on the server(s) and I'll pull from there when I want something.

  4. Windows7ge

    @8uhbbhu8 Minus the cache part, my setup is identical, at least until I get this solid state array going. What server OS are you planning to go with?

  5. 8uhbbhu8

    Probably Windows Server because I have access to it, but if I feel like getting it going I may go with FreeNAS.

  6. Windows7ge

    @8uhbbhu8 Unless you write your own custom script, FreeNAS won't be an option. Using an SSD as a write cache requires the SSD to be configured as a SLOG device. That in itself isn't an issue. The issue is that SLOG devices only work with synchronous writes, and an array built to serve as network storage uses asynchronous writes. You'll see no performance gain and might actually see a performance loss, as data will only be written as fast as the array can accept it.
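
    Roughly what that setup looks like on the ZFS side, just as a sketch (pool/device names are made up):

    zpool add tank log da12            # dedicate an SSD as the SLOG device
    zfs set sync=always tank/share     # force sync writes, since only synchronous writes go through the SLOG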

     

    If you know Windows Server can use SSDs as a cache for file arrays, and you can get a key free or cheap, go that route. Otherwise, I know UnRAID can do the same, but it's not free.

  7. Windows7ge

    Less than amazing results. The hardware is more than capable, but something is limiting the array to roughly SATA III speeds. The theory supporting this is that going between 1, 2, & 3 parity disks there's no write performance degradation at all. Like you told me, @leadeater, you predicted I wouldn't see speeds beyond SAS II (RAID0 is a bit of a mystery exception). I did not test over-provisioning, but write speed consistency didn't give me any indication that it'd be necessary.

     

    RAID0:
    Writes: 717MB/s
    Reads:  929MB/s

     

    RAID5:
    Writes: 597MB/s
    Reads:  667MB/s

     

    RAID6:
    Writes: 584MB/s
    Reads:  601MB/s

     

    RAIDZ3:
    Writes: 637MB/s
    Reads:  509MB/s

     

    Kind of a bummer, but hey, on the upside I have more IOPS than I know what to do with. I can run literally anything on any number of VMs and see absolutely no loss of performance between them.

     

    @Electronics Wizardy Beyond this initial testing, what settings do you recommend to get the most out of the array? I haven't set it up for permanent use yet. My plan is raidz2, encryption, and lz4 compression. Anything else? Anything special I should look to tweak?
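
    For what it's worth, my understanding is the lz4 part is just a dataset property, along these lines (pool name is only a placeholder, and atime=off is just a commonly suggested extra, not something settled on here):

    zfs set compression=lz4 ssdpool    # lz4 is cheap enough to leave on
    zfs set atime=off ssdpool          # skip access-time updates on every read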

  8. Electronics Wizardy

    Look at IOPS as well with something like fio and other tests. fio is normally much better at testing disks.
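
    Something along these lines for a sequential write test (the directory is a placeholder; posixaio is the usual engine on FreeBSD):

    fio --name=seqwrite --directory=/mnt/ssdpool --ioengine=posixaio \
        --rw=write --bs=1M --size=8G --numjobs=4 --iodepth=16 \
        --runtime=60 --time_based --end_fsync=1 --group_reporting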

     

    This doesn't appear to be SATA limited at all; 667MB/s is already faster than a single SATA link, so your system isn't SATA limited.

     

    What did you set ashift to?
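
    If you're not sure, zdb can show it (pool name is just a placeholder):

    zdb -C ssdpool | grep ashift    # ashift: 9 = 512B sectors, 12 = 4K sectors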

     

    Quoting you: "but write speed consistency didn't give me any indication that it'd be necessary."

     

    This becomes an issue with sustained writes, as SSDs do background work. If you're not doing a ton of writes, don't worry about it.

     

    Is this over the network? How did you test it?

  9. leadeater

    That's a bit odd; the fastest you're getting is basically 2 SSDs' worth of read performance. Have a look through this and see if you can match the all-SSD arrays shown: https://calomel.org/zfs_raid_speed_capacity.html

     

    Should be getting much more out of 12 SSDs than you are.

  10. Windows7ge

    @Electronics Wizardy fio as in FusionIO? I'll have to locate it and figure out how to use it. However, the necessity of using this as an IOPS testing tool can wait until the array is set up in its permanent configuration. At this very moment they're JBOD (N/C).

     

    Seeing peaks bordering 1GB/s makes me think it might be as simple as a software configuration issue. However, any form of parity pushed it right down to ~SATA III speeds.

     

    With one host doing a handful of GBs at a time I don't expect it to be a huge issue. If I start throwing VMs on it then it'll be performing operations in the background, and I can't say whether that'll cause any issues.

     

    At first, to see what would happen, I tried dd. It immediately proved to be completely useless as it gave me results no higher than 130MB/s. The next testing method was CrystalDiskMark over the network, but that proved worthless since ZFS caches frequent files in RAM; I couldn't get any accurate read speeds. The final testing method was to use my general workload and record the peaks. I used compressed files so lz4 did nothing to impact the results, though I kept encryption enabled since it's my plan to use it anyway. Not the most reputable testing methodology, I know, but I searched around the web for a little while and couldn't locate any information on how to accurately benchmark the array within FreeNAS outside of creating a jail (which I've never done) and using some software I've never heard of or know how to handle.
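
    In hindsight, dd's tiny default block size was probably most of the problem; something like this, with a bigger block size and incompressible data, would have been a fairer test (path is a placeholder, and /dev/random can itself become the bottleneck):

    dd if=/dev/random of=/mnt/ssdpool/testfile bs=1m count=16384    # ~16GB of incompressible data in 1MB blocks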

  11. leadeater

    I forget, did you get the Norco chassis with the expander backplane, or the one without where you direct-connect every drive slot?

  12. leadeater

    @Windows7ge Nah, fio and FusionIO are different things; fio is a software package used for disk performance testing and FusionIO is a line of SSD products.

  13. Windows7ge

    @leadeater  I'll try it out.

     

    Backplane. The SFF-8087 cables aren't breakout cables; they plug from the SAS/SATA HBAs into the backplanes.

  14. leadeater

    Does the backplane actually have an expander though? What's the exact model number of the chassis? Edit: Nvm, the ending 4 denotes what I need.

  15. Windows7ge

    @leadeater NORCO RPC-4224. And oh, you mean like breaking 1 SFF-8087 cable out to run more than 4 drives? No, there's no such controller on the backplanes. Each bay has a direct connection to its own SATA III link through the backplane.

  16. leadeater

    Doesn't look like there is an expander; if there is an SFF-8087 cable going to each row of drive bays you're good. The reason I was asking is that an expander would cause a bottleneck like the one you are seeing. Not the case, so it must be something else.

  17. Windows7ge

    @leadeater I know, 1.5Gbps, 3Gbps, 6Gbps, that's how the expanders you talked to me about a few months ago worked. Unlike HDDs, SSDs do need the full 6Gbps to get the most out of them.

  18. leadeater

    Yea I posted that as you posted what I wanted to know, couldn't remember :P

  19. Windows7ge

    @leadeater Looking over that page I'm not seeing results any higher than what I'm getting. Here's the closest configuration to mine that I saw:

    11x 256GB raid7, raidz3   1.8 terabytes ( w= 659MB/s , rw=448MB/s , r=1681MB/s )
    

    11 SSDs in raidz3. Performance results are very close to mine. The r= results I can assume are files cached in RAM. This test was performed using Bonnie++. I can set up my array, and if I can figure out where to find and how to use Bonnie++ then I can run the same test, though I'm not sure how much lower/higher it'll be.

  20. leadeater

    r= read, w= write and rw= read and write. The results are a lot higher than you're getting, even for your RAID 0 array.

  21. Windows7ge

    @leadeater You didn't need to explain the abbreviations, I'm aware. I think we'll have a more accurate comparison if I can run Bonnie++ with similar settings. It won't explain how to fix anything, but it will at least show how far off I am. I found a website that explains the software and how to use it. Now I have to figure out how to set up a jail and install the necessary packages.
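
    From what I've read, once the jail exists it should just be a package install from a shell inside it (assuming the standard FreeBSD package name):

    pkg update
    pkg install bonnie++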

  22. leadeater

    lol oops dw, I didn't read that sentence/reply very well. Just ignore that part of it, thought you meant r= results.

  23. Windows7ge

    @leadeater I set up a jail, installed Bonnie++, shared the SSD array with the jail, and ran what was called a basic test. This was the output:

    Using uid:0, gid:0.                                                                                                                 
    Writing a byte at a time...done                                                                                                     
    Writing intelligently...done                                                                                                        
    Rewriting...done                                                                                                                    
    Reading a byte at a time...done                                                                                                     
    Reading intelligently...done                                                                                                        
    start 'em...done...done...done...done...done...                                                                                     
    Create files in sequential order...done.                                                                                            
    Stat files in sequential order...done.                                                                                              
    Delete files in sequential order...done.                                                                                            
    Create files in random order...done.                                                                                                
    Stat files in random order...done.                                                                                                  
    Delete files in random order...done.                                                                                                
    Version  1.97       ------Sequential Output------ --Sequential Input- --Random-                                                     
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--                                                     
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP                                                     
    BonniePlusPlus 256G   170  99 730934  88 545596  92   464  99 1645348  99 +++++ +++                                                 
    Latency             56924us    5535us    2769us   25707us     341us    2119us                                                       
    Version  1.97       ------Sequential Create------ --------Random Create--------                                                     
    BonniePlusPlus      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--                                                     
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP                                                     
                     16 +++++ +++ +++++ +++ 29881  99 30230  98 +++++ +++ 29751  99                                                     
    Latency              2527us     120us     104us   46378us     161us     120us                                                       
    1.97,1.97,BonniePlusPlus,1,1526767385,256G,,170,99,730934,88,545596,92,464,99,1645348,99,+++++,+++,16,,,,,+++++,+++,+++++,+++,29881,
    99,30230,98,+++++,+++,29751,99,56924us,5535us,2769us,25707us,341us,2119us,2527us,120us,104us,46378us,161us,120us

    You'll probably be able to read this better than I can. I can try using the test mentioned in the website you linked: 

    bonnie++ -u root -r 1024 -s 16384 -d /storage -f -b -n 1 -c 4

    Except changing the argument parameters to reflect my system, i.e. instead of -r 1024 I'd put in -r 131072, though I'm not sure how different the output would be.
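
    For my own notes, this is roughly what those switches mean per the Bonnie++ man page (the -s value below is just my guess at double the -r value so ARC caching can't hide the disks, and the path is a placeholder):

    # -u root : run the test as root
    # -r      : machine RAM in MiB
    # -s      : test file size in MiB (should be roughly 2x the -r value)
    # -d      : directory/dataset to write the test files into
    # -f      : fast mode, skip the slow per-character tests
    # -b      : no write buffering, fsync() after every write
    # -n      : file-creation test size, in multiples of 1024 files
    # -c      : concurrency level
    bonnie++ -u root -r 131072 -s 262144 -d /mnt/ssdpool -f -b -n 1 -c 4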

     

    EDIT:

    Reading the website's explanation of what the important numbers on the page mean, what I think I'm looking at is w = 730MB/s, rw = 545MB/s, r = 1.64GB/s.

  24. leadeater

    Yep, those look right and are about what you should be getting. Though to be honest I'm surprised how slow all-SSD arrays are under FreeNAS/ZFS; even on the site I linked, 24 of them only get 2GB/s, whereas I get near-exact scaling with Storage Spaces. Wonder if there are some special-sauce optimizations to get the performance up a lot for ZFS and SSDs.

  25. Windows7ge

    @leadeater So I'm getting about what is to be expected? Meh, I went into this project knowing this was a possible outcome, so I'm not too disappointed. I may not have the raw speed I was looking for, but I always wanted to build one of these. It does mean, though, that to see speeds exceeding 1GB/s on a storage server, PCIe or M.2 drives are the only way to go, and I know that for the capacity I'm interested in that's probably a decade away from happening.

     

    Do you know how to view the IOPS? Are they on this page somewhere, or does Bonnie++ have a different test to run to determine that? I see μs everywhere, which I think stands for microseconds, but that doesn't tell you how many IOPS.
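
    If Bonnie++ can't show it, maybe the fio suggestion from earlier is the way to go, since fio reports IOPS directly; something like a 4K random-read job (path is a placeholder):

    fio --name=randread --directory=/mnt/ssdpool --ioengine=posixaio \
        --rw=randread --bs=4k --size=4G --numjobs=8 --iodepth=32 \
        --runtime=60 --time_based --group_reporting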
