dkuhn

Member
  • Posts

    5

Profile Information

  • Gender
    Male
  • Location
    Washington, D.C.
  • Interests
    things
  • Biography
    no.
  • Occupation
    Network Engineer

System

  • CPU
    i7 6850k
  • Motherboard
    Asus X99-Deluxe II
  • RAM
    64GB Dominator Platinum
  • GPU
    2x EVGA GTX 1080 SC
  • Case
    Phanteks Enthoo Evolv Tempered Glass Black
  • Storage
    1x Plextor 128GB M.2; 1x 5TB WD Black
  • PSU
    Corsair AX860i
  • Display(s)
    LG 34UM67-P
  • Cooling
    Custom Hardline Loop
  • Keyboard
    Corsair K95RGB
  • Mouse
    Logitech MX Master
  • Sound
    Speakers / Headset
  • Operating System
    Windows 10 Enterprise

  1. Oh my...how far we've come. Here we go again? Some updates: everything posted above is gone except for those 4 8TB HDDs (more on those later) and the Google Search Appliance. The rack is gone, the Xserve and Xserve RAID ended up in a dumpster, and the R710 and MD1200 were donated to my high school. Those 4 8TB drives started something that hasn't slowed down. I have a problem. I'm (currently) rocking a massive (for me) ZFS pool with 552TB of storage directly attached to a single host, another 96TB of "scratch" storage on a Synology, and another 19.2TB on an all-flash Synology, for a whopping total of 667.2TB of raw storage. I'm definitely missing a few dozen TB here or there. Easy to lose track of at this point.

     Hardware:
     Server: Dell R720xd (a reflashed Google Search Appliance G100 T4)
     CPU: 2x Intel Xeon E5-2697 v2
     RAM: 768GB (24x 32GB 4Rx4 PC3-14900L DDR3-1866 ECC REG LRDIMM)
     SSD: 12x HP 1.6TB 2.5" SAS 12G WI SSDs (G8-G10)
     HDD 1: 45x mixed 8TB 7.2K SAS and SATA HDDs
     HDD 2: 12x Seagate Exos X18 16TB SATA HDDs
     HDD 3: 8x Seagate 1TB 6G 7.2K 2.5" SAS HDDs
     HDD 4: 12x Seagate Exos 7E8 8TB 7.2K SAS HDDs
     UPS: worthless TrippLite POS that shall not be named
     Rack: APC NetShelter 42U
     Network Stack: Ubiquiti UniFi - UDM-Pro, US-48-G1, USW-Pro-Aggregation, plus a bunch of U6-LR UniFi APs...also a bunch of Meraki 8-port PoE switches

     Software and Configuration:
     OS: CentOS 7
     File system: ZFS
     RAIDZ Configuration: RAIDZ1 (7 vdevs of six 8TB drives and 2 vdevs of six 16TB drives; 9 vdevs total)
     File system capacity: 479T

     Usage: Plex. I have a lot of stuff: 3,000+ movies and 1,100+ TV shows with 73,000+ episodes. I (still) share this with close friends and family, so uptime and performance are very important. The S/O approval factor means downtime/unexpected maintenance is unacceptable. The 4K stuff is starting to dwarf the 1080p content, and I'm actually in the process of converting all of my 1080p content to h.265 HEVC.
     I know the process isn't lossless, but that's what I have the remux 4K stuff for. Everything else can get converted to save space. And bandwidth: I had 1080p stuff that would buffer because the bitrate was unnecessarily high, and HEVC fixes that. I've saved over 100TB by converting so far. Kinda insane.

     Backup: YOLO (I do snapshots, but lmao, let's be serious, I am not backing this shit up)

     Story time: So, a lot has changed in the last 5 years. Still amazing that it's been 5 years since I made that post. I bought my first house shortly after I made it and moved in December of 2017. That's when I got my big-boy APC cabinet. I also got 600+ lbs worth of APC 30A 120V UPS (an APC Smart-UPS XL 3000VA RM 3U 120V, if memory serves, with two expansion units). I had the whole house wired for Ethernet, APs on each floor...it was great. PoE fed from the server cabinet meant that even in the event of a power outage, I'd still have blazing-fast Wi-Fi inside, outside, in the yard, everywhere. That UPS could keep it all powered for like 11 hours. Nucking futs.

     I moved Plex into a Chenbro 48-bay chassis, and we were off to the races. The system outgrew that after about 3 years, and that's when I got an HGST 4U60 JBOD shelf. That thing is a fucking tank. I'm keeping it forever. But it required 240V power. So, a new breaker in the electrical panel (literally 1.5 feet from the cabinet) and a new 240V UPS later, it was all up and running! Moved from Unraid to ZFS, and kept expanding. Got up to 52 drives in the HGST JBOD, and then it was time to move again! Also, don't ask me what my power bill was. I never looked. Pretty sure I'd have a heart attack if I did.

     That brings us to 3 months ago. New place identified, ready to move, but shit, there's literally no way to get 240V power economically to where I need it. SOLUTION! I scored some free NetApp DS4246 shelves from work! They're SAS 6Gb/s instead of the 12Gb/s of the HGST JBOD, but it's not like they're full of SSDs, so who cares!!
     Moved all the drives into the NetApp shelves, and we were off! They gobbled up the 120V power with ease! They're also significantly quieter, and somehow put out less heat. Not sure how that's possible, but it is.

     So, a lot has changed: lots of hardware has been disposed of, lots upgraded, lots added, etc. This is my hobby and passion, so I enjoy the hell out of it. The latest upgrade was going to 2x Intel Xeon E5-2697 v2 CPUs, 768GB of RAM, and a new LSI 9206-16e HBA.

     The HBA was a bad move, and I need to swap it out. I wanted a card with four SFF-8644 connectors so I could single-path each DS directly to the HBA (I know they stack, but if I've got the ports, why not?). That part works great; the throughput numbers are insane, and since I've mixed SAS and SATA drives, multipathing won't get me anywhere anyway. The problem is the temperature. Holy shit. I don't know how LSI/Avago/Broadcom or whomever got this thing past the regulators. It's a fire hazard. The first time I turned the server on, it kernel panicked and shut down because of how hot the card got. The last reported temperature I saw was 131C. I cranked the R720xd fans up and moved the card to a different slot with more airflow, which has it down to a more reasonable 75C, but that's still too hot, and the fans are too loud now. So, back to the drawing board...

     Photos:
     Getting the rack installed at the new place in December/January 2017 -
     Adding some new stuff...
     More upgrades! This thing was heavy...
     Getting the gear spun up in the new house....

     Thanks for coming to my TED talk
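     As a sanity check on the pool layout above, the usable space of a RAIDZ1 pool is easy to estimate: each vdev gives up one drive's worth of space to parity. The sketch below assumes my reading of the config ("7x 6-8TB drive vdevs" = seven vdevs of six 8TB drives, "2x 6-16TB drive vdevs" = two vdevs of six 16TB drives); the TB-to-TiB conversion is why ZFS tools report a number like 479T rather than the sum of the drive labels.

```python
# Rough RAIDZ1 capacity math for the pool described above.
# Assumption: seven 6-wide vdevs of 8TB drives plus two 6-wide
# vdevs of 16TB drives (nine vdevs total), per my reading of the post.

TB = 1000**4   # drive-label terabyte (decimal)
TiB = 1024**4  # binary unit that zfs/zpool tools actually report

vdevs = [
    # (vdev_count, drives_per_vdev, drive_size_tb)
    (7, 6, 8),
    (2, 6, 16),
]

raw_tb = sum(c * w * s for c, w, s in vdevs)
# RAIDZ1 loses one drive per vdev to parity
usable_tb = sum(c * (w - 1) * s for c, w, s in vdevs)

print(f"raw:    {raw_tb} TB  (~{raw_tb * TB / TiB:.0f} TiB)")
print(f"usable: {usable_tb} TB  (~{usable_tb * TB / TiB:.0f} TiB)")
```

     Under those assumptions the pool is 528TB raw, which is about 480 TiB and lines up with the quoted 479T (zpool-level sizes include parity space); actual post-parity space is closer to 440TB, before metadata and slop.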
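     The post doesn't say what tool drives the 1080p-to-HEVC conversion, so here is a hedged sketch of the kind of ffmpeg invocation that does this job; the CRF value, preset, and file names are illustrative assumptions, not the author's actual settings.

```python
# Hypothetical sketch of a 1080p -> HEVC (libx265) re-encode command.
# ffmpeg with libx265, and the CRF/preset/paths below, are my
# assumptions -- not the poster's actual pipeline.
from pathlib import Path

def hevc_cmd(src: Path, dst: Path, crf: int = 22) -> list[str]:
    """Build an ffmpeg argv that re-encodes video to HEVC while
    copying audio and subtitle streams untouched."""
    return [
        "ffmpeg", "-i", str(src),
        "-map", "0",            # keep every stream from the input
        "-c:v", "libx265",      # HEVC video encode
        "-crf", str(crf),       # quality target (lower = larger file)
        "-preset", "medium",
        "-c:a", "copy",         # don't re-encode audio
        "-c:s", "copy",         # don't re-encode subtitles
        str(dst),
    ]

cmd = hevc_cmd(Path("movie.1080p.mkv"), Path("movie.1080p.hevc.mkv"))
print(" ".join(cmd))
```

     A batch job would walk the library and run each command with `subprocess.run(cmd, check=True)`, skipping files already tagged as HEVC.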
  2. So it looks like my Synology won't qualify here, but that's fine...it's only 32TB. Instead, here's my unRAID system that I use to house my modest Plex setup. I'm not providing my case/PSU/mobo, as I honestly don't remember what they are and I'm too lazy to look.

     Hardware:
     CPU: AMD Ryzen 5 1600
     HS: stock cooler
     RAM: 16GB of <insert brand>
     SSD: 1x 525GB Crucial MX300
     HDD 1: 5x 8TB WD80EFZX
     UPS: 2x APC Smart-UPS 3000XL (SU3000RMXL3U, not pictured)
     Rack: StarTech 12U
     Network Stack: Cisco Meraki MS225-24 switch, MX84 firewall

     Software and Configuration:
     OS: unRAID
     RAID config: unRAID High Water
     File system: btrfs
     File system capacity: 32TB presented, 40TB raw (an additional 22TB presented via NFS from my Synology DS1515 with 4x 8TB WD80EFZX)

     Usage: Plex. I have a lot of stuff: 724 movies and 233 TV shows with 18,224 episodes. I share this with close friends and family, so uptime and performance are very important. I have at least 6 concurrent transcodes going at any point in time, most 1080p with some 4K. It's essentially full at this point, so I'm looking at replacing it with a Norco RPC-4224 and consolidating all of my storage into a single chassis. I'm also going to get rid of the R710/MD1200 and the Xserve/Xserve RAID; they're power hogs and aren't expandable any further.

     Backup: I back up my most important media to my grandfathered Google Apps account.

     Photos: The system in question is the top one. Couldn't help but show off the rest, though. Might do other posts for them later. Yes, I know it's dusty as shit. I'm in the process of moving, and all of this will be torn apart, cleaned piece by piece, and reassembled with dust filtration in the new rack.
  3. I'm gonna post mine here, but I was curious: what are the rules for mounted storage (via NFS)? I distribute my storage across two systems and then present it logically to the Docker apps as a single storage pool.
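     One common way to do what that last post describes (two NFS-mounted systems presented to containers as a single pool) is a union filesystem such as mergerfs layered over the NFS mounts. The hostnames, paths, and policy options below are illustrative assumptions, not the poster's actual config:

```
# /etc/fstab - mount both NFS exports, then union them with mergerfs
# (hostnames and paths are made up for illustration)
nas1:/volume1/media  /mnt/nas1  nfs  defaults,_netdev  0 0
nas2:/volume1/media  /mnt/nas2  nfs  defaults,_netdev  0 0
/mnt/nas1:/mnt/nas2  /mnt/pool  fuse.mergerfs  allow_other,category.create=mfs  0 0
```

     A container then bind-mounts the union once (e.g. `-v /mnt/pool:/media`) and sees one tree; `category.create=mfs` sends new files to whichever branch has the most free space.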