
cookiesowns

Member
  • Posts

    25
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Gender
    Male
  • Location
    Orange County, CA
  • Occupation
    Software Engineer + IT

System

  • CPU
    Intel Core i7-6950X @ 4.4GHz @ 1.328V
  • Motherboard
    Rampage V Edition 10
  • RAM
    64GB Trident Z @ 3200 C14
  • GPU
    SLI GTX Titan X (Pascal) @ 2GHz
  • Case
    Thermaltake Core X9
  • Storage
    400GB Intel SSD 750 NVMe + 3x 512GB Crucial M4 in RAID 0
  • PSU
    Corsair AX1200 ( OG Gold )
  • Display(s)
    Acer Predator 1440P 165Hz + 2x Dell U2713
  • Cooling
    Custom Watercooling with 3x EK Rads + EK Blocks
  • Keyboard
    Massdrop WhiteFox with 67g Zealios Purples
  • Mouse
    Logitech G900
  • Sound
    Schiit Bifrost Uber + Magni 2 Uber stack
  • Operating System
    Windows 10 Anniversary Update

Recent Profile Visitors

595 profile views
  1. I love Cumulus because of the management story. We have more sysadmins/DevOps people than network engineers, and my personal background is in DevOps. Performance is the same; cost was drastically different at the time.
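     For anyone wondering what "management" means in practice: the switch config is just Linux networking files you can template with Ansible and keep in git. Here's a minimal sketch of a VLAN-aware bridge in ifupdown2 -- the port names and VLAN IDs are placeholders, not our real config:

         # /etc/network/interfaces on the Cumulus switch
         auto bridge
         iface bridge
             bridge-vlan-aware yes
             bridge-ports swp1 swp2
             bridge-vids 10 20

         auto swp1
         iface swp1
             bridge-access 10        # untagged access port in VLAN 10

     Apply it with ifreload -a and you're done -- the same workflow as any other Linux box, which is exactly why a DevOps-heavy team likes it.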
  2. Made some slow but steady progress over the last few days, though I've been fighting some remaining hardware struggles. Had a defective node in the FatTwin chassis with a crispy backplane, and some NICs that were advertised as X520-DA2s but turned out to be an older generation based on the 82598EB chipset, which no longer has driver support and only takes DA copper, not 10G-LR SFP+ optics. Working on getting those replaced. Crispy backplane... can you spot the burnt fuse? Here are some pics of Rack 2 close to being fully loaded. The remaining nodes not shown are two E3-1270v2 nodes that will be housed in a full-length Supermicro chassis with redundant PSUs; these will cover our pfSense routing needs in an HA cluster. EDIT: Just realized I didn't take a photo with the remaining four E3-1240v2 nodes, so just imagine those sitting on top of the lone E3 node. The 4x 2U 12-bay chassis will be based on E5 v1 nodes with 64GB-128GB of RAM. These will strictly be used as Ceph nodes in our Proxmox cluster; while they should have enough horsepower to run VMs, it won't be necessary given what we have in the FatTwin chassis. Each Ceph storage node will have 8x HGST SSD400M SAS SSDs, along with 2x Intel S3710 400GB SSDs for boot in a ZFS mirror (RAID 1).
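     For reference, the boot mirror on those Ceph nodes is just a two-disk ZFS pool; a minimal sketch of what that looks like if done by hand (disk IDs are placeholders, and this ignores the partitioning/bootloader work the Proxmox installer normally handles for you):

         # use the real /dev/disk/by-id paths of the two S3710s
         zpool create -o ashift=12 rpool mirror \
             /dev/disk/by-id/ata-INTEL_S3710_A /dev/disk/by-id/ata-INTEL_S3710_B
         zpool status rpool   # should show a single mirror vdev with both SSDs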
  3. Not really avoidable. Some proper bedding will help quite a bit, though. I'd recommend some Winmax pads, especially if you track the car. They will squeal and squeak a bit, but nowhere near as badly as HP+, and the W3 compound should outperform HP+ as well. I personally run W3s on the BRZ: 6+ track days and 15K miles with no problems, granted I'm still on the stock Prius tires. I'm slowly getting fast enough that I'll probably upgrade to W4s once I get stickier tires. They barely make noise when properly bedded in.
  4. Thanks, I tried to get the cabling as clean as I could; the wide racks really help with that. There's still quite a bit of cleanup I can do on the cable tray and internally, but since it's an ongoing project I think it's a good start, no? Heh. I'm a fan of ZFS. We'll probably try ZoL in a multi-node configuration with fencing for true HA on ZFS, which is why we went with deep racks, so eventually we can transition to the new 70-90 drive 4U JBOD chassis. Nexus 9Ks are great.
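     To give an idea of the failover half of that HA setup (the fencing itself, via Pacemaker/STONITH or similar, is the genuinely hard part): once the failed head is confirmed fenced, the standby node basically does this, with "tank" as a placeholder pool name and the service names depending on the distro:

         # only safe after the old head is confirmed powered off / fenced
         zpool import -f tank              # force-import the shared JBOD pool on the standby head
         zfs mount -a                      # bring the datasets back online
         systemctl start nfs-server smbd   # resume whatever is exporting the data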
  5. Preface: This is not a personal build, but rather a build I'm leading for the company I work at. We're a large photography studio down in Southern California, and we also operate one of the industry-leading photography education sites. I'm sure y'all can guess who we are! Previous build log I neglected a couple of years ago on OTT:

I'm treating this build as if it were my own, and I'd love to share it back with the LTT community, as quite a bit of knowledge was learned from Linus's videos on network rendering and Adobe Premiere. We're actually not too far from each other in the way we build our machines, except LTT gets the HW hookups and we don't, hah! This will be an ongoing project, and I will try to document what I can without divulging too much about our internal configuration, for, you know, legal reasons. Phase 1, shared in this post, is where everything currently stands; this was done over the course of a few days to move from our old location to the new one, and there's still plenty more to come!

Rack 1 networking gear:
The 1st switch is an Edge-Core AS4610-52T running Cumulus Linux. This is a 48-port PoE+ switch with 4x SFP+ 10G ports and 2x QSFP+ 20G stacking ports; unfortunately the stacking ports are not supported on Cumulus. This switch will be uplinked via 4x10G with VLAN trunks to a core 10/40G switch in Rack 2, shown later.
The 2nd switch is a Juniper EX3300-48T pulled from our old office; we just needed additional 1G ports, as we'll really only have 8-10 devices on PoE+ (Aruba APs and PoE+ IP cameras).
The 3rd switch is a Cisco 3560 for management traffic on Rack 1 (IPMI, PDU/UPS, back-end management of switches, etc.).
The 4th switch is a Netgear XS712T that I quite hate, but it was kept so we don't have to add new network cards to two of our existing ZFS units, as they have 2x 10GBase-T onboard.
Below that is a pfSense router. It will eventually be replaced with a redundant 2-node pfSense cluster running E3-1270V2s, with 10G networking on the LAN side and 2x1G to the EX4200-VC cluster in Rack 2 for the internet uplink.
The four 4U servers below are Supermicro 24-bay units running ZFS, each with 128GB of RAM. One of the servers runs dual-ported 12G SAS 8TB Seagate drives with a 400GB NVMe drive as L2ARC. Total raw storage is ~504TB of spinning drives.

Rack #2
Disclaimer: I do not recommend storing liquids on top of IT equipment, powered on or not. In hindsight I probably shouldn't have included this picture, but it's already on reddit and I took the flaming... lesson learned.
32x 32GB Hynix DDR4 2400 ECC REG DIMMs for the VM hosts, plus 8x 16GB DDR3 1866 DIMMs to upgrade the old ZFS nodes (64GB to 128GB). Total RAM: 1024GB of DDR4 + 128GB of DDR3. It's crazy how dense DRAM is now.
8x QSFP+ SR4 40GbE optics. I was initially going to run 2x 40GbE to the ZFS nodes but ended up running out of 10GbE ports, so it will be 1x 40GbE per ZFS node (2x total), and the remaining 4x 40GbE ports will be split out to 10GbE over PLR4 long-range 4x10GbE breakout optics to end devices.
Top switch: an Edge-Core AS5712, again running Cumulus Linux, with 48x 10GbE SFP+ and 6x QSFP+ 40GbE ports, loaded with single-mode FiberStore optics (not pictured). Sitting above it is a 72-strand MTP trunk cable going to our workstation/post-producer area. Still awaiting the fiber enclosures and LGX cassettes so we can break this out into regular LC/LC SMF patch cords to the switch.
Below that are two EX4200-48Ts in a Virtual Chassis cluster. These will be used primarily for devices that should have a redundant uplink, so I'm spanning LACP across the two "line cards"; this will also be used in LACP for our VM host nodes for Proxmox cluster/networking traffic. Ceph/storage traffic will be bonded 10GbE for the nodes that need it.
Again, a wild Cisco 3560 10/100 switch with 2x 1GbE LACP uplinks to the EX4200 for management traffic.
Rack #2 will eventually be filled with 4x 2U 12-bay Supermicro units loaded with SSDs as Ceph nodes, and then some E3 hosts for smaller VM guests. Below that you can spy a Supermicro FatTwin 4U chassis with 4x dual-Xeon v3 nodes. Each node will have 256GB of RAM and, depending on the node, dual E5-2676 v3 or E5-2683 v3 CPUs. There will be minimal if not zero local storage, as all VM storage will live on the Ceph infrastructure. 4x10GbE everywhere.

Random photos of our IT room: 4x Eaton 9PX 6kVA UPSs, 2 per rack, fed with 208V AC. PDUs are Eaton's managed G3 EMA107-10.
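For anyone curious what the bonded uplinks look like on the Cumulus side, here's a rough ifupdown2 sketch of an LACP bond carrying trunked VLANs. The port names (swp49/swp50) and VLAN IDs are made up for the example, and bond1 would also need to be listed under the VLAN-aware bridge's bridge-ports:

    # /etc/network/interfaces (Cumulus) -- hypothetical member ports and VLANs
    auto bond1
    iface bond1
        bond-slaves swp49 swp50   # the two 10GbE links going to one host
        bond-mode 802.3ad         # LACP
        bridge-vids 20 30         # VM and storage VLANs trunked over the bond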
  6. Unfortunately I won't be running NFS. It'll be mainly SMB/CIFS and one iSCSI share on a mirrored vdev. Sure, I'll let you know my findings on 10Gb. I've also decided not to bother with Server 2012 now, so sorry guys, no results for ya!
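     For anyone following along, the iSCSI share will sit on a zvol rather than a regular dataset; a minimal sketch with placeholder pool/size values (FreeBSD-style device names, since this box will likely run FreeNAS):

         zpool create tank mirror da0 da1                      # the mirrored vdev
         zfs create -V 500G -o volblocksize=16K tank/iscsi0    # block device to export as the iSCSI LUN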
  7. Some photos. I didn't grab too many photos of the first server since I had a deadline to meet, but I should be taking more detailed shots of the second server's build process. Tray processors straight from Intel's Costa Rica facility. 4x Intel S3500 120GB SSDs. Typical Supermicro packaging. X9SRH-7TF. This is new... Supermicro now includes a color quick-start guide. Very helpful! Useless generic red SATA cables; anyone want these? I still have about 50 left over from previous Supermicro installations. The actual board! Latest BIOS & IPMI firmware, but the LSI firmware was on 13 IR, which I think is pretty old. Flashed to R16 IT in prep for FreeNAS. Supermicro also released an R19 build, so I might try flashing that too. Hynix 16GB 1866 ECC REG DIMMs that are on the Supermicro QVL. The LSI HBAs! Supermicro Avoton Superserver barebones; I only needed to install RAM and that was it! Also nice to see how Supermicro does the wiring. Here's how the rack is populated in the meantime, with one of the 4U servers built. I ran a Windows Server 2012 installation and created a pool with all 24 drives. Speeds were interesting: 1.2GB/s sequential read/write, but random performance was basically 1-2 drives at most. Will try some more Server 2012 testing before breaking it down to run badblocks for 1-2 passes. Memtest on both servers passed with flying colors, literally... since the EFI memtest is colorful.
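     For anyone wanting to run the same burn-in, the badblocks pass I mean is the destructive write test, roughly like this per drive (device name is a placeholder, and -w wipes the disk):

         badblocks -b 4096 -wsv /dev/sdX   # write/verify 4 patterns, show progress, verbose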
  8. Sorry for the shortage of updates, guys. The CPU & H110 came yesterday, so the PC is now built and ready. Max OC was 4.7GHz, max "stable" OC was 4.6GHz. It will be running 4.5GHz daily @ 1.25V.
  9. Small updates today: the APC NetShelter SX 42U rack came in. Apparently Dell now rebrands/sells these. PC goodies came in too! Still missing the H110 & the processor. Will do. From what I can tell, I don't think they will get "too" hot, at least nowhere near what LSI 2208-based RAID cards reach. I have a feeling they will run toastier than a regular card though, as the Supermicro-branded X540-T2s have an active fan on them.
  10. It's a general rule of thumb, but it doesn't need to be followed to the letter. 64GB of RAM should be enough for what I'm using it for.
  11. That's just what consumer drives are rated for. Samsung EVOs use TLC NAND, which is rated around 1K P/E cycles, but they generally last at least 3K P/E. Plus, you'd need to be hammering a drive to get anywhere near the 70TB warranty counter within 2-3 years; that works out to roughly 65-95GB of writes every single day.
  12. If uninstalling the drivers resolves the boot-hang issue, I would look at downgrading the firmware to one release prior, along with the driver.
  13. Maybe if this were 5 years ago. SSDs are perfectly fine, and unless you wear the drive out significantly, data retention will be just as good as a regular spinning disk's, if not better. Consumer drives are rated for one year of data retention even after all their cells are worn out.
  14. Hey guys!

Table of contents:
July 6th, 2014 - Original post with pictures
July 9th, 2014 - Picture updates of PC components, APC rack & more!

Anywho: I'm new to the LTT forums, but I've been watching Linus's videos for quite some time now. I've been working in IT for a while, and just recently I get to implement an IT upgrade for work. Our company does quite a bit of digital media, such as photography, video, and educational content. Currently we operate on a few Synology units as well as some off-site backup routines, but we are growing far quicker than the Synologys can handle, and we needed a 10G upgrade anyway. So here we go:

ZFS #1
CPU: Intel Xeon E5-1620V2
RAM: 4x16GB ECC 1866 REG
HDD: 24x 4TB Hitachi NAS drives in 4x 6-drive Z2
MB: Supermicro X9SRH-7TF
Chassis: Supermicro SC846BA-R920B
HBA: 2x LSI 9207-8i + onboard LSI 2308
SSD: 2x Intel S3500 120GB for boot + L2ARC/ZIL testing
OS: Maybe FreeNAS, undecided yet

ZFS #2 (offsite backup server)
CPU: Intel Xeon E5-1620V2
RAM: 4x16GB ECC 1866 REG
HDD: 24x 4TB Hitachi NAS drives in 2x 11-drive Z3
MB: Supermicro X9SRH-7TF
Chassis: Supermicro SC846BA-R920B
HBA: 2x LSI 9207-8i + onboard LSI 2308
SSD: 2x Intel S3500 120GB for VMs
OS: Proxmox w/ ZoL or FreeNAS

Networking:
Master workstations will have X540-T2 NICs
pfSense on a 5018A-MHN4 box, which has an Intel Avoton C2758
Juniper EX3300-48T as "core"
Netgear XS712T as 10G distribution with 2x 10G SFP+ uplinks to the Juniper switch

Rack:
2x CyberPower Smart App 2150VA UPS units
APC AR3100 42U standard rack

Lightroom build (another editing machine, which will most likely end up being the benchmark rig; if not, we'll use the 4930K rig below):
Intel 4790K
G.Skill 1866 C8 2x8GB
Intel 730 240GB
Asus Maximus VII Hero
Nvidia GTX 660
Corsair AX760
Corsair 450D

Benchmark rig:
Intel 4930K @ 4.4
Asus X79-Deluxe
32GB G.Skill 1866 C8 4x8GB
1x Intel X540-T2
1x LSI 9207-4i4e <- for initial testing of drives + testing out an HP 6250 LTO-6 tape drive

Pictures to come as parts arrive; I've created this thread to allow easier posting from work! Any questions? Picture/benchmark requests? Throw them in the thread!

Update 7.3.2014 - More stuff came in! Phone pics in the meantime: Intel X540-T2 copper NIC. OM3 LC/LC fiber patch cords; got them from Monoprice, and they seem to use Corning fiber, which is great. 4x SFP+ 10G optics from FiberStore: 2 Netgear-coded and 2 Juniper-coded. Curious to see how they fare. They were $25 each for 10G-SR optics; Juniper originals would be around $700-$1500 depending on where you get them. HUGE difference! If you have a JCare contract, then yes, you'd want Juniper original optics; otherwise, just buy some spares. 52 Hitachi Deskstar NAS drives were ordered; here you see about 32 of them.
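Since the pool layout question comes up a lot: the 24-drive ZFS #1 box is 4x 6-drive RAIDZ2 vdevs, which at creation time looks roughly like this (FreeBSD-style placeholder device names; the real pool should be built on /dev/disk/by-id or GPT labels):

    zpool create -o ashift=12 tank \
        raidz2 da0  da1  da2  da3  da4  da5  \
        raidz2 da6  da7  da8  da9  da10 da11 \
        raidz2 da12 da13 da14 da15 da16 da17 \
        raidz2 da18 da19 da20 da21 da22 da23

Each vdev can lose two drives, and usable space works out to 16 of the 24 drives, so roughly 64TB raw usable before ZFS overhead.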