
Epic IT overhaul for a media production company. LOTS O' STORAGE & CPUs

Preface:

This is not a personal build, but rather a build I'm leading for the company I work at. We're a large photography studio down here in Southern California, and we also operate one of the industry-leading photography educational sites. I'm sure y'all can guess who we are!

 

My previous build log, which I neglected a couple of years ago on OTT:

 

 

I'm treating this build as if it were my own, and I'd love to share it back with the LTT community, since quite a bit of what I know was learned from Linus's videos on network rendering and Adobe Premiere. We're actually not too far from each other in the way we build our machines, except LTT gets the hardware hookups and we don't, hah!

 

This will be an ongoing project, and I will try to document what I can without divulging too much about our internal configuration, for, you know, legal reasons.

 

Phase 1, shared in this post, is where everything currently stands. This was done over the course of a few days to move from our old location to the new one, and there's still plenty more to come!

 

Rack #1 networking gear:

5HZBXMG.jpg

 

sEIf9By.jpg

1st switch is an Edge-Core AS4610-52T running Cumulus Linux. This is a 48-port PoE+ switch with 4x SFP+ 10G and 2x QSFP+ 20G stacking ports; unfortunately, the stacking ports are not supported on Cumulus. This switch will be uplinked via 4x 10G with VLAN trunks to a core 10/40G switch in Rack 2 (shown later).
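On the Cumulus side that uplink is just plain ifupdown2 config. A rough sketch of a 4x 10G LACP bond carrying trunked VLANs (port names, bond name, and VLAN IDs are placeholders, not our production config):

```
# /etc/network/interfaces (excerpt) -- hypothetical ports/VLANs
auto uplink
iface uplink
    bond-slaves swp49 swp50 swp51 swp52
    bond-mode 802.3ad

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink
    bridge-vids 10 20 30
```

A quick `ifreload -a` applies it, which is part of why I like these switches.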
 

2nd switch is a Juniper EX3300-48T pulled from our old office; we just needed additional 1G ports, as we'll really only have 8-10 devices on PoE+ (Aruba APs and PoE+ IPTV cameras).

 

3rd switch is a Cisco 3560; this switch is for management traffic on Rack 1 (IPMI, PDU/UPS, back-end management of switches, etc.).

 

4th switch is a Netgear XS712T that I quite hate, but we kept it so we don't have to add new network cards to two of our existing ZFS units, since they have 2x 10GBASE-T onboard.

 

Below that is a pfSense router. It will eventually be replaced with a redundant two-node pfSense cluster running E3-1270 V2s, with 10G networking for the LAN side and 2x 1G to the EX4200 VC cluster in Rack 2 for the internet uplink.
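pfSense's HA setup is all done through the GUI, but under the hood it's CARP for the shared IPs plus pfsync for state replication. Conceptually it maps to something like this at the FreeBSD layer (interface names, VHID, password, and addresses are made up for illustration, not how you'd actually configure pfSense):

```
# Shared virtual IP on the LAN side -- the primary uses a lower advskew
ifconfig igb0 vhid 1 advskew 0 pass s3cretpass alias 192.168.1.1/32

# Replicate firewall state to the secondary over a dedicated sync link
ifconfig pfsync0 syncdev igb1 up
```

The secondary node runs the same VHID with a higher advskew, so it only takes over when the primary disappears.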

 

The four 4U servers below are Supermicro 24-bay units running ZFS, each with 128GB of RAM. One of the servers is running dual-ported 12G SAS 8TB Seagate drives with a 400GB NVMe drive as L2ARC. Total raw storage is ~504TB of spinning disk.
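For the curious, a pool on one of those 24-bay boxes boils down to something like this (pool name, vdev layout, and device names are just for illustration, not our actual layout):

```
# Hypothetical layout: 24 drives as four 6-disk raidz2 vdevs
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23

# The 400GB NVMe drive goes in as L2ARC (read cache)
zpool add tank cache nvd0

# Sanity check
zpool status tank
```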

 

Rack #2

jhbTlPv.jpg

 

Disclaimer: I do not recommend storing liquids on top of IT equipment, regardless of whether it's powered on or not. In hindsight I probably shouldn't have included this picture, but it's already on Reddit and I took the flaming... Lesson learned.

HEnVAVr.jpg

 

CWnnLbe.jpg

32x 32GB Hynix DDR4-2400 ECC REG DIMMs for the VM hosts, plus 8x 16GB DDR3-1866 DIMMs to upgrade the old ZFS nodes (64GB to 128GB). Total RAM: 1,024GB of DDR4 + 128GB of DDR3. It's crazy how dense DRAM is now...

 

0lTHXMx.jpg

8x QSFP+ SR4 40GbE optics. I was initially going to go 2x 40GbE to the ZFS nodes but ended up running out of 10GbE ports, so it's 1x 40GbE per ZFS node (2x total), and the remaining 4x 40GbE ports will be split out to 10GbE using long-range PLR4 optics that break out into 4x 10GbE to end devices.
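Breaking a QSFP+ port out into 4x 10G on Cumulus is just a ports.conf entry; a sketch with made-up port numbers:

```
# /etc/cumulus/ports.conf (excerpt) -- hypothetical port numbers
# QSFP+ ports left at 40G for the ZFS nodes
49=40G
50=40G
# QSFP+ ports split into 4x 10G toward end devices
51=4x10G
52=4x10G
```

After restarting switchd, the split ports show up as swp51s0 through swp51s3 and get configured like any other interface.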

 

Top switch: an Edge-Core AS5712, again running Cumulus Linux. 48x 10GbE SFP+ and 6x QSFP+ 40GbE, loaded with single-mode FiberStore optics (not pictured). Sitting above it is a 72-strand MTP trunk cable going to our workstation/post-producer area. We're still awaiting the fiber enclosures and LGX cassettes so we can break this out into regular LC/LC SMF patch cords to the switch.

 

Below that are two EX4200-48Ts in a Virtual Chassis cluster. These will be used primarily for devices that should have a redundant uplink, so I'm spanning LACP across the two "line cards"; this will also carry LACP for our VM host nodes for Proxmox cluster/networking traffic. Ceph/storage traffic will be bonded 10GbE for the nodes that need it.
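On the Proxmox host side, that redundant uplink is just a standard Linux LACP bond sitting under the bridge; roughly like this (NIC names, bridge name, and addressing are placeholders, not our actual layout):

```
# /etc/network/interfaces on a VM host (excerpt) -- hypothetical NICs/IPs
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.21
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```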

 

Again, a wild Cisco 3560 10/100 switch appears, with 2x 1GbE LACP uplinks to the EX4200s for management traffic.

 

Rack #2 will eventually be filled with 4x 2U 12-bay Supermicro units loaded with SSDs as Ceph nodes, and then some E3 hosts for smaller VM guests. Below that you can spy a Supermicro FatTwin 4U chassis with 4x dual-Xeon v3 nodes. Each node will have 256GB of RAM and, depending on the node, either dual E5-2676 v3 or E5-2683 v3 CPUs. There will be minimal if not zero local storage, as all VM storage will live on the Ceph infrastructure. 4x 10GbE everywhere.
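Clustering the FatTwin nodes under Proxmox is refreshingly boring; it's roughly this (cluster name and IP below are made up):

```
# On the first node
pvecm create studio-cluster

# On each additional node, join against an existing member
pvecm add 10.10.10.21

# Check quorum and membership
pvecm status
```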

 

Random photos of our IT room

 

E2lfnpM.jpg

 

837fah9.jpg

 

4x Eaton 9PX 6kVA UPSes, two per rack, fed with 208V AC. The PDUs are Eaton's G3 managed EMA107-10s.

 

 

 

 


Needs moar storage!

Toss in a couple NetApp Storage Arrays, some Nexus 9Ks (9332s or a 9504 with 40Gb linecards) and zoom zoom zoom, lol

 

Looks nice and organized, I like it :)

If people gave you crap for putting a closed bottle on top of a couple switches they would lose their minds if they ever saw some of the things I've seen.


4 minutes ago, Lurick said:

Needs moar storage!

Toss in a couple NetApp Storage Arrays, some Nexus 9Ks (9332s or a 9504 with 40Gb linecards) and zoom zoom zoom, lol

 

Looks nice and organized, I like it :)

If people gave you crap for putting a closed bottle on top of a couple switches they would lose their minds if they ever saw some of the things I've seen.

Thanks, I tried to get the cabling as clean as I can. The wide racks really help with that. There's still quite a bit of cleanup I can do on the cable tray and internally, but since it's an ongoing project I think it's a good start, no?

 

Heh. I'm a fan of ZFS. We'll probably try ZoL in a multi-node configuration with fencing for true HA on ZFS. That's why we went with deep racks, so eventually we can transition to the new 70-90 drive 4U JBOD chassis.
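If we do go down that road, the failover layer will most likely be Pacemaker/Corosync moving the pool between heads, with IPMI fencing so a hung node can never keep the pool imported. A very rough sketch of the idea, with made-up node, pool, and IPMI details (we haven't built this yet):

```
# Fence a misbehaving head over IPMI before the pool moves
pcs stonith create fence-zfs1 fence_ipmilan ip=10.0.0.101 \
    username=ADMIN password=secret lanplus=1 pcmk_host_list=zfs-head1

# The pool itself as a cluster resource (resource-agents includes a ZFS agent)
pcs resource create tank ocf:heartbeat:ZFS pool=tank

# Floating service IP that follows the pool
pcs resource create tank-ip ocf:heartbeat:IPaddr2 ip=10.0.0.50 cidr_netmask=24
pcs constraint colocation add tank-ip with tank INFINITY
```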

 

Nexus 9K's are great.


Just now, cookiesowns said:

Thanks, I tried to get the cabling as clean as I can. The wide racks really help with that. There's still quite a bit of cleanup I can do on the cable tray and internally, but since it's an ongoing project I think it's a good start, no?

 

Heh. I'm a fan of ZFS. We'll probably try ZoL in a multi-node configuration with fencing for true HA on ZFS. That's why we went with deep racks, so eventually we can transition to the new 70-90 drive 4U JBOD chassis.

 

Nexus 9K's are great.

Very good start. I try to keep "my" lab as clean as I can, but since we're always swapping equipment in and out it gets tough after a while. Static stuff is nice since you can make it right the first time and it tends to stay that way. I'd love to see those racks fully loaded with storage and whatnot; it's one of my soft spots, lol.

I really enjoy the Nexus products. The Gen 2 N9K stuff is really awesome; we got some not too long ago and I've been playing with them for a couple of weeks now. Had some growing pains for a while, but things have gotten better.


  • 2 weeks later...

Made some slow but steady progress over the last few days, and have been fighting some remaining hardware struggles. Had a defective node on the FatTwin chassis with a crispy backplane, plus some NICs that were advertised as X520-DA2s but turned out to be an older generation based on the 82598EB chipset, which has no further driver support and does not support 10GBASE-LR SFP+ optics, only DA copper. Working on getting those replaced...
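If you ever need to check what a "10G NIC" really is before it bites you, it only takes a minute from a booted Linux box (interface name below is just an example):

```
# What controller is actually on the card?
lspci | grep -i ethernet
# Genuine X520s report an 82599 controller; the older cards report 82598EB

# Cross-check driver and firmware for a specific interface (hypothetical name)
ethtool -i eth2
```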

 

Crispy backplane... can you spot the burnt fuse?

bY16mP7.jpg

 

Here are some pics of Rack 2 close to being fully loaded. The remaining nodes not shown are two E3-1270v2 nodes that will be housed in a full-length Supermicro chassis with redundant PSUs. These will be for our pfSense routing needs in an HA cluster.

EDIT: Just realized I didn't take a photo with the remaining 4 other E3-1240v2 nodes, so just imagine those sitting on top of the lone E3 node.

 

The 4x 2U 12-bay chassis will be based on E5 v1 nodes with 64GB-128GB of RAM. These will strictly be used as Ceph nodes in our Proxmox cluster. While they should have enough horsepower to run VMs, it won't be necessary given the horsepower we have on the FatTwin chassis. These Ceph storage nodes will have 8x HGST SSD400M SAS SSDs in them, along with 2x Intel S3710 400GB SSDs for boot in a ZFS mirror (RAID 1).
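The boot mirror is the easy part; conceptually it's just a two-disk zpool mirror, and then each SAS SSD becomes its own OSD once Ceph is up on the node. A rough sketch with placeholder device names (the exact OSD command depends on the Proxmox/Ceph version):

```
# Boot: the two S3710s in a ZFS mirror (hypothetical device names; the
# Proxmox installer normally sets this up for you)
zpool create -f rpool mirror sda sdb

# Each data SSD becomes its own OSD via Proxmox's Ceph tooling
pveceph createosd /dev/sdc
pveceph createosd /dev/sdd
# ...repeat for the remaining data SSDs
```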

 

 

M9g4a1P.jpg

 

hWpWmsw.jpg

 

Y0BuFFA.jpg

 

YzvjF0a.png

 


  • 3 weeks later...

Very, very nice; I'm just not a big fan of pfSense. Why the choice of Edge-Core switches running Cumulus? It just seems that a company like yours would sooner get something from Cisco/Juniper/Arista/Dell instead of a whitebox/white-label switch.


Now that's a server room!

I'm going to improve the server room at my work.


On 10/18/2016 at 2:18 PM, legopc said:

Very, very nice; I'm just not a big fan of pfSense. Why the choice of Edge-Core switches running Cumulus? It just seems that a company like yours would sooner get something from Cisco/Juniper/Arista/Dell instead of a whitebox/white-label switch.

I love Cumulus because of management. We have more sysadmins/devops people than network engineers, and my personal background is in devops.

 

Performance is the same. Cost was drastically different at the time.
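To give a feel for what that means day to day: a Cumulus switch is managed like any other Debian-ish Linux box, so our existing tooling and habits carry straight over. A hypothetical session, nothing specific to our environment:

```
# Interface/bond/bridge config lives in flat files, like any Linux server
sudo vi /etc/network/interfaces
sudo ifreload -a                 # apply the change

# Normal Linux userland and package management
cat /etc/os-release
sudo apt-get update && sudo apt-get install htop
```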

