
The Dark Arts of eBay Networking Gear

FaultyWarrior

***This post is a work in progress, isn't 100% complete, and is surely littered with spelling and grammatical errors.  I'll be editing it as I put it together.  Feel free to contribute any info you see fit!***

 

I was originally planning this as a page on my website, but after Linus's video on the cheap 10Gb gear last night (here if you've not seen it), I figured I might as well just post it to the LTT forums instead, since I'm sure there are some other network aficionados who'd benefit from it.  I'm also working on a video series on all of this for those who prefer that method of content consumption.  My channel is here, and the playlist of this stuff is here.

 

As the title suggests, this thread is a dive into the (apparent) dark arts of second-hand, but still very usable, high-end networking gear that can be had for very little on eBay, along with my trials and tribulations as I move my network from an aging CAT5e infrastructure to multi-mode fibre & CAT7.  It's broken down by item type to make it a bit easier to navigate. {note - if people have an interest in me expanding this to other formerly very expensive, but now nearly worthless enterprise hardware like older SAN gear, let me know...I've got plenty to go off of there!}


Setup & Config:
My network has 5 nodes: my daily-driver Mac Pro workstation, my Sun Microsystems Ultra 40 M2 hardware-development workstation, my PowerMac G5 "Quad" SAN filer, my Intel OEM-built backup server, and my Dell PowerEdge 2850 pfSense router.  Eventually my parents' Mac Mini, along with my Xserve G5 NAS, will gain NICs and be added here as well. (They'll both just have to limp along on gigabit for now.)

 

I have the 4 copper gigabit ports of the 10Gb switch in an LACP group and connected to the first 4 ports on a 48-port Netgear copper gigabit switch, so I have plenty of gigabit ports to work with as well.

 

All devices have Jumbo Frames enabled and set to an MTU of 9000.  No other settings were tweaked, since I don't fully understand which commands do what just yet (it's mostly CLI-based, with piss-poor documentation).
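
If you want to do the same on the Linux side, something along these lines should do it - the interface name and IP below are placeholders for whatever your setup uses:

# Set the NIC to a 9000-byte MTU ("enp1s0" is an example name - check "ip link" for yours)
sudo ip link set dev enp1s0 mtu 9000

# Confirm it stuck
ip link show enp1s0

# End-to-end check: 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000, and
# -M do forbids fragmentation, so this only succeeds if jumbo frames work the whole path
ping -M do -s 8972 192.168.1.10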

If you're looking for a really in-depth look at my network layout and all of the components, see my YT channel for videos...far too much to try and cram into a single post.


Below are the results of my first real-world testing with my hardware.  Do note that all of this testing was done using RAM drives, since outside of my 56-drive SAN pool, nothing else I own can keep up with 10 Gigabit.  If you plan to use SoftPerfect's RAM Disk tool that Linus recommended, you'll have to grab the older version off MajorGeeks, since they changed the licensing to paid on the latest version (lame!)
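
If you'd rather skip the Windows tool entirely, the Linux and Mac boxes can make a RAM disk with nothing but built-in commands - the sizes here are just examples, so scale them to whatever spare RAM you have:

# Linux: mount an 8GB tmpfs RAM disk to shuttle test files through
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk

# macOS: create a 4GB RAM disk (sectors are 512 bytes, so 8388608 x 512 = 4GB)
diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://8388608)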

 

So - here are the results:

NFS transfers from my Mac Pro workstation to my Sun Ultra 40 M2 workstation (booted into CentOS 7) were a near-perfect pegging of the connection (a rough sketch of the mount side is below).
AFP transfers from my Mac Pro workstation to my SAN also pegged the connection.
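
For anyone wanting to try the same thing, mounting an NFS export on a Linux client for this sort of testing looks roughly like this - the server name and export path are placeholders, not my actual boxes:

# Mount the export with 1MB read/write block sizes (the default on modern kernels
# anyway, but worth pinning when you're chasing throughput numbers)
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o rsize=1048576,wsize=1048576 filer.local:/export/scratch /mnt/nfs-test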

 

Now for the downer.  When trying SMB shares, I saw the same problems Linus saw.  From my Mac Pro to my Windows 10-based backup server, the transfer sputtered out at around 300MB/s.  Booting the same server into a live Linux distro and using NFS pegged the connection, again proving that SMB just isn't designed out of the box to handle this kind of bandwidth.  While this isn't a big deal for me, since my backup server is the only Windows box on the network (because neither Backblaze nor Dropbox has a Linux client!), it does mean that if I ever have to restore data from that machine, it won't go nearly as fast as it could.
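
If you run into something similar and want to rule out the link itself before blaming the protocol, a raw iperf3 run is the quickest sanity check - the IP here is a placeholder:

# On the receiving box, start the server side
iperf3 -s

# On the sending box: 4 parallel streams for 30 seconds (10.0.0.5 = the other box's IP)
iperf3 -c 10.0.0.5 -P 4 -t 30

If iperf3 shows ~9.4Gb/s but your file copies don't, the cabling and NICs are fine, and it's the protocol (or the disks) holding you back.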

 

As for Internet performance, this setup obviously didn't make a difference.  My mediocre 150/20 connection isn't going to magically get faster by upgrading my network.  I mention that here because it seems to be something people are getting mixed up about in relation to Linus's videos.  LAN speed and WAN speed are VERY different things.  The exception is for those of us lucky enough to live in an area where either FTTH or MetroE is available; there, upgrading to 10Gb internally could make a difference, although the case for that is VERY limited.  Unless you have Comcast's "Gigabit Pro" 2Gb/s residential MetroE service (FWIW, I'll have it by the end of the year, so I'll be able to give feedback on that soon!), or you're able to shell out thousands of dollars a month for an enterprise MetroE connection from one of the major carriers, anything faster than gigabit won't help your internet - even Google Fiber tops out at 1Gb/s.  On a quick side note - based on my reading, those with Gigabit Pro actually get a 10Gb link; the Juniper access router they provide throttles you to 2Gb/s, so in the future they'll be able to provision higher speeds.

 

Now for the actual hardware I'm running:

 

Let's start with the switch, since it's really the first step.  While you *CAN* forgo this and do what Linus did initially - daisy-chaining machines to each other with dual-port NICs (see video here) - with switches this cheap, there isn't much of a point in doing that anymore.

 

My experience is with the Quanta LB6M switch that Linus showed in his video yesterday.  I bought one for my day job about 6 months ago.  We toyed with it for a few hours, then shelved it since we needed to wait until we got our 2017 budget to make the building infrastructure changes to support it (bleh).  After my initial experience with the one at work, when it came time to start revamping my home network, I picked up the same switch for myself about 2 weeks ago.  The switch itself is pretty awesome.  It's got TONS of options; however, the documentation is abominable.  There are a few random forum threads and the manual for a different model that uses the same CLI, but it's very much an "enter a command and hope it works" kind of thing.  Once you get past that and start figuring things out, it's very good...perhaps not on the level of Cisco or Juniper, but a pretty damn close second IMO.

 

Now for the elephant in the room - the interconnect part of this.

 

Everyone seems to give SFP+ stuff a bad rap, saying it's hard to work with and expensive.  It's a mixed bag IMO.  Whatever you do, avoid direct-attached copper (DAC) cables at all costs.  THOSE are expensive and a bitch to work with due to their physical thickness and large connectors.  For old hands at this kind of stuff, the only wiring that can rival direct-attach copper for "impossible to work with" is 10BASE5 thicknet.  The alternative (and preferred medium) is fibre optics.  On the NIC side, it's WAY cheaper than 10 Gigabit RJ45 stuff, and while fibre isn't as easy to pull through a building and/or terminate as CAT6A or CAT7, there is a viable workaround to that issue: I bought pre-made "patch cords" of roughly the correct length, and then used LC keystone jacks on both sides.  While this can leave you with a random bundle of extra cable to hide somewhere, if you can't swing the cost of a fibre termination kit or a professional to do the terminations (in which case you can probably afford to just skip this thread entirely and get RJ45 stuff), it's a very viable solution for a home environment.

 

To tack onto that, pre-made fibre cables are dirt-cheap - about $1.30 (USD) per meter, give or take a few cents.  For the dozen 3-meter (a hair shy of 10-foot) cables I bought, the total came to ~$46.  For reference, Monoprice charges ~$1.50 per meter for pre-made CAT6A cables, so price-wise you're not doing too badly.


SFP+ optical transceivers are equally cheap - $10-$20 (USD) on average, including shipping.  Mine are a mix of Myricom and JDSU units.  All of them are short-reach 850nm units, since even my in-wall runs are at most 30 meters end-to-end (SR is rated for 300 meters).


Now the NICs.  Mine are older Myricom units - specifically, the 10G-PCIE-8B-S.  They're readily available for $60-$100 on eBay.  I chose these not only for their cost, but for their EXCELLENT driver support.  They've got drivers available for pretty much any OS from the last decade, and if they don't have what you need, you can grab the source-code package and compile the drivers yourself.  (This will come in handy when trying to get the card in my Sun workstation working under Solaris.)
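
On Linux specifically, you probably won't even need to compile anything, since the in-kernel myri10ge driver has handled these cards for ages.  A quick way to check that the card and driver are talking (interface name is a placeholder):

# Confirm the card shows up on the PCIe bus
lspci | grep -i myri

# Load the in-kernel driver if it isn't already loaded
sudo modprobe myri10ge

# See which driver and firmware the interface is actually using ("enp4s0" is an example)
ethtool -i enp4s0

If the link refuses to come up, you may also need the Myricom firmware blobs from your distro's linux-firmware package.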

 

I haven't tried the cards under Windows XP; however, I've found presentation slides that suggest they work.  Once I get XP loaded on the Sun workstation, I'll give it a shot and post my results.  That said, outside of fringe cases like mine, where I use XP for its compatibility when doing hardware development and debugging, I don't think anyone will realistically still be running it at this point - especially on a machine where they'd want a 10 Gigabit connection.

"VictoryGin"

Case: Define R6-S, Black | PSU: Corsair RM1000x, Custom CableMod Cables | Mobo: Asus ROG Zenith Extreme X399 | CPU: Threadripper 1950X | Cooling: Enermax Liqtech TR4 360mm AIO with Noctua NF-F12's | RAM: 64GB (8x8GB) G.SKILL Flare X DDR4-2400 | GPUs: AMD Radeon Vega Frontier Edition; NVIDIA Quadro P400 | Storage: 3x Intel 760p 128GB NVMe RAID-0 | I/O Cards: Blackmagic Intensity Pro, Mellanox ConnectX-3 Pro EN 40GbE NIC, StarTech FireWire, StarTech Serial/Parallel  | Monitors: 3x 30” First Semi F301GD @ 2560x1600, 60Hz; 50" Avera 49EQX20 @ 3840x2160, 60 Hz | Keyboard: Black/Gray Unicomp Classic | Mouse: Logitech MX Master 2S | OS: Windows 10 Enterprise LTSB 2016
