
FaultyWarrior

Member
Reputation Activity

  1. Like
    FaultyWarrior got a reaction from CableMod in Anyone Have Issues With CableMod Recently?   
    OK.  I heard back from CableMod, and they confirmed that my order did indeed ship.  So this lies at DHL's feet.
     
    If I don't see the tracking move by Tuesday, I'll call them and chew them out...there's no excuse for them to need more than TWO WEEKS just to acknowledge that they've taken possession of a package, especially when their site claims "normal delivery is usually within 2-3 days".
     
    EDIT:
    Woke up this morning to a text from DHL...seems something kicked their butts into gear...package processed and will be delivered Tuesday.
  2. Funny
    FaultyWarrior got a reaction from Tristerin in Anyone Have Issues With CableMod Recently?   
    Yes - THIS.  I ragequit like hell when I saw that my NAS got shipped via FedEx, yet Newegg shipped the SSDs (FOR THE DAMN NAS!) via SmartPost or whatever...I've had the NAS for well over a week, and the stupid SSDs are just going out for delivery today!  Even worse, I ordered an SSD for my parents over the holidays to replace a dying one (yay for Christmas Day IT work?!) - they got their drive within 3 days, and I used the free shipping option....
  3. Like
    FaultyWarrior reacted to Tristerin in Anyone Have Issues With CableMod Recently?   
    I have used them once with no issues - my experience with DHL, however: ISSUES!  Slowest shipping company in the world.
     
    I have had 2 builds held up by DHL.  Ordered all the 'ish from Newegg...Newegg puts it all on FedEx sans the SSD every gosh-darned time.  No, they put the SSD on DHL.  So usually 3-5 days after I order the parts, I stare at a pile of goodness waiting for the SSD to arrive...it usually took a week or more.
     
    Bought an SSD on Black Friday...they shipped it DHL...everything I got Black Friday from Newegg was on my doorstep by Tuesday the next week.  Got the SSD Monday this week.
  4. Like
    FaultyWarrior got a reaction from simson0606 in Asus WS C621E SAGE   
    Well then...guess I found the board I'll use in my next workstation once my EVGA Classified SuperRecord 2 dies or I need more power.
  5. Agree
    FaultyWarrior reacted to 79wjd in Large Parallel Streaming Writes: PCIe SSD vs HDD RAID   
    As you said, you're already limited by the internet connection anyway, so going with an SSD wouldn't improve speeds.  So would it be better?  Well, in the sense that it's simpler and SSD-based (so lower latency), yes, but it's also less practical due to costing significantly more and/or having significantly less space. 
  6. Informative
    FaultyWarrior got a reaction from leadeater in Large Parallel Streaming Writes: PCIe SSD vs HDD RAID   
    Alright - more info on the setup:
     
    The setup is a mix of hardware & software RAID.  I have 8 arrays of 7 drives each in hardware RAID-5 - each array has 512MB of cache, plus each drive has 8MB onboard.  Each array is connected to the host server via 2-gigabit Fibre Channel.  The 8 logical "drives" are then put in a software RAID-0 using macOS's built-in disk utilities.  The host machine is a quad-core G5 (older, but still plenty fast as a file server) with 16GB of RAM, 2 SSDs in software RAID-1 for the OS, and a Myricom 10GbE card.  File sharing is handled via NFS, so no issues with that.
     
    The drives themselves are IDE (I said this was cheap, didn't I?), but with 7 drives on a 2-gigabit FC link, the individual drive speed is effectively nulled out as the link is saturated (~200MB/s after overhead).
     
    I don't know how much the hardware RAID, RAM caching, and other factors play in masking any latencies, but I've yet to drop any frames even with 4 streams recording.
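     
    For anyone wondering how those numbers stack up, here's a rough back-of-the-envelope check in Python.  The ~60MB/s per-drive figure is just an assumption for a typical IDE drive of that era; every other number comes from the description above.
     
        # Rough sanity check of the SAN's bandwidth and capacity figures.
        DRIVES_PER_ARRAY = 7
        ARRAYS = 8
        FC_LINK_MBPS = 200      # 2Gb Fibre Channel link, ~200MB/s after overhead
        PER_DRIVE_MBPS = 60     # ASSUMED sequential speed for an IDE drive of that era

        # Each array's drives can collectively outrun their FC link:
        array_drive_bw = DRIVES_PER_ARRAY * PER_DRIVE_MBPS          # ~420 MB/s
        print(f"drives could feed ~{array_drive_bw} MB/s; the link caps it at {FC_LINK_MBPS} MB/s")

        # Striping 8 link-limited arrays in software RAID-0:
        aggregate = ARRAYS * FC_LINK_MBPS                            # 1600 MB/s
        print(f"aggregate ceiling ~{aggregate} MB/s vs ~1250 MB/s for 10GbE")

        # RAID-5 gives up one drive per array; RAID-0 on top gives up nothing.
        usable = (DRIVES_PER_ARRAY - 1) / DRIVES_PER_ARRAY
        print(f"usable capacity: {usable:.0%} of the raw space across all 56 drives")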
  7. Like
    FaultyWarrior got a reaction from Techstorm970 in My New "PC"   
    Yeah, I went back and fixed it...it was actually just a Techquickie episode, not a Fast As Possible.
  8. Agree
    FaultyWarrior reacted to nicklmg in Acer Predator 21x Review   
    It's also a pretty easy mistake to make, and a pretty easy one to miss when reviewing graphs.

    Not saying that we shouldn't have caught it; we definitely should have.
  9. Informative
    FaultyWarrior got a reaction from Bittenfleax in The Dark Arts of eBay Networking Gear   
    ***This post is work-in-progress, isn't 100% complete, and is surely littered with spelling and grammatical errors.  I'll be editing it as I put it together.  Feel free to contribute any info you see fit!***
     
    I was originally planning this as a page on my website, but after Linus's video on the cheap 10Gb gear last night (here if you've not seen it), I figured I might as well just post it to the LTT forums instead, since I'm sure there are some other network aficionados who'd benefit from it.  I'm also working on a video series on all of this for those who prefer that method of content consumption.  My channel is here, and the playlist of this stuff is here.
     
    As the title suggests, this thread is to delve into the (apparent) dark arts of second-hand, but still very usable, high-end networking gear that can be had for very little on eBay, along with my trials and tribulations as I move my network from an aging CAT5e infrastructure to multi-mode fibre & CAT 7.  It's broken down by item type to make it a bit easier to navigate. {note - if people have an interest in me expanding this to other formerly very expensive, but now worthless, enterprise hardware like older SAN gear, let me know...I've got plenty to go off of there!}

    Setup & Config:
    My network has 5 nodes: my daily-driver Mac Pro workstation, my Sun Microsystems Ultra 40 M2 hardware development workstation, my PowerMac G5 "Quad" SAN filer, my Intel OEM-built backup server, and my Dell PowerEdge 2850 pfSense router.  Eventually my parents' Mac Mini, along with my XServe G5 NAS, will gain NICs and be added here as well. (they'll both just have to limp along on gigabit for now)
     
    I have the 4 copper gigabit ports of the 10Gb switch in an LACP group and connected to the first 4 ports on a 48-port Netgear copper gigabit switch, so I have plenty of gigabit ports to work with as well.
     
    All devices had Jumbo Frames enabled and set to 9000.  No other settings were tweaked, since I don't fully understand which commands do what just yet (it's mostly CLI-based with piss-poor documentation).
    If you're looking for a really in-depth look at my network layout and all of the components, see my YT channel for videos...far too much to cram into a single post.
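     
    If you want to double-check that the 9000-byte MTU actually took effect on a given host, here's a quick sketch in Python.  It's Linux-only (it uses the SIOCGIFMTU ioctl; macOS needs a different approach), and the interface name is just an example:
     
        # Read an interface's MTU on Linux via the SIOCGIFMTU ioctl.
        import fcntl
        import socket
        import struct

        SIOCGIFMTU = 0x8921  # Linux ioctl number for "get interface MTU"

        def get_mtu(ifname):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                # struct ifreq: 16-byte interface name followed by an int field
                ifreq = struct.pack("16si", ifname.encode(), 0)
                result = fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifreq)
                return struct.unpack("16si", result)[1]
            finally:
                s.close()

        print(get_mtu("eth0"))  # expect 9000 if jumbo frames are set correctly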

    Below are my first real-world tests with my hardware.  Do note that all of this testing was done using RAM drives, since outside of my 56-drive SAN pool, nothing else I own could keep up with 10 Gigabit.  If you plan to use SoftPerfect's tool that Linus recommended, you'll have to grab the older version off MajorGeeks, since they changed the licensing to paid on the latest version (lame!)
     
    So - here are the results:
    NFS transfers from my Mac Pro workstation to my Sun Ultra 40 M2 workstation (booted into CentOS 7) pegged the connection almost perfectly.
    AFP transfers from my Mac Pro workstation to my SAN also pegged the connection.
     
    Now for the downer.  When trying SMB shares, I saw the same problems Linus saw.  From my Mac Pro to my Windows 10-based backup server, the transfer sputtered out at around 300MB/s.  Booting the same server into a live Linux distro and using NFS pegged the connection, again proving that SMB just isn't designed by default to handle this kind of bandwidth.  While this isn't a big deal for me, since my backup server is the only Windows box on the network (because neither Backblaze nor Dropbox has a Linux client!), it just means that if I ever have to restore data from this machine, it won't go nearly as fast as it could.
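     
    On the testing side - if you'd rather skip the SoftPerfect licensing hassle entirely, a crude memory-to-memory TCP test is easy to improvise.  The sketch below is plain Python sockets (the port number and file name are arbitrary), nothing as refined as iperf, but it's enough to see whether a link gets anywhere near 10Gb without disks getting in the way:
     
        # Minimal memory-to-memory TCP throughput test - no disks involved.
        # Run "python3 nettest.py" on the receiver, "python3 nettest.py <receiver-ip>" on the sender.
        import socket
        import sys
        import time

        PORT = 5201               # arbitrary port choice
        CHUNK = 1 << 20           # 1 MiB per send
        TOTAL = 4 * (1 << 30)     # push 4 GiB total

        def receiver():
            with socket.create_server(("", PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    got, start = 0, time.time()
                    while True:
                        data = conn.recv(CHUNK)
                        if not data:
                            break
                        got += len(data)
                    secs = time.time() - start
                    print(f"received {got / 1e6:.0f} MB at {got / secs / 1e6:.0f} MB/s")

        def sender(host):
            payload = b"\x00" * CHUNK
            sent, start = 0, time.time()
            with socket.create_connection((host, PORT)) as conn:
                while sent < TOTAL:
                    conn.sendall(payload)
                    sent += len(payload)
            secs = time.time() - start
            print(f"sent {sent / 1e6:.0f} MB at {sent / secs / 1e6:.0f} MB/s")

        if __name__ == "__main__":
            receiver() if len(sys.argv) == 1 else sender(sys.argv[1])
     
    Whatever NFS/SMB numbers you get will always sit below whatever this reports, since it measures the raw TCP path only.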
     
    As for Internet performance, this setup obviously didn't make a difference.  My mediocre 150/20 connection isn't going to magically get faster by upgrading my network.  I put that here because this seems to be something people are getting mixed up about from Linus's videos.  LAN speed and WAN speed are VERY different things.  The exception is for those of us lucky enough to live in an area where either FTTH or MetroE is available.  Upgrading to 10Gb internally could make a difference there, although the case for that is VERY limited.  Unless you have Comcast's "Gigabit Pro" 2Gb/s residential MetroE service (FWIW, I'll have it by the end of the year, so I'll be able to give feedback on that soon!), or you're able to shell out thousands of dollars a month for an enterprise MetroE connection from one of the major carriers, anything faster than gigabit won't help your internet - even Google Fiber tops out at 1Gb.  On a quick side-note - based on my reading, those with Gigabit Pro actually get a 10Gb link - the Juniper access router they provide throttles you to 2Gb/s, so in the future they'll be able to provision higher speeds.
     
    Now for the actual hardware I'm running:
     
    Let's start with the switch, since it's really the first step.  While you *CAN* forgo this and do what Linus did initially - daisy-chaining machines to each other with dual-port NICs (see video here) - with switches this cheap, there isn't much point in doing that anymore.
     
    My experience is with the Quanta LB6M switch that Linus showed in his video yesterday.  I bought one for my day job about 6 months ago.  We toyed with it for a few hours, then shelved it, since we needed to wait until we got our 2017 budget to make the building infrastructure changes to support it (bleh).  After my initial experiences with the one at work, when it came time to start revamping my home network, I picked up the same switch for myself about 2 weeks ago.  The switch itself is pretty awesome.  It's got TONS of options; however, the documentation is abominable.  There are a few random forum threads and the manual for a different model which uses the same CLI, although it's very much an "enter a command and hope it works" kind of thing.  Once you get past this and start figuring things out, it's very good...perhaps not on the level of Cisco or Juniper, but a pretty damn close second IMO.
     
    Now for the elephant in the room - the interconnect part of this.
     
    Everyone seems to give SFP+ stuff a bad rap, saying it's hard to work with and expensive.  It's a mixed bag IMO.  Whatever you do, avoid direct-attached copper cables at all costs.  THOSE are expensive and a bitch to work with due to their physical thickness and large connectors.  For old hands at this kind of stuff, the only challenger for "impossible to work with" wiring that can match direct-attach copper cabling is 10BASE5 thicknet.  The alternative (and preferred medium) is fibre optics.  On the NIC side, it's WAY cheaper than 10 Gigabit RJ45 stuff, and while it's not as easy to pull through a building and/or terminate as CAT6A or CAT7, there is a viable workaround to that issue.  I bought pre-made "patch cords" of roughly the correct length, and then used LC keystone jacks on both sides.  While this can leave you with a random bundle of extra cable to hide somewhere, if you can't swing the cost of a fibre termination kit or a professional to do the terminations (in which case you can probably afford to just skip this thread entirely and get RJ45 stuff), it's a very viable solution for a home environment.
     
    To tack onto that, pre-made fibre cables are dirt-cheap - about $1.30 (USD) per meter, give or take a few cents.  For the dozen 3-meter (a hair shy of 10 feet) cables I bought, the total came to ~$46.  For reference, MonoPrice charges ~$1.50 per meter for pre-made CAT6A cables, so price-wise you're not doing too badly.

    SFP+ optical transceivers are equally cheap - $10-$20 (USD) on average, including shipping.  Mine are a mix of Myricom and JDSU units.  All of them are short-reach 850nm units, since even my in-wall runs are at most 30 meters end-to-end (SR is rated for 300 meters).

    Now the NICs.  Mine are older Myricom units - specifically, the 10G-PCIE-8B-S.  They're readily available for $60-$100 on eBay.  I chose these not only for their cost, but also for their EXCELLENT driver support.  They've got drivers available for pretty much any OS from the last decade, and if they don't have what you need, you can just grab the source code package and compile the drivers yourself.  (This will come in handy when trying to get the card in my Sun workstation working under Solaris.)
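     
    Putting the prices above together, a rough per-node cost works out like this (Python just for the arithmetic; all figures are the eBay/MonoPrice ballpark numbers quoted above, so treat it as an estimate, not a quote):
     
        # Ballpark cost to add one machine to the 10Gb network, using the prices above.
        nic = (60, 100)            # Myricom 10G-PCIE-8B-S, eBay price range
        transceiver = (10, 20)     # SFP+ SR optic; one in the NIC, one in the switch
        cable_per_meter = 1.30     # pre-made multi-mode patch cord
        run_length_m = 3           # a 3-meter run, like the cables I bought

        low = nic[0] + 2 * transceiver[0] + cable_per_meter * run_length_m
        high = nic[1] + 2 * transceiver[1] + cable_per_meter * run_length_m
        print(f"per node: ${low:.2f} - ${high:.2f} (not counting your share of the switch)")
        # -> roughly $84 - $144 per machine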
     
    I haven't tried the cards under Windows XP, however I've found presentation slides that suggest they work.  Once I get XP loaded on the Sun workstation I'll give it a shot and post my results; although outside of fringe cases like mine, where I use XP for its compatibility when doing hardware development and debugging, I don't think anyone will realistically still be running it at this point - especially on a machine where they'd want a 10 Gigabit connection.
  10. Like
    FaultyWarrior got a reaction from oskarha in The Dark Arts of eBay Networking Gear   
    (Same post as item 9 above.)
  11. Informative
    FaultyWarrior got a reaction from Gazzony in The Dark Arts of eBay Networking Gear   
    (Same post as item 9 above.)
  12. Agree
    FaultyWarrior reacted to Electronics Wizardy in DiskWarrior Help   
    Use ddrescue; it's made for this, and it also shows progress.
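     
    In case it helps anyone finding this later - a typical two-pass ddrescue run looks something like the sketch below.  It's wrapped in Python purely to show the flags in one place; the device paths and mapfile name are placeholders, so triple-check them against your own disks before running anything:
     
        # Hypothetical ddrescue recovery of a failing drive (GNU ddrescue must be installed).
        import subprocess

        SOURCE = "/dev/disk2"    # the dying drive - placeholder, verify before use!
        DEST = "/dev/disk3"      # the replacement drive - placeholder, verify before use!
        MAPFILE = "rescue.map"   # lets ddrescue resume and track progress

        # Pass 1: grab the easy blocks first, skipping the slow scraping phase (-n);
        # -f is required when writing directly to a block device.
        subprocess.run(["ddrescue", "-f", "-n", SOURCE, DEST, MAPFILE], check=True)

        # Pass 2: go back and retry the bad areas a few times (-r3).
        subprocess.run(["ddrescue", "-f", "-r3", SOURCE, DEST, MAPFILE], check=True)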