Recommendation on setting up a 10Gbit internal network

Hello everybody, 

 

I'm new to the forums, but like many of you, I've been following Linus and his crew for the past couple of years.

 

Anyhow, I do have a few questions regarding setting up a 10Gbit network. My first question would be: should I go with Cat6a or fiber? I was leaning towards Cat6a since the runs won't be long; in fact, they pretty much won't be leaving a single room. I only need to connect a few machines (3-4) together to help with large file transfers.

 

The machines I would be looking at getting hooked up would be:

- Custom-built server (X58 platform; this is the primary machine all the others will be accessing. It also has a Drobo for backup, connected via eSATA on a PCI Express card)

- Mac Pro (2009 model)

- iMac (2011 model; it will likely have to connect via some kind of Thunderbolt to PCI Express adapter)

- Custom-built "gaming" machine that will be turning into a workstation soon

 

Here's my rationale:

In my spare time I'm a photographer and I often do video work, so my files can get HUGE (we're talking hundreds of gigabytes). Backing up to, or working off of, the server over gigabit is starting to tax my network really hard when working with RAW image files, and especially with uncompressed ProRes footage. If one machine is sending data or performing a backup to the server and another is also trying to send or even receive data, there are major slowdowns. This seems to be primarily a network bottleneck: the server's CPU is generally under 10% utilization and disk utilization is around 15-20%, but the network is showing 700-900+ Mbit/s going in either direction.
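For rough numbers, here's the kind of difference I'm hoping for. This is just a back-of-envelope sketch; the 300GB project size and the usable line rates are assumptions on my part:

```python
# Ballpark transfer-time comparison; project size and usable rates are assumptions.
size_gb = 300                                  # e.g. a batch of RAW files plus ProRes footage
for label, gbit_per_s in [("1 GbE", 0.9), ("10 GbE", 9.0)]:
    minutes = size_gb * 8 / gbit_per_s / 60    # GB -> Gbit, then divide by usable line rate
    print(f"{label}: ~{minutes:.0f} min")
# Prints roughly: 1 GbE: ~44 min, 10 GbE: ~4 min
```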

 

So I'm wondering if anybody has had any experience setting this kind of network up. I'm probably going to need about four 10Gbit network cards, Cat6a cabling, and a 10Gbit switch, right? Would there be any benefit to using fiber in this kind of environment (scalability, latency, etc.)? My house is already wired for gigabit with Cat5e in every room, and I haven't had any real issues anywhere else, so I'm leaving everything else alone.

 

 

Thank you all so much in advance, not only for reading my novel but also for any advice! :)

 

--

Bryan

While you can do 10GBASE-T over 100m runs of Cat6a, fibre is usually more dependable, and the same runs can be pushed to higher speeds as newer transceivers and switching gear come out.


CPU: Intel i5 4570 | Cooler: Cooler Master TPC 812 | Motherboard: ASUS H87M-PRO | RAM: G.Skill 16GB (4x4GB) @ 1600MHZ | Storage: OCZ ARC 100 480GB, WD Caviar Black 2TB, Caviar Blue 1TB | GPU: Gigabyte GTX 970 | ODD: ASUS BC-12D2HT BR Reader | PSU: Cooler Master V650 | Display: LG IPS234 | Keyboard: Logitech G710+ | Mouse: Logitech G602 | Audio: Logitech Z506 & Audio Technica M50X | My machine: https://nz.pcpartpicker.com/b/JoJ


Personally I would go for fibre for future-proofing, since fibre has been demonstrated at petabit-per-second speeds in lab tests. The switches and NICs will be a lot cheaper for Cat6a though, I imagine.

Gamer, Programmer and Server Administrator

all with a HP laptop <3

 


10Gbps NICs are expensive ($350 each for a good one), and IMO there's no point unless you go fiber (pricey). Also, there's no way your NAS is pushing past 2Gbps unless you have an SSD cache. So I say get a few dual-port gigabit Intel NICs, put two of them in your server and just one in each of your PCs, then get a switch like the Cisco SG300-20 and enable trunking on both the NICs and the switch (obviously). If you plan on spending that much money on a network, you might as well make it balanced: something along the lines of an ERLite-3, an SG300-20, and a DAP-2695. Also, if you plan on running the wires through the walls, I recommend getting wall jacks and a patch panel that use 110-style IDC (press-fit) connectors to terminate the wires.
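One caveat with trunking worth keeping in mind: LACP balances whole flows across the links rather than individual packets, so a single file copy still tops out at about one link's 1Gbps; the benefit shows up when several machines hit the server at once. A toy illustration (the hash policy and addresses here are just assumptions, not a real config):

```python
# Toy model of LACP-style flow hashing: each flow sticks to one physical link,
# so a single large copy stays at ~1Gbps even with two bonded NICs.
def pick_link(src_ip, dst_ip, src_port, dst_port, num_links=2):
    return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

flows = [
    ("192.168.1.20", "192.168.1.2", 50000, 445),   # workstation -> server copy
    ("192.168.1.21", "192.168.1.2", 50001, 445),   # second client, separate flow
]
for flow in flows:
    print(flow, "-> link", pick_link(*flow))
```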

Mein Führer... I CAN WALK !!


10Gbps NICs are expensive ($350 each for a good one), and IMO there's no point unless you go fiber (pricey). Also, there's no way your NAS is pushing past 2Gbps unless you have an SSD cache. So I say get a few dual-port gigabit Intel NICs, put two of them in your server and just one in each of your PCs, then get a switch like the Cisco SG300-20 and enable trunking on both the NICs and the switch (obviously).

 

This isn't a NAS, it's an actual server, and I currently have eight 4TB drives in it with an LSI RAID card. With CrystalDiskMark, I've seen results just shy of 1GB/s reading and around 800MB/s writing. A single gigabit connection only yields around 90-100MB/s (real world), and during renders the performance falls even more. I know this upgrade won't be cheap, but for roughly 10x the performance, it does seem worth it.
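A quick sanity check on that 10x figure, using the benchmark numbers above (the practical per-link rates are my assumption):

```python
# Array throughput vs. what each link can actually carry (all figures approximate).
read_mb_s, write_mb_s = 1000, 800       # CrystalDiskMark sequential results
gige_mb_s, ten_gige_mb_s = 110, 1100    # real-world 1GbE vs 10GbE payload rates
print(f"reads are ~{read_mb_s / gige_mb_s:.0f}x what one gigabit link carries")
print(f"writes are ~{write_mb_s / gige_mb_s:.0f}x what one gigabit link carries")
print(f"10GbE headroom over the sequential reads: ~{ten_gige_mb_s / read_mb_s:.1f}x")
```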

 

Also, for reference, the server has an SSD for boot, and the RAID array is strictly for data storage.


This isn't a NAS, it's an actual server, and I currently have eight 4TB drives in it with an LSI RAID card. With CrystalDiskMark, I've seen results just shy of 1GB/s reading and around 800MB/s writing. A single gigabit connection only yields around 90-100MB/s (real world), and during renders the performance falls even more. I know this upgrade won't be cheap, but for roughly 10x the performance, it does seem worth it.

 

Also, for reference, the server has an SSD for boot, and the RAID array is strictly for data storage.

The problem with 10Gbps is that you're going to be paying into the thousands for a switch, so the best method would be to put a single-port SFP+ 10Gbps NIC in both the server and the rendering PC, connect those two directly, and have everything else connect via the existing gigabit network.
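If you do go the direct-connect route, a crude single-stream test like the sketch below gives a ballpark figure for what the link itself can move before you start blaming the disks. This is a hypothetical script, the point-to-point address is an assumption, and Python over a single stream won't show the absolute ceiling of the link:

```python
# Rough single-stream throughput test between two directly connected hosts.
# Run receive() on the server first, then send() on the workstation.
import socket
import time

PORT = 5001
CHUNK = 1024 * 1024          # 1 MiB per send
TOTAL = 2 * 1024 ** 3        # push 2 GiB in total

def receive():
    srv = socket.socket()
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    got, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    secs = time.time() - start
    print(f"{got / secs / 1e6:.0f} MB/s ({got * 8 / secs / 1e9:.2f} Gbit/s)")

def send(server_ip="10.0.0.1"):  # assumed point-to-point address for the server
    cli = socket.create_connection((server_ip, PORT))
    buf = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        cli.sendall(buf)
        sent += CHUNK
    cli.close()
```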

Mein Führer... I CAN WALK !!


I'd go for a single 10Gbps connection to the workstation, although switches are available for under $1k, for example the Netgear XS708E.

Or you could go for just 4Gbps fibre, which is cheaper, or probably 8Gbps.

i am not a native speaker of the english language

[spoiler=My Rig: ]CPU: i7-3770k@Stock | RAM: 3x4GB@1600MHz | GPU: 660 Ti@Stock | Storage: 250GB 840 Evo, 1x1TB, 2x2TB, 2x640GB, 1x500GB (JBOD) + NAS: D-Link DNS-320 2x3TB RAID1

 

If I were going down this route I'd probably go with copper, because the cables are cheap. With the distances you're talking about, you probably only need Cat6 for 10Gbps. Just from a quick Google search, you can get a smart 8-port 10Gbps switch that supports Link Aggregation (10Gbps Link Aggregation!) for ~$800, which is insane given how much it cost a couple of years ago. You can get 10GBASE-T NICs for as little as ~$200. It definitely adds up quickly, but if you have the hardware already, need the performance, AND can budget for a couple of grand... why not?
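Rough tally of what that works out to (the quantities and cable price are my guesses):

```python
# Ballpark cost for a copper 10GbE setup based on the prices mentioned above.
nics   = 4 * 200    # ~$200 per 10GBASE-T NIC, one per machine
switch = 800        # smart 8-port 10Gbps switch with Link Aggregation
cables = 4 * 15     # assumed ~$15 per Cat6a patch lead
print(f"~${nics + switch + cables}")   # roughly $1,660
```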

Fools think they know everything, experts know they know nothing

