
What NICs should I buy?

jackm1120

I'm looking to buy some dual or quad port PCIe NICs second hand off eBay in order to speed up my LAN. Has anyone found good 4 port NICs that aren't too pricey and support load balancing across all ports? I'm also worried about driver support for these products on Windows 10. Any help would be appreciated.

 

Thanks everyone


Is this based on the video Linus made? If so, I wouldn't advise doing it unless you specifically want increased bandwidth (not speed, FYI) to a server. If you want to connect it to a switch, then you need to make sure it supports some form of link aggregation protocol and has enough ports to hook up other devices as well.


Yes it is. My house contains over 15 standalone computers and about 7 TVs, each with their own media PC. My server is based on a Z87 G1 Sniper 5 motherboard and is stuck at gigabit speed. I need to speed this up and want faster file transfers from one computer to the next.


I'd start with logging your usage to see if you actually max out your current connection. Depending on the quality of the movies you are streaming, even going to 7 media PCs at once should only use about half of a gigabit link. Plus, unless you are storing all of your media on SSDs, the hard drives will quickly be the next bottleneck. Just throwing it out there so you don't spend money unless you actually need to. For the record, even though I am telling you that you probably shouldn't do it, I have teamed links running to a few of my computers and servers... It was completely unnecessary, but I wanted to do it. :P
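
As a rough back-of-the-envelope check (the stream bitrates below are my assumptions for illustration, not measurements from any real setup):

```python
# Rough sanity check: how much of a gigabit link do 7 simultaneous streams use?
# Bitrates are illustrative assumptions (high-bitrate remux vs. typical encode).
GIGABIT_MBPS = 1000  # line rate, ignoring protocol overhead

streams = 7
for label, mbps in [("Blu-ray remux (~40 Mbps)", 40), ("1080p encode (~10 Mbps)", 10)]:
    total = streams * mbps
    print(f"{label}: {total} Mbps total = {total / GIGABIT_MBPS:.0%} of the link")
```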

 

As far as I am aware, Intel Pro/1000s are the go-to for used quad port cards, and they do support load balancing. To save you the issues that Linus had, I would recommend getting a smart switch and doing proper Link Aggregation/Teaming. TP-Link has a fairly cheap line of "Easy Smart" switches that support LACP. For a bit more money you can go with a nice Ubiquiti switch like the ES-48-Lite. Note that with LACP you will still only get 1 Gbps to each computer, but a quad port card gives you the ability to do 1 Gbps each to four computers.

 

I'm not sure where the line is, but my old server had a Xeon L5430 CPU (4 cores at 2.13 GHz, so not too low end) and I noticed file transfers would occasionally be much slower than they should be. I did some testing: when I saturated the connections with transfers from RAM disks, the CPU was pegged at 100%. I ended up upgrading to a new platform with an E5-1620 v2 and it runs much better. I had no idea how much CPU power large file transfers took.


20 hours ago, Scheer said:

I'd start with logging your usage to see if you actually max out your current connection. Depending on the quality of the movies you are streaming, even going to 7 media PCs at once should only use about half of a gigabit link. Plus, unless you are storing all of your media on SSDs, the hard drives will quickly be the next bottleneck. Just throwing it out there so you don't spend money unless you actually need to. For the record, even though I am telling you that you probably shouldn't do it, I have teamed links running to a few of my computers and servers... It was completely unnecessary, but I wanted to do it. :P

 

As far as I am aware, Intel Pro/1000s are the go-to for used quad port cards, and they do support load balancing. To save you the issues that Linus had, I would recommend getting a smart switch and doing proper Link Aggregation/Teaming. TP-Link has a fairly cheap line of "Easy Smart" switches that support LACP. For a bit more money you can go with a nice Ubiquiti switch like the ES-48-Lite. Note that with LACP you will still only get 1 Gbps to each computer, but a quad port card gives you the ability to do 1 Gbps each to four computers.

 

I'm not sure where the line is, but my old server had a Xeon L5430 CPU (4 cores at 2.13 GHz, so not too low end) and I noticed file transfers would occasionally be much slower than they should be. I did some testing: when I saturated the connections with transfers from RAM disks, the CPU was pegged at 100%. I ended up upgrading to a new platform with an E5-1620 v2 and it runs much better. I had no idea how much CPU power large file transfers took.

I want to be able to achieve 4 Gbps from PC to PC, not 1. What switch would you recommend in order to accomplish this? Linus claimed any 24 port switch should be fine, but I want to be sure before I make a purchase.


16 minutes ago, jackm1120 said:

I want to be able to achieve 4 Gbps from PC to PC, not 1. What switch would you recommend in order to accomplish this? Linus claimed any 24 port switch should be fine, but I want to be sure before I make a purchase.

You'll want a switch that supports 802.3ad link aggregation, as well as an OS that supports LAG too.


1 hour ago, Windspeed36 said:

You'll want a switch that supports 802.3ad link aggregation, as well as an OS that supports LAG too.

I thought that LAG wasn't able to give you more than 1 Gbps point to point though, and was only useful for multiple clients hitting one server?

 

I could be confusing myself, just wanted to check.

 

 

OP, once you factor in running three more cables to each computer (assuming you have finished walls), you may as well just bite the bullet now and go 10GbE...

 

Switch: http://www.amazon.com/dp/B00B46AEE6/

Example NIC: http://www.ebay.com/itm/261646935568?


 

27 minutes ago, Scheer said:

I thought that LAG wasn't able to give you more than 1 Gbps point to point though, and was only useful for multiple clients hitting one server?

 

I could be confusing myself, just wanted to check.

 

 

OP, once you factor in running three more cables to each computer (assuming you have finished walls), you may as well just bite the bullet now and go 10GbE...

 

Switch: http://www.amazon.com/dp/B00B46AEE6/

Example NIC: http://www.ebay.com/itm/261646935568?

It won't make an individual file transfer faster, but it'll increase the aggregate speed of multiple transfers, provided the storage at both ends can keep up, as well as the infrastructure along the way.

 

A single 50 GB file transfer is still going to go at 112 MB/s. However, provided the sending and receiving storage is fast enough, both connections are at least dual gigabit, and any interconnect in the middle keeps up, you'll see 112 MB/s PER file transfer if you've got two 50 GB files. You will NOT see a 225 MB/s file transfer just because you've got two gigabit lines in link aggregation.
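
To show where those numbers come from (a rough sketch; the ~10% overhead figure is an approximation for Ethernet/IP/TCP/SMB framing, not an exact constant):

```python
# Where 112 MB/s comes from: gigabit line rate minus protocol overhead.
line_rate_bps = 1_000_000_000           # 1 Gbps
raw_mb_s = line_rate_bps / 8 / 1e6      # 125 MB/s before any overhead
overhead = 0.10                         # ~10% framing/protocol overhead (approximation)
usable = raw_mb_s * (1 - overhead)      # ~112 MB/s per physical link

# Each flow hashes onto ONE member of the LAG, so:
print(f"single transfer over dual-gigabit LAG: ~{usable:.0f} MB/s")
print(f"two parallel transfers: ~{usable:.0f} MB/s each, ~{2 * usable:.0f} MB/s aggregate")
```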


27 minutes ago, Windspeed36 said:

 

It won't make an individual file transfer faster, but it'll increase the aggregate speed of multiple transfers, provided the storage at both ends can keep up, as well as the infrastructure along the way.

 

A single 50 GB file transfer is still going to go at 112 MB/s. However, provided the sending and receiving storage is fast enough, both connections are at least dual gigabit, and any interconnect in the middle keeps up, you'll see 112 MB/s PER file transfer if you've got two 50 GB files. You will NOT see a 225 MB/s file transfer just because you've got two gigabit lines in link aggregation.

See, this is the problem: I want speeds of over 400 MB/s for a single file transfer.


Just now, jackm1120 said:

See, this is the problem: I want speeds of over 400 MB/s for a single file transfer.

Then you need 10GbE.


7 minutes ago, Windspeed36 said:

Then you need 10GbE.

 

 

Then explain how this is possible?


56 minutes ago, jackm1120 said:

 

 

Then explain how this is possible?

I'm at work at the moment so I can't watch that video, as I have not yet seen it. However, he's probably bound the addresses to be seen as one interface in a direct attached environment - similar to having a USB cable between point A and point B. This does not work in a standard environment, as you'll have no network access through those NICs.


18 minutes ago, Windspeed36 said:

I'm at work at the moment so I can't watch that video, as I have not yet seen it. However, he's probably bound the addresses to be seen as one interface in a direct attached environment - similar to having a USB cable between point A and point B. This does not work in a standard environment, as you'll have no network access through those NICs.

He is using SMB 3.0 to do it.

1 hour ago, jackm1120 said:

then explain how this is possible?

You should also note all of the trouble they had just to somewhat get it working for a few transfers; this isn't something you plug in and it just works. I really don't think it is worth the trouble to do this for actual use - either stick with standard 1GbE or jump to 10GbE.

1 hour ago, Windspeed36 said:

 

It won't make an individual file transfer faster, but it'll increase the aggregate speed of multiple transfers, provided the storage at both ends can keep up, as well as the infrastructure along the way.

 

A single 50 GB file transfer is still going to go at 112 MB/s. However, provided the sending and receiving storage is fast enough, both connections are at least dual gigabit, and any interconnect in the middle keeps up, you'll see 112 MB/s PER file transfer if you've got two 50 GB files. You will NOT see a 225 MB/s file transfer just because you've got two gigabit lines in link aggregation.

 

 

Ahhh, there was my confusion! I was thinking of the number of computers connecting, not the number of files being transferred. Thanks for the clarification.


11 minutes ago, Scheer said:

Ahhh, there was my confusion! I was thinking of the number of computers connecting, not the number of files being transferred. Thanks for the clarification.

Well, it can be if each client is only gigabit - e.g. 4 gigabit PCs talking to a server running LAG on a quad port NIC. My example used 1 PC running a dual NIC talking to a server running a dual NIC, both supporting LAG, through a switch that supports LAG. It'll slow to the slowest common denominator - normally the receiving host's connection.


10 hours ago, Windspeed36 said:

Well, it can be if each client is only gigabit - e.g. 4 gigabit PCs talking to a server running LAG on a quad port NIC. My example used 1 PC running a dual NIC talking to a server running a dual NIC, both supporting LAG, through a switch that supports LAG. It'll slow to the slowest common denominator - normally the receiving host's connection.

So if I grab 3 quad port NICs and throw them into my gaming PCs in the LAN room, connected to a 16 port switch, I should be able to achieve transfer speeds of 400 MB/s with each PC using SSDs in RAID 0, right? I'm only worried about one that's using a 4690K, because it only has 4 threads as opposed to the others, which have 8.


54 minutes ago, jackm1120 said:

So if I grab 3 quad port NICs and throw them into my gaming PCs in the LAN room, connected to a 16 port switch, I should be able to achieve transfer speeds of 400 MB/s with each PC using SSDs in RAID 0, right? I'm only worried about one that's using a 4690K, because it only has 4 threads as opposed to the others, which have 8.

No - if you're running them in a standard 802.3ad LAG setup, you would see 4 x 112 MB/s file transfers being able to run simultaneously between 2 systems. This is provided the switch supports 802.3ad and both clients are connected to the same switch.

 

What Linus did, if I recall correctly, was bind 8 interfaces across 2 PCs to only talk to one another, meaning that while yes, they'll be able to push 400 MB/s transfers, they won't talk to the LAN.


@jackm1120 Link Aggregation or NIC Teaming or EtherChannel, whatever name people use, is not designed for your desired use case. The video you linked that showed Linus using quad port NICs to increase file transfer speeds was not using Link Aggregation or LACP. What he was using is a specific feature of SMB 3 called SMB Multichannel, which is purely designed to speed up file copies between two end points.

 

Also keep in mind Link Aggregation and LACP are not the same thing. Link Aggregation is the grouping of NICs to form a single virtual interface for increased bandwidth. There are two ways to assign and control NICs in a Link Aggregation group: statically or dynamically (802.3ad).

  • Static: A quick and easy method, but there is no verification of data paths; if the cable is plugged in, the NIC is active in the team. Get the cabling wrong and you will have a very bad time.
  • Dynamic: 802.3ad, or Link Aggregation Control Protocol (LACP), does verification to make sure that each NIC in the Link Aggregation group is plugged into a port at the other end that is also in a LACP-controlled Link Aggregation group; if not, the NIC will not become active.

The original design purpose of Link Aggregation was to give increased bandwidth for one-to-many and many-to-many network connections, that being a server and many clients, or switch to switch. One-to-one was not an intended use case and, due to the design of the protocols, does not increase the usable bandwidth.

 

When you create a Link Aggregation group, a hashing algorithm is used to calculate which NIC in the group carries each network flow. There are a number of different hashing methods, and not all devices support the same ones. When a network packet enters the switch, it looks at the headers and calculates the hash, which is used to assign the packet to an outgoing NIC in the team. This will not change if the header information does not change, which is why one-to-one connections see no increase in usable bandwidth (see the sketch after this list).

  • MAC Address hash: Source and Destination MAC Addresses are used to create the hash. This is the most basic method, which everything will support. It has the lowest efficiency in distributing load between NICs.
  • IP Address hash: Source and Destination IP Addresses are used to create the hash. This is a better method and has much better efficiency in distributing load between NICs. Not all devices support this method; most computer NICs will, but this is considered an advanced feature on switches.
  • UDP/TCP Port hash: Source and Destination UDP/TCP ports are used to create the hash. This is one of the best methods available, has among the best efficiency in distributing load between NICs, and can also give increased bandwidth on a one-to-one connection so long as different ports are used - two different file copies would get a path each, so double the throughput. This is an extremely advanced feature; few switches support it, and any that do cost significant amounts, though most Intel server NICs support it.
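
Here's a minimal sketch of the idea (the function and field names are made up, and real switches use vendor-specific hash functions, but "hash the headers, modulo the member count" is the common shape):

```python
# Toy model of LAG member selection: hash selected header fields, then
# take the result modulo the number of NICs in the group.
def pick_member(hdr: dict, members: int, mode: str = "mac") -> int:
    if mode == "mac":      # layer 2 hash: src/dst MAC only
        key = (hdr["src_mac"], hdr["dst_mac"])
    elif mode == "ip":     # layer 3 hash: src/dst IP
        key = (hdr["src_ip"], hdr["dst_ip"])
    else:                  # layer 4 hash: IPs plus TCP/UDP ports
        key = (hdr["src_ip"], hdr["src_port"], hdr["dst_ip"], hdr["dst_port"])
    return hash(key) % members

pkt = {"src_mac": "aa:aa", "dst_mac": "bb:bb",
       "src_ip": "10.0.0.2", "dst_ip": "10.0.0.10",
       "src_port": 50001, "dst_port": 445}

# One-to-one with a MAC hash: identical headers -> same member every time.
print(pick_member(pkt, 4, "mac") == pick_member(pkt, 4, "mac"))  # True
# A layer 4 hash can split two copies (different source ports) across members.
print(pick_member(pkt, 4, "port"), pick_member({**pkt, "src_port": 50002}, 4, "port"))
```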

The methods above are the standard hashing methods used in networking; there are others, but they are proprietary and require special drivers, or are built into the OS, such as Linux or Server 2012 R2.

 

Linux has the following methods:

Quote

Round-robin (balance-rr)

Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.

Active-backup (active-backup)

Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.

XOR (balance-xor)

Transmit network packets based on [(source MAC address XOR'd with destination MAC address) modulo NIC slave count]. This selects the same NIC slave for each destination MAC address. This mode provides load balancing and fault tolerance.

Broadcast (broadcast)

Transmit network packets on all slave network interfaces. This mode provides fault tolerance.

IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP)

Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification.

Adaptive transmit load balancing (balance-tlb)

Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Adaptive load balancing (balance-alb)

includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.

Source: https://en.wikipedia.org/wiki/Link_aggregation#Driver_modes
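
To make the balance-xor line above concrete, here's that formula worked through (the MAC addresses are made up, and the real bonding driver hashes the raw address bytes, but the arithmetic is the same):

```python
# balance-xor: (source MAC XOR destination MAC) modulo slave count.
def balance_xor(src_mac: str, dst_mac: str, slaves: int) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % slaves

# The same pair of hosts always lands on the same slave NIC, which is
# exactly why a single point-to-point transfer can't spread across the bond.
a, b = "00:1b:21:aa:00:01", "00:1b:21:bb:00:02"
print(balance_xor(a, b, 4))
print(balance_xor(a, b, 4))  # identical result on every call
```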

 

Server 2012 R2's new Dynamic mode is functionally the same as balance-tlb.

 

Basically to sum it up:

  • Link Aggregation: Designed for servers and switches, does not increase bandwidth between individual computers.
  • SMB 3 Multichannel: Designed to speed up file copies, and only file copies, between computers. Does not work on any other kind of network traffic (see the toy sketch below).
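
To illustrate the Multichannel idea, here's a toy sketch: one logical copy split across several TCP connections so each can ride a different NIC. This is NOT the actual SMB protocol; the server address, port, and wire format are all invented for the example.

```python
# Toy illustration only: split one transfer into byte ranges and fetch each
# range over its own TCP connection. SMB 3 Multichannel does something
# conceptually similar (multiple connections for one session), but the
# real protocol is far more involved.
import socket
from concurrent.futures import ThreadPoolExecutor

SERVER = ("192.168.1.10", 9000)   # hypothetical transfer service
CHANNELS = 4                      # e.g. one per port on a quad NIC
FILE_SIZE = 4 * 1024 * 1024       # kept small for the sketch
CHUNK = FILE_SIZE // CHANNELS

def fetch_range(offset: int, length: int) -> bytes:
    """Fetch one byte range over a dedicated TCP connection."""
    with socket.create_connection(SERVER) as s:
        s.sendall(f"GET {offset} {length}\n".encode())  # invented wire format
        buf = bytearray()
        while len(buf) < length:
            data = s.recv(65536)
            if not data:
                break
            buf.extend(data)
    return bytes(buf)

with ThreadPoolExecutor(max_workers=CHANNELS) as pool:
    parts = pool.map(lambda i: fetch_range(i * CHUNK, CHUNK), range(CHANNELS))
    payload = b"".join(parts)
```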

Hope this is helpful. This particular topic is actually quite complex, and the more you dig into it the more complex it gets; I have left plenty of stuff out and abbreviated some of the information, so you will find exceptions to things I have mentioned.


4 hours ago, leadeater said:

@jackm1120 Link Aggregation or NIC Teaming or EtherChannel, whatever name people use, is not designed for your desired use case. The video you linked that showed Linus using quad port NICs to increase file transfer speeds was not using Link Aggregation or LACP. What he was using is a specific feature of SMB 3 called SMB Multichannel, which is purely designed to speed up file copies between two end points.

 

Also keep in mind Link Aggregation and LACP are not the same thing. Link Aggregation is the grouping of NICs to form a single virtual interface for increased bandwidth. There are two ways to assign and control NICs in a Link Aggregation group: statically or dynamically (802.3ad).

  • Static: A quick and easy method, but there is no verification of data paths; if the cable is plugged in, the NIC is active in the team. Get the cabling wrong and you will have a very bad time.
  • Dynamic: 802.3ad, or Link Aggregation Control Protocol (LACP), does verification to make sure that each NIC in the Link Aggregation group is plugged into a port at the other end that is also in a LACP-controlled Link Aggregation group; if not, the NIC will not become active.

The original design purpose of Link Aggregation was to give increased bandwidth for one-to-many and many-to-many network connections, that being a server and many clients, or switch to switch. One-to-one was not an intended use case and, due to the design of the protocols, does not increase the usable bandwidth.

 

When you create a Link Aggregation group, a hashing algorithm is used to calculate which NIC in the group carries each network flow. There are a number of different hashing methods, and not all devices support the same ones. When a network packet enters the switch, it looks at the headers and calculates the hash, which is used to assign the packet to an outgoing NIC in the team. This will not change if the header information does not change, which is why one-to-one connections see no increase in usable bandwidth.

  • MAC Address hash: Source and Destination MAC Addresses are used to create the hash. This is the most basic method, which everything will support. It has the lowest efficiency in distributing load between NICs.
  • IP Address hash: Source and Destination IP Addresses are used to create the hash. This is a better method and has much better efficiency in distributing load between NICs. Not all devices support this method; most computer NICs will, but this is considered an advanced feature on switches.
  • UDP/TCP Port hash: Source and Destination UDP/TCP ports are used to create the hash. This is one of the best methods available, has among the best efficiency in distributing load between NICs, and can also give increased bandwidth on a one-to-one connection so long as different ports are used - two different file copies would get a path each, so double the throughput. This is an extremely advanced feature; few switches support it, and any that do cost significant amounts, though most Intel server NICs support it.

The methods above are the standard hashing methods used in networking; there are others, but they are proprietary and require special drivers, or are built into the OS, such as Linux or Server 2012 R2.

 

Linux has the following methods:

Source: https://en.wikipedia.org/wiki/Link_aggregation#Driver_modes

 

Server 2012 R2's new Dynamic mode is functionally the same as balance-tlb.

 

Basically to sum it up:

  • Link Aggregation: Designed for servers and switches, does not increase bandwidth between individual computers.
  • SMB 3 Multichannel: Designed to speed up file copies, and only file copies, between computers. Does not work on any other kind of network traffic.

Hope this is helpful. This particular topic is actually quite complex, and the more you dig into it the more complex it gets; I have left plenty of stuff out and abbreviated some of the information, so you will find exceptions to things I have mentioned.

Thanks, that was really helpful. I just grabbed 2 dual port Intel Pro/1000 NICs off eBay for $20. I'm going to start there, despite the predicted end results not being favorable. I'll report back with any findings, and maybe go for the quad ports a bit later if things go well.


44 minutes ago, jackm1120 said:

Thanks, that was really helpful. I just grabbed 2 dual port Intel Pro/1000 NICs off eBay for $20. I'm going to start there, despite the predicted end results not being favorable. I'll report back with any findings, and maybe go for the quad ports a bit later if things go well.

Also, I forgot: I hope you are not using Windows 10, as it does not support NIC teaming at all. Previous versions of Windows desktop operating systems supported NIC teaming through drivers, but Microsoft changed the whole networking stack in Windows 8/8.1/10, so only native Windows NIC teaming is supported, and that is not working in Windows 10.

 

You can team NICs in Windows 8.1 using the correct PowerShell commands; the same commands exist in Windows 10, but it does not actually work. In Windows 7 you can just use the NIC teaming utility that comes with the drivers.

 

If you are using Windows 10, give it a try and see if it is fixed. I've heard nothing new on the issue other than that Microsoft is working on it, but who knows, they could have fixed it in the last few months since I last looked into it. Interestingly, NIC teaming does work in Server 2016, which is basically the same thing.

 

Edit: Also, if you try it under Windows 7 and directly connect the two computers, I believe from memory that Intel's teaming software actually does support a method that gives you the full bandwidth of every NIC in the team. I know HP's teaming utility does, and it is basically a re-branded Intel one. Of course, this only helps you on Windows 7 and nothing newer.


2 weeks later...

Here's an update: I grabbed 2 HP NC360T dual port NICs and plugged them into a simple 8 port TP-Link switch hooked up to my router. My file transfers run at a solid 200-220 MB/s over LAN. Simply plug and play; I did no configuration whatsoever.


On April 12, 2016 at 11:11 PM, jackm1120 said:

Here's an update: I grabbed 2 HP NC360T dual port NICs and plugged them into a simple 8 port TP-Link switch hooked up to my router. My file transfers run at a solid 200-220 MB/s over LAN. Simply plug and play; I did no configuration whatsoever.

200 megabits per second or mega BYTES per second? 

 

 



6 hours ago, Sunshine1868 said:

200 megabits per second or mega BYTES per second? 

 

 

bytes


@jackm1120 I'm interested to see your config on this; maybe a few screenshots of performance figures as well as configs would be helpful for people trying to achieve the same thing.


