Combining gigabit internet to 10 gbit internet

Solved by cmndr (post appears in full below).

I have 10 individual 1 Gbit connections available, but I'd like to combine them into a single 10 Gbit connection. Does anyone know how, or whether that's even possible?


Depends in what way. Maybe elaborate a bit on what you have and what you intend to do with this.


Is it possible?  Yes.

Is it possible without being so much work it's silly as hell?  No.

 

To make it work you need a bunch of specialized hardware and it's not cheap.  Basically:  Don't bother.


This is called Channel Bonding, but both sides of the connection need to support it.

 

By default, the sender and receiver don't expect data coming in over a separate IP/physical connection to belong to the same logical connection/transfer.



18 hours ago, tkitch said:

Is it possible?  Yes.

Is it possible without being so much work it's silly as hell?  No.

 

To make it work you need a bunch of specialized hardware and it's not cheap.  Basically:  Don't bother.

What kind of costs are we talking? What if I could do with combining 3 or 4 gigabit connections?

 


1 hour ago, Hjallerrboii said:

What kind of costs are we talking? What if I could do with combining 3 or 4 gigabit connections?

 

What is your use case for this? With most traffic you'd be lucky to max out even one connection.

 

If all you need is faster multi-threaded downloads, then load balancing across up to four links should have some benefit (downloads rarely open more than four connections at once), but uploads usually can't be multi-threaded, so they'd just use one connection at random.

There is OpenMPTCProuter, which can fully bond up to 8 WANs, but that's a PITA because it effectively replaces your router (or becomes another potential point of failure behind it). Plus you need some sort of VPS at the other end acting as the server, which would need a 10Gbit link and is likely quite expensive.

Router: Quotom-Q555G6-S05 (pfSense) WiFi: Zyxel NWA210AX (1.44Gbit peak at 160MHz 2x2 MIMO, ~900Mbit at 80MHz)

Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX
ISPs: Zen VDSL (~74Mbit) + VOXI 4G [Vodafone] (~120Mbit) + Three 5G (~500Mbit average)


It depends on A LOT of things you have not mentioned in this post. 

Are the different connections from different ISPs? 

Do you want a single connection to be able to use more than 1Gbps?

What hardware and software do you use? 

Why do you have 10 different connections? 


To make this work, the ISP for each connection needs to support it at both ends so all the pipes can be combined. It's like channel bonding, but I forget the exact term because it's actually a layer up from channel bonding.

 

Load balancing simply rotates your traffic through the various ISPs. It's like having 10 routers cycling through your 10 connections. You don't get 10 connections stacked into a single connection.
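A rough sketch of why (hypothetical Python, not any particular router's actual algorithm): load balancers typically hash each flow's 5-tuple to pick an uplink, so every packet of a single TCP connection lands on the same 1Gbit link, and that one transfer can never go faster than 1Gbit:

```python
import hashlib

LINKS = [f"wan{i}" for i in range(10)]  # ten 1 Gbit uplinks

def pick_link(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Hash a flow's 5-tuple to choose an uplink (typical balancer behavior)."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
    return LINKS[int(hashlib.sha256(key).hexdigest(), 16) % len(LINKS)]

# Every packet of one download (one flow) hashes to the same uplink,
# so that single transfer tops out at 1 Gbit:
flow_link = pick_link("192.168.1.10", 51515, "93.184.216.34", 443)
assert all(pick_link("192.168.1.10", 51515, "93.184.216.34", 443) == flow_link
           for _ in range(100))

# Only many parallel flows (different source ports) spread across links:
links_used = {pick_link("192.168.1.10", p, "93.184.216.34", 443)
              for p in range(50000, 50020)}
assert len(links_used) > 1  # 20 flows land on several of the 10 uplinks
```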

Link to comment
Share on other sites

Link to post
Share on other sites

I've seen several people talk about "channel bonding" in this thread and before you start looking into it, I'd like to inform you that it will NOT work, if you have multiple ISPs.

 

Even if you somehow managed to convince all your ISPs to implement a link aggregation group (LAG), you would still run into issues with routing. Each ISP has its own range of addresses that they expect you to use; they can't just start routing traffic for each other. Besides, even if they wanted to, there is a very high likelihood that the routers at the other end don't support channel bonding with each other.

In general, you can't just channel bond between multiple devices that use separate data and control planes.

There exist various "multi-chassis link aggregation" standards such as vPC for Cisco NX-OS, or MC-LAG for FortiOS, but all of these are proprietary and do not work cross-vendor.

 

LAG will work if you have the same ISP for all 10 links though, assuming the ISP is okay with it.


Adding on to what others have stated - NOT WORTH IT.

At a minimum you would need a 20-port managed switch and a bunch of cabling, and even then it will only work some of the time. It's also compute heavy.
 


What I'd actually suggest for most people -

Mikrotik CRS305 switch - $120ish (QNAP also has some decent but pricier choices)
SFP+ DAC cable - $15ish
SFP+ 10GbE NIC - $30ish (Mellanox ConnectX-3 EN CX311A)
10GBASE-T SFP+ to RJ45 Transceiver - $40ish
10Gbe NIC - $70ish (might be 100ish right now due to supply chain BS)
1 CAT6 cable


This will get you in the game. It'll be hassle free, it'll just work, and you won't go WTF when your server can only send 1.5Gbps at a time (with awful latency) because whatever crazy hacky solution you came up with doesn't support RoCE.

Going down a step, you could just "settle" for 2.5Gbps. An RJ45 switch will be around $100ish, a NIC around $25ish, and a lot of newer motherboards have 2.5Gbps built in. 2.5Gbps is probably fine in most cases.
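For perspective, here are idealized transfer times at each tier (simple arithmetic, ignoring protocol overhead, so real numbers will be a bit worse):

```python
def transfer_seconds(gigabytes: float, link_gbps: float) -> float:
    """Idealized transfer time: gigabytes -> gigabits, divided by line rate."""
    return gigabytes * 8 / link_gbps

# Moving a 100 GB game library, best case (no overhead):
assert transfer_seconds(100, 1) == 800.0    # ~13 minutes on 1 Gbps
assert transfer_seconds(100, 2.5) == 320.0  # ~5.3 minutes on 2.5 Gbps
assert transfer_seconds(100, 10) == 80.0    # ~1.3 minutes on 10 Gbps
```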

 

On 5/28/2022 at 1:54 AM, Hjallerrboii said:

What kind of costs are we talking? What if i could do with combining 3 or 4 gigabit connections?

You'll likely end up spending about as much to get 10 Ethernet cables working over SMB Multichannel as it would cost to just do it the right way.
Similar story with 4. You'd also likely get better real-world speed with 2.5Gbps networking than trying to get 3-4x 1GbE cables working together. Also better latency.

There's a reason people usually don't do this (at most they're doing 2x1GbE because it's "easy enough").

Also, SMB Multichannel only really works well if EVERYTHING is running Windows. It's a huge hassle otherwise. That, and I probably suck at setting up routing tables under Linux.


----

 

 

One other alternative, if you just care about performance for ONE person:
an iSCSI network share, then set up block-level caching on your client system (lvmcache on Linux, PrimoCache on Windows). This will usually work better than trying to go gung-ho with faster networking.

Before and after (on a 10GbE link).
This is admittedly throwing a BUNCH of hardware at the issue (my NAS is probably overkill and I'm using a 58GB Optane stick as the cache on my client).

[before/after benchmark screenshots]

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 1 TB Adata XPG Pro | 2TB Micron 1100 SSD
QN90A | Emotiva B1+, ELAC OW4.2, PB12-NSD, HD800
 


7 hours ago, cmndr said:

Adding on to what others have stated - NOT WORTH IT.
-snip-

What you said might apply to local connections, but OP asked about combining "Internet" connections. You won't be running SMB Multichannel against your ISP.

 

I agree that it would be better to just get a single 10Gbps connection if that's what OP is after for his file server, but I object to some of the things you said in your post as well.

 

 

8 hours ago, cmndr said:

At a minimum you would need a 20 port managed switch, a bunch of cabling and only sometimes will it work. Also it's compute heavy

Channel bonding will always work if you do it correctly. You almost make it sound like it's random whether it works or not, which is obviously not the case.

 

 

8 hours ago, cmndr said:

What I'd actually suggest for most people -

Mikrotik CRS305 switch - $120ish (QNAP also has some decent but pricier choices)
SFP+ DAC cable - $15ish
SFP+ 10Gbe NIC - $30ish (MELLANOX CONNECTX-3 EN CX311A)
10GBASE-T SFP+ to RJ45 Transceiver - $40ish
10Gbe NIC - $70ish (might be 100ish right now due to supply chain BS)
1 CAT6 cable

I really don't understand this parts list.

  • Why are you recommending a DAC cable as well as a 10GBASE-T SFP? Why not use the same type of cabling for both connections? Also, since you mentioned latency later in your post, I assume performance is a concern for you. In that case you really should stay away from 10GBASE-T and DAC cables; optical SFPs have much lower latency.
  • The way you worded the SFP is confusing. Just call it what it is: a 10GBASE-T SFP+ transceiver. No need to include the "to RJ45" part, because 10GBASE-T already implies RJ45.
  • It also feels weird that you are so specific about some parts (like the Mellanox NIC) but very vague about others (just saying "10Gbe NIC"). If you like the Mellanox CX311A, why not recommend buying two of those rather than one of them plus some other NIC?

 

8 hours ago, cmndr said:

because whatever crazy hacky solution you came up with doesn't support ROCE.

Why use RoCE at all? I assume it is because you want RDMA, but that is only necessary if you want SMB Direct. SMB Multilink will work without it.


Quote

Why are you recommending a DAC cable as well as a 10GBASE-T SFP? Why not use the same type of cabling for both connections?

SFP+ DAC for connecting to a "server" or NAS near the network switch; RJ45-based Ethernet for spanning longer distances.
Mostly because DAC is usually cheaper ($30ish network cards off eBay, $15ish short-distance cables) and you can get away with only using a transceiver or two for the connections that need longer runs (one warning: the CRS305 gets too hot if you use more than one transceiver).

For context, the switch I linked is ONLY SFP+ (other than a 1GbE RJ45 uplink) and costs $120. If you want a switch with proper RJ45 you're looking at $300+ for most switches these days. A QNAP QSW-308-1C could work as an alternative to the CRS305 plus a transceiver. The point is to avoid spending twice as much.
 

Quote

Also, since you mentioned latency later in your post I assume performance is a concern for you. In that case you really should stay away from 10GBASE-T and DAC cables. Optical SFPs got a lot lower latency.

10Gbps over just about any medium is "fast enough" for most consumer applications, and it's also lower latency than 1Gbps. The random searches I'm doing show the word "serialization" pop up; I'm interpreting this as 10GbE gear doing the same work 10x as quickly (though there's still wire transfer time). Going off memory, my round-trip time dropped in half when going to 10GbE for small data transfers, and latency is MUCH MUCH improved for workloads that previously saturated 1GbE and created a backlog.

My general take is that 1GbE is relatively slow and that pretty much anything on 10GbE is "fast enough" even if not ideal. I suspect that for VERY high performance applications you'd want shorter distances and faster than 10GbE (maybe InfiniBand? maybe 100GbE? this isn't my strong point).
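Back-of-envelope, the serialization delay (the time to clock one frame onto the wire) really does drop 10x with the line rate (ignoring preamble and interframe gap):

```python
FRAME_BYTES = 1500  # typical Ethernet MTU-sized frame

def serialization_us(link_gbps: float, frame_bytes: int = FRAME_BYTES) -> float:
    """Microseconds to clock one frame onto the wire at a given line rate.

    bits / Gbps gives nanoseconds; divide by 1000 for microseconds.
    """
    return frame_bytes * 8 / link_gbps / 1000

print(serialization_us(1))   # 12.0 (us per frame at 1 Gbps)
print(serialization_us(10))  # 1.2 (us per frame at 10 Gbps)
```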

Setting up iSCSI and caching improved typical latency more than anything else, though, since something like 70% of hits now come off a local Optane drive.
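The hit rate is doing most of the work there. Average access time is just the hit-weighted sum; the ~10µs Optane and ~500µs NAS round-trip figures below are illustrative assumptions, not measurements from my setup:

```python
def effective_latency_us(hit_rate: float, cache_us: float, backend_us: float) -> float:
    """Average access latency for a local cache in front of a slower backend."""
    return hit_rate * cache_us + (1 - hit_rate) * backend_us

# Assumed figures: ~10 us local Optane hit, ~500 us round trip to the NAS.
print(round(effective_latency_us(0.7, 10, 500), 1))  # 157.0 -> ~3x better than 500 us
```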

 

Quote

Channel bonding will always work if you do it correctly. You almost make it sound like it's random if it works or not, which is obviously not the case.

It's very possible I did something wrong. It wasn't plug-and-play for me, and my hacky solution was to use two different subnets (router to switch A, with switch A plugged into my NAS and PC, plus router to switch B, with switch B also plugged into my NAS and PC) instead of messing with routing tables.

I don't do IT stuff professionally; I'm a hobbyist. I usually assume that if I struggle (and get basically zero useful help online), most others will as well.

 

Quote

It also feels weird that you are so specific with some parts (like the Mellanox NIC) but very vague about other parts (just saying "10Gbe NIC"). If you like the Mellanox CX311A, why not recommend buying two of those, rather than one of them and then some other NIC?

I remembered getting that part (and it's relatively easy to get the gen2, which isn't any cheaper). I don't recall the 10GbE RJ45 NIC I used; I think it was something from Aquantia.
It's trivial to have one switch near your server (or your PC), but if you need to span a longer distance (like 50' cable-wise) you can't get by on DAC cables. Fiber is its own can of worms; I don't (yet) feel comfortable using fiber myself, so I'm reluctant to recommend it to a random person. DAC is mostly idiot-proof. This mostly comes down to cost and what I saw people on r/homelabs doing: DAC for everything in your server rack/network location, then RJ45-based Ethernet for anything further away.
 

Quote

Why use RoCE at all? I assume it is because you want RDMA, but that is only necessary if you want SMB Direct. SMB Multilink will work without it.

 

I could be off. I'm a bit out of my wheelhouse.
I just know that my NAS's CPU spiked during transfers and I couldn't get "full speed". It's very possible I'm misattributing the issue. I don't really have this problem with 10GbE, though.

Also, do you mean multi-channel as opposed to multi-link?

 

 

 


