
Taking advantage of SMB 3.0

Hello! 

This problem is half networking, half servers, so I figured I would post it in the server section. I have a Synology DS215+ with LACP across both ports. The LAG works according to my tests. It has 2x Crucial 240 GB SSDs in it to remove any bottlenecks. I have set the SMB version on the Synology to 3.0. My computer is running Windows 10 Pro, and I have installed a dual-port HP/Intel NIC in it. Since Windows 10 does not support LAG, I am unable to create an LACP connection. I have the same problem as Linus: my Windows transfers are a steady 111 MB/s, but they will not go any higher. Is there anything I can do to fix the slow transfers? I'm starting to think that LACP is the bottleneck here.



A typical LACP aggregation doesn't give extra bandwidth to a single computer-to-computer transfer, because the load balancing algorithm most commonly used is an IP hash. If you create a hash of the source and destination IP pair, that value will always be the same, so the traffic will only ever use a single path, no matter how many file copies or different protocols are used.
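As a toy illustration of that point (not any vendor's actual hashing algorithm, just the general idea), assuming a two-port LAG and made-up addresses:

$src = "192.168.1.50"   # client IP (hypothetical)
$dst = "192.168.1.10"   # NAS IP (hypothetical)
$linkCount = 2          # ports in the LAG
$hash = 0
foreach ($b in [System.Text.Encoding]::ASCII.GetBytes("$src-$dst")) { $hash = ($hash + $b) % 65536 }
$chosenLink = $hash % $linkCount
# $chosenLink comes out the same for every flow between these two hosts,
# so all of their traffic lands on the same physical port.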

Also, currently no version of Samba supports the multichannel feature of SMB 3, so breaking the team on the NAS won't help either.

What you could try is changing the load balancing method on the NAS to balance-rr and then using PowerShell in Windows 10 to create the NIC team; as far as I've read, this does work. I haven't tried it myself, since my desktop and servers are already 10Gb (and the servers run ESXi), so I've had no need to.

Create the NIC team using New-NetLbfoTeam -Name <TeamName> -TeamMembers <NIC>,<NIC> -TeamingMode LACP -LoadBalancingAlgorithm Dynamic (the -Name parameter is required).
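A slightly fuller sketch of that, assuming the two ports show up in Windows as "Ethernet" and "Ethernet 2" (those names and the team name are placeholders; check yours first) and that the LBFO teaming cmdlets are actually available on your build:

Get-NetAdapter   # confirm the adapter names before teaming
New-NetLbfoTeam -Name "NAS-Team" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
Get-NetLbfoTeam   # verify the team and its member NICs came up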

https://technet.microsoft.com/en-us/library/jj130849(v=wps.630).aspx

https://technet.microsoft.com/en-us/library/jj130847(v=wps.630).aspx

Edit: You could also try balance-alb if round robin doesn't work or has odd behavior. 


In Linus' video he didn't do any teaming, and since SMB 3.0 can use multiple TCP data streams, it should work. Right?



6 minutes ago, Kyle Manning said:

In Linus' video he didn't do any teaming, and since SMB 3.0 can use multiple TCP data streams, it should work. Right?

That is correct, yes, but Synology NASes and pretty much every other brand run a Linux/BSD-based OS and use Samba to create the network shares, and no version of Samba supports the multichannel feature. Linus was using Windows to host the share; both ends (client and server) need to support multichannel to use it.


6 hours ago, Kyle Manning said:

In Linus' video he didn't do any teaming, and since SMB 3.0 can use multiple TCP data streams, it should work. Right?

Just found the release notes for Samba's next major release, 4.4, which will support multichannel. Samba 4.4 is due for release in March 2016. The next step after that is waiting for Synology to put out an update which includes it; a support email might prompt a quicker release.


6 minutes ago, leadeater said:

Just found the release notes for Samba's next major release, 4.4, which will support multichannel. Samba 4.4 is due for release in March 2016. The next step after that is waiting for Synology to put out an update which includes it; a support email might prompt a quicker release.

Thanks! I guess I will have to wait and see if the dual-core ARM processor in my Synology can handle multichannel, lol.



SMB 3.0 Multichannel is an SMB file transfer optimization only. The client needs to be Windows 8 or above and the server needs to be Server 2012 or above, i.e. Windows to Windows only. Your NICs on both ends need to support RDMA or RSS. If your NICs are teamed then you can't use SMB 3.0 Multichannel. You basically need all your NICs on both systems to be basic NICs where each connected port has its own IP address. SMB 3.0 Multichannel can then find the additional paths between client and server using RDMA or RSS.

Here is the consideration:

SMB 3.0 - Faster SMB file transfers, but all other network communication will max out at the bandwidth of a single NIC port. Only SMB file transfers will be dynamically redundant.

LACP or other teaming solution - Any one client can only use the bandwidth of a single NIC port, but multiple clients can be served up to the total aggregated bandwidth on any protocol. All network communication is dynamically redundant.

So if your goal is to get faster SMB file transfers only, then configure your systems for SMB 3.0 Multichannel; but if you have multiple clients connecting to the same server, or need higher aggregated bandwidth for other protocols, or need full redundancy, then use LACP or another teaming solution.

Honestly, to make SMB 3.0 work for a multi-user environment the server needs crazy CPU horsepower, crazy fast storage, and as many NIC ports as all your clients have combined. And again, only your SMB transfers will be accelerated, IP management will be a nightmare because each system will have multiple IPs and your server will have a TON of them, and physically you will have a lot of cables to run.
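If you want to see whether those RSS/RDMA requirements are actually being met and whether multichannel kicks in, a quick sketch using the standard SMB PowerShell cmdlets on a Windows 8/2012+ client during a transfer:

Get-SmbClientNetworkInterface   # shows which client NICs report RSS/RDMA capability to SMB
Get-SmbMultichannelConnection   # lists the active SMB connections per client/server interface pair
Get-NetAdapterRss               # confirms RSS is enabled on each physical adapter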


8 minutes ago, Samishere said:

SMB 3.0 Multichannel is an SMB file transfer optimization only. The client needs to be Windows 8 or above and the server needs to be Server 2012 or above, i.e. Windows to Windows only. Your NICs on both ends need to support RDMA or RSS. If your NICs are teamed then you can't use SMB 3.0 Multichannel. You basically need all your NICs on both systems to be basic NICs where each connected port has its own IP address. SMB 3.0 Multichannel can then find the additional paths between client and server using RDMA or RSS.

Here is the consideration:

SMB 3.0 - Faster SMB file transfers, but all other network communication will max out at the bandwidth of a single NIC port. Only SMB file transfers will be dynamically redundant.

LACP or other teaming solution - Any one client can only use the bandwidth of a single NIC port, but multiple clients can be served up to the total aggregated bandwidth on any protocol. All network communication is dynamically redundant.

So if your goal is to get faster SMB file transfers only, then configure your systems for SMB 3.0 Multichannel; but if you have multiple clients connecting to the same server, or need higher aggregated bandwidth for other protocols, or need full redundancy, then use LACP or another teaming solution.

Honestly, to make SMB 3.0 work for a multi-user environment the server needs crazy CPU horsepower, crazy fast storage, and as many NIC ports as all your clients have combined. And again, only your SMB transfers will be accelerated, IP management will be a nightmare because each system will have multiple IPs and your server will have a TON of them, and physically you will have a lot of cables to run.

I think it is a very fair limitation that most switches I've seen only allow 8 ports max in a link aggregation. At that point you're much better served by upgrading to the next Ethernet speed. I'm sure the fact that most switches have 8 ports per ASIC has nothing to do with this limitation.



1 hour ago, Samishere said:

-snip-

If you're using NICs on both client and server that support RDMA, then CPU isn't going to be as big of a problem. Also, the file server won't need that many NICs, since in large networks very few clients are actually doing large and sustained file transfers. The server will also have 10Gb NICs and the clients 1Gb, so that's another factor.

On our NetApp filers at work we only have 1 active and 1 passive 10Gb NIC for the NAS vserver per site, and this is more than enough for a few thousand active desktops doing standard day-to-day work. It is also possible for two 1Gb NICs to bind their sessions to the same 10Gb server NIC.


1 minute ago, leadeater said:

If you're using NICs on both client and server that support RDMA, then CPU isn't going to be as big of a problem. Also, the file server won't need that many NICs, since in large networks very few clients are actually doing large and sustained file transfers. The server will also have 10Gb NICs and the clients 1Gb, so that's another factor.

On our NetApp filers at work we only have 1 active and 1 passive 10Gb NIC for the NAS vserver per site, and this is more than enough for a few thousand active desktops doing standard day-to-day work. It is also possible for two 1Gb NICs to bind their sessions to the same 10Gb server NIC.

Do you find situations where specific users need a second NIC installed and/or a second ethernet drop at their desk because 1Gb is a bottleneck to the specific work that they do? Or are all desks set up for this already?



1 minute ago, brwainer said:

Do you find situations where specific users need a second NIC installed and/or a second ethernet drop at their desk because 1Gb is a bottleneck to the specific work that they do? Or are all desks set up for this already?

The desktops that require this are in the design/media computer labs; they are all 10Gb and have a dedicated network rendering server with special software, etc. I'm not sure of its name since I didn't set it up.

Typically the special exceptions have their own file servers as well. The NetApps are used for standard shares and home drives, along with a vserver with two nodes serving NFS to our ESX fleet and a vserver with one node serving iSCSI for SQL/Exchange, etc. The NetApps are 8040s with 4 nodes and 3 aggregate types: flash pool SAS, SAS, and SATA.

In every other case, all desktops are single 1Gb.


17 minutes ago, leadeater said:

If you're using NICs on both client and server that support RDMA, then CPU isn't going to be as big of a problem. Also, the file server won't need that many NICs, since in large networks very few clients are actually doing large and sustained file transfers. The server will also have 10Gb NICs and the clients 1Gb, so that's another factor.

On our NetApp filers at work we only have 1 active and 1 passive 10Gb NIC for the NAS vserver per site, and this is more than enough for a few thousand active desktops doing standard day-to-day work. It is also possible for two 1Gb NICs to bind their sessions to the same 10Gb server NIC.

That's a huge assumption about workload. All you can truly assume is that the server NEEDS to be as capable of serving all the clients as possible, otherwise your infrastructure is bottlenecked and will choke. When it chokes, everyone will suffer.

If you are spending the $$$ to put 10Gb ethernet on your server and switches, you might as well just put 10Gb on the clients and stick with the more universal link aggregation which provides aggregation and redundancy for all protocols, and doesn't over-complicate your networking setup with tons of superfluous IPs on your network.

Take LMG's office as an example. Their setup would be exponentially more complicated if they were trying to set up SMB 3.0 Multichannel instead of going 10Gb. Also, none of their Linux, NAS, or OS X stuff would benefit.

SMB 3.0 Multichannel is cool if you have a network for yourself and your own workload, but its usefulness doesn't really scale to multi-user network concerns.


6 minutes ago, Samishere said:

That's a huge assumption about workload. All you can truly assume is that the server NEEDS to be as capable of serving all the clients as possible, otherwise your infrastructure is bottlenecked and will choke. When it chokes everyone will suffer.

But you can never do this for large, multi-thousand-desktop networks; it's not physically possible or sane to try. Just like your ISP doesn't have as much bandwidth as all of its customers' connections combined, it's not possible.

LMG is a tiny speck compared to the total infrastructure of a large business network. We have 36,000 students and 5,000 staff across 3 cities, so I'm pretty sure we are bigger :P. FYI, we also have 40Gb interconnects between top-of-rack server switches and multiple 40Gb internet/WAN links.

Edit: Also, it's not an assumption, it's from practical experience and is standard industry practice.


Just now, leadeater said:

But you can never do this for large, multi-thousand-desktop networks; it's not physically possible or sane to try. Just like your ISP doesn't have as much bandwidth as all of its customers' connections combined, it's not possible.

LMG is a tiny speck compared to the total infrastructure of a large business network. We have 36,000 students and 5,000 staff across 3 cities, so I'm pretty sure we are bigger :P. FYI, we also have 40Gb interconnects between top-of-rack server switches and multiple 40Gb internet/WAN links.

That was my original point all along. Building your network around SMB 3.0 Multichannel for faster SMB file transfers doesn't scale, not even to the scale of LMG.


1 minute ago, Samishere said:

That was my original point all along. Building your network around SMB 3.0 Multichannel for faster SMB file transfers doesn't scale, not even to the scale of LMG.

Fortunately you don't 'have' to design it for that purpose, and if someone comes along with a special-case setup and happens to meet the requirements for multichannel to kick in, then that's just a bonus :). We also have server-to-server traffic that could benefit from it if we upgraded the OS and did a slight redesign of network connectivity, but it's not worth the effort yet.

 


13 minutes ago, leadeater said:

Edit: Also, it's not an assumption, it's from practical experience and is standard industry practice.

 

35 minutes ago, leadeater said:

Also, the file server won't need that many NICs, since in large networks very few clients are actually doing large and sustained file transfers. The server will also have 10Gb NICs and the clients 1Gb, so that's another factor.

LOL, I don't know if we are in the same conversation, brud. I was referring to a completely hypothetical client/server network where SMB multichannel would be in use. The workload is completely unknown, but sustained transfer of huge files is assumed, because that would be the reason for using the tech at all. So if you wanted to support those types of transfers for multiple clients, you would need to make sure the server didn't get choked, otherwise you would lose the benefits of setting up SMB multichannel in the first place.

The larger the network, the greater the possibility of spikes in usage. So if you are building your network around a specific tech for these faster network transfers then you ding dang diddly better make sure you can support them, otherwise you lose all benefits.


8 minutes ago, Samishere said:

 

LOL, I don't know if we are in the same conversation, brud. I was referring to a completely hypothetical client/server network where SMB multichannel would be in use. The workload is completely unknown, but sustained transfer of huge files is assumed, because that would be the reason for using the tech at all. So if you wanted to support those types of transfers for multiple clients, you would need to make sure the server didn't get choked, otherwise you would lose the benefits of setting up SMB multichannel in the first place.

The larger the network, the greater the possibility of spikes in usage. So if you are building your network around a specific tech for these faster network transfers then you ding dang diddly better make sure you can support them, otherwise you lose all benefits.

You can set up SMB multichannel with two 1Gb links on the client and one 10Gb link on the server. Or even two 10Gb links on the server. After a few (2-4) parallel links, it really just makes sense to upgrade to the next jump in line speed over anything else.



2 hours ago, Samishere said:

 

LOL, I don't know if we are in the same conversation, brud. I was referring to a completely hypothetical client/server network where SMB multichannel would be in use. The workload is completely unknown, but sustained transfer of huge files is assumed, because that would be the reason for using the tech at all. So if you wanted to support those types of transfers for multiple clients, you would need to make sure the server didn't get choked, otherwise you would lose the benefits of setting up SMB multichannel in the first place.

The larger the network, the greater the possibility of spikes in usage. So if you are building your network around a specific tech for these faster network transfers then you ding dang diddly better make sure you can support them, otherwise you lose all benefits.

Not everyone would be doing large transfers at the same time, so multichannel is still useful. All you need to plan for is a higher average utilization with multichannel support than without. There is no drastic change in planning for the file server or the majority of the network; both are far more capable than a single client, or even ten.

The only big difference is that the client has more than one NIC; that is the only fundamental difference.

Scale and planning rules change as networks get bigger. Things you would do for a small network you wouldn't do for a large one, and the same in reverse. Smaller networks are actually harder to plan around performance/utilization than larger ones. Also, content creators like LMG have much larger requirements than a standard user and you would treat them differently, which we do (refer to my reply to brwainer).

Giving servers 10Gb connectivity nowadays is very cheap, with no significant cost increase over 1Gb. A 4x 10Gb file server could supply more standard clients than I could even begin to guess at, and even for high-usage content creators this could go into the many hundreds.

The videos that LMG make about networking and storage go beyond what they need, but they are also for entertainment, so they serve a dual purpose. What they have compared to other content creation companies of similar size is vastly different, in part due to the support and sponsorship they receive. Don't use them as a gauge for what is actually required.

Edit: Also, when I refer to the infrastructure at my work I'm only using it as an example, not as a literal template for this discussion. Any new setup where you would want multichannel support would realistically have 10Gb NICs as part of the design and dual paths to the core, if not connected directly to the core if it is a small setup.


17 hours ago, leadeater said:

Not everyone would be doing large transfers at the same time, so multichannel is still useful. All you need to plan for is a higher average utilization with multichannel support than without. There is no drastic change in planning for the file server or the majority of the network; both are far more capable than a single client, or even ten.

The only big difference is that the client has more than one NIC; that is the only fundamental difference.

Scale and planning rules change as networks get bigger. Things you would do for a small network you wouldn't do for a large one, and the same in reverse. Smaller networks are actually harder to plan around performance/utilization than larger ones. Also, content creators like LMG have much larger requirements than a standard user and you would treat them differently, which we do (refer to my reply to brwainer).

Giving servers 10Gb connectivity nowadays is very cheap, with no significant cost increase over 1Gb. A 4x 10Gb file server could supply more standard clients than I could even begin to guess at, and even for high-usage content creators this could go into the many hundreds.

The videos that LMG make about networking and storage go beyond what they need, but they are also for entertainment, so they serve a dual purpose. What they have compared to other content creation companies of similar size is vastly different, in part due to the support and sponsorship they receive. Don't use them as a gauge for what is actually required.

Edit: Also, when I refer to the infrastructure at my work I'm only using it as an example, not as a literal template for this discussion. Any new setup where you would want multichannel support would realistically have 10Gb NICs as part of the design and dual paths to the core, if not connected directly to the core if it is a small setup.

Dude, SMB multichannel can't leverage multiple 1Gb ports on the client to a single 10Gb port on a server. All the info is here: http://blogs.technet.com/b/josebda/archive/2012/05/13/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0.aspx

It works by finding multiple IP paths and then leveraging RSS or RDMA to initiate multiple channels over those IP paths. So each 10Gb port on a server would only establish one link at 1Gb for the client, and we are back to needing a boatload of ports on the server to support a somewhat decent user experience. Yes, I know not every user would be utilizing it all the time at the same time, but if you didn't plan to support (and please pay attention) the best experience for as many users as possible, then any investment you made to make this whole complicated setup work in the first place would be a waste the moment your users experienced the choke-out of resources. That's all the user would care about.

Also, 10Gb NICs are becoming standard on servers at near commodity prices, but switches and cabling are still crazy expensive.

You keep bringing up where you work and such, and I don't know what bearing that has here, because this tech clearly wouldn't work with what your company does, unless it was for accelerating very isolated workloads of a particular unit in your organization. Hey, I'm the network administrator for a small company and there is no way I would commit the resources to build this tech out in our single site. If you tried to deploy something like this on an enterprise scale even though it's clearly not viable, you would git wrecked.

Again, the initial point was that it's cool tech, but it can't possibly scale to be useful for anything beyond a speedy setup for a personal network where you only need quick SMB transfers. I'm pretty sure you agree, but you keep insisting that you could make it work, when you really can't.


4 hours ago, Samishere said:

-snip-

Yes, it can support multiple client interfaces to one server interface; the documentation directly states it, not to mention I have tested it to make sure it can.

Quote

When SMB is deployed with SMB Multichannel, SMB creates multiple TCP/IP connections for a single session with at least one or more connections per interface if the network adapters are RSS-capable. This configuration enables SMB to use the combined network adapter bandwidth that is available and makes it possible for the SMB client to continue without interruption if a network adapter fails.

Servers have multiple connections regardless of whether multichannel is in use; it's just a choice of teaming or no teaming. Everything else you would do when setting up networking and the server doesn't change.

You say it makes things complicated, but that is not my view on the matter. If all it requires is changing the way you connect the server, team vs no team, how is that so drastically complicated that it should not be considered? File servers are already massively oversubscribed for network connectivity to clients, so that isn't changing either. Also, there is no extra investment, so it costs no more to use multichannel or not.

SMB multichannel is on by default, so unless you specifically disable it, any newer Windows server or client is going to use it whether anyone wants it to or not.
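If you do want to turn it off (or back on) for testing, a quick sketch using the standard SMB configuration cmdlets from an elevated PowerShell prompt:

Get-SmbClientConfiguration | Select-Object EnableMultiChannel   # check the current client setting
Set-SmbClientConfiguration -EnableMultiChannel $false           # disable multichannel on the client
Set-SmbServerConfiguration -EnableMultiChannel $false           # disable it on the server (revert with $true)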

The original purpose of SMB multichannel was actually for server environments, specifically Hyper-V with VMs stored on SMB/CSV volumes. It is not aimed at small setups and home use, but it is actually useful if you go through the effort of running dual NICs at home.

I'm not sure there is much point discussing this further; we don't agree and that's fine. I can agree this is not for everyone and shouldn't be a major consideration when designing networks and servers, but I see no reason not to utilize it if you can. Just don't go crazy and dual-NIC every client because you can (we both agree on this, I suspect).


  • 2 years later...

Any news about this? I would like to know if macOS can use SMB 3 multichannel with dual NICs in bonded mode to achieve dual connections to the same server session. I've tried this with a 2012 R2 HP server with two bonded Ethernet interfaces and an old Mac Pro 5,1 running High Sierra with bonded Ethernet, and I couldn't make it happen. The server apparently supports SMB 3 multichannel, as far as I could check.
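For what it's worth, a quick way to sanity-check the Windows server side (a sketch using the standard SMB cmdlets; it doesn't tell you anything about the macOS end) is:

Get-SmbServerConfiguration | Select-Object EnableMultiChannel   # is multichannel enabled on the server?
Get-SmbServerNetworkInterface                                    # which server NICs SMB advertises, with speed/RSS/RDMA flags
Get-SmbSession                                                   # lists connected client sessions while you test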

 

It apparently works with macOS and a single 10GbE adapter (a Thunderbolt to 10GbE adapter or an iMac Pro), with up to 4 connections to the same server/session to utilise the whole 10GbE bandwidth. For best performance, the server should preferably have SMB 'require signing' set to disabled/off (LanmanServer in the current control set) and antivirus off, and the macOS client should have "signing=off" in nsmb.conf.

 

LACP hasn't supported dual connections to the same server before SMB 3 multichannel, but I was hoping it would now. I can't check multichannel status in macOS (I don't know how, or whether it's possible). I know the old Intel interfaces in the Mac Pro 2010 support RSS, but maybe not in macOS, and it doesn't matter, as I want the bonded ports to support one connection each to the same Windows server (bonded interfaces for "ease of configuration" - I don't want a multiple-subnet configuration if possible, but that's what Samba 4 with experimental support for SMB multichannel seems to need if running a NAS, for example). I think Windows to Windows supports dual/quad gigabit LACP trunks between a workstation/client OS and a server for up to a 4-gigabit "channel". Didn't Linus write about that?

I might test with a newer Mac and two Ethernet "dongles", and possibly with different subnets instead of LACP trunks, before I give up. It's just a hassle to get all the needed equipment together and/or make configuration changes on production machines...

