
Ultimate 40Gb Ethernet for ULTRA fast File Sharing

As I regularly watch LinusTechTips on YouTube, I stumbled across a video called "10x Your Network Speed... On a Budget!"

 

I thought that was still too slow, so I googled alternative technologies and found that InfiniBand can also do normal IP networking.

So I bought two InfiniBand adapters with 40Gbit link speed each and a 7-meter QSFP cable, all in all for less than 100 dollars.

Something like this:

http://www.ebay.at/itm/Mellanox-ConnectX-2-VPI-Network-Adaptor-PCIe-Server-Card-/331765696568

 

I find the technology really fascinating, as normal 40GbE adapters are VERY expensive and I had never heard of InfiniBand before. I bought two cards and they show up with 32Gbit. File sharing speed is at about 5 GBytes/s (not Gigabit)! I can't find anything wrong with this solution. It works, is cheap, and behaves like any other NIC (because of IP over InfiniBand).

So normal SMB and other network stuff works as usual.

 

So my question is whether any of you have also tried this, and whether someone on this channel could take a look at this and point out the possible drawbacks. (Maybe on YouTube? :D)

[Attached image: Infiniband.jpeg]


2 minutes ago, wrathoftheturkey said:

Dafuq you tryna do?

The newest 960 Pro SSDs read at about 3.2 gigabytes a second. This way I can copy data as fast as if it were local. Another use would be sharing a RAM disk over the network.
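
Rough numbers, just as a sanity check (a quick back-of-the-envelope sketch; the efficiency factor is an assumption, real throughput depends on drivers and protocol overhead):

```python
# Back-of-the-envelope: usable link throughput vs. a 960 Pro's ~3.2 GB/s
# sequential read. The 0.9 efficiency factor is only an assumption.
SSD_READ_GBS = 3.2                      # Samsung 960 Pro rated sequential read, GB/s
EFFICIENCY = 0.9                        # assumed usable fraction of the raw line rate
links_gbps = {"1 GbE": 1, "10 GbE": 10, "32 Gb IPoIB": 32}

for name, gbps in links_gbps.items():
    usable = gbps / 8 * EFFICIENCY      # GB/s actually available for file data
    limit = "network" if usable < SSD_READ_GBS else "SSD"
    print(f"{name:12s} ~{usable:.2f} GB/s usable -> limited by the {limit}")
```

So a single 960 Pro already outruns 10GbE, and only around the 32Gbit these cards link at does the SSD become the bottleneck again.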


2 minutes ago, wrathoftheturkey said:

Yes, but sharing and copying what? The entire internet?

For the small storage capacity of 960 Pro SSDs, 40Gb networking is basically useless. It's like building a huge pipeline to move a few gallons of water.



Wow guys, think outside of the box for a moment. Today, if you have a single SSD, network transfer is limited by a 10GbE card. The same goes for a RAID with more than 1.2 GBytes/s read (about an 8-drive RAID 0). This is a solution that costs as much as 10GbE but is significantly faster. Please don't spread your 640K mentality.


Well, great for single PC-to-PC transfers, but if you need more than that you hit all the standard problems with InfiniBand. You need InfiniBand switches and an InfiniBand-to-Ethernet gateway, which can be as simple as a Linux server with an InfiniBand adapter and an Ethernet adapter, or an InfiniBand switch that can do this.

 

This still doesn't address the practical-use problem. Other than copying files between computers, nothing will be able to use this bandwidth in general computer usage. Which raises the question: why are you copying data between computers often enough to justify this? Just do what I do and leave it on the server.

 

My entire Steam library is on my server, on an array of six 512GB 840/850 Pro SSDs, using a mounted VHDX hosted on an SMB3 Multichannel share over X540-T1 10Gb NICs. At no point do I get anywhere near 10Gb of bandwidth other than when running a disk benchmark or copying from a RAM disk. However, with my setup I am not confined to just two computers; my whole network benefits from my server having 10Gb.

 

Sure, these are cheap on eBay, but so are 10Gb Ethernet adapters, and they are more flexible.


I thought this forum was for enthusiasts: not for those who ask "who needs this?", but for those who ask "what's the limit and how do I break it?"

Defending older standards won't get anyone further. We have been stuck with 1GbE for more than a decade, and 10GbE is still very expensive.


1 hour ago, honna1612 said:

I thought this forum was for enthusiasts: not for those who ask "who needs this?", but for those who ask "what's the limit and how do I break it?"

Defending older standards won't get anyone further. We have been stuck with 1GbE for more than a decade, and 10GbE is still very expensive.

Yes, but we don't all buy X99/C612 motherboards and 22-core Intel Xeons, for the same reason: there is no practical use for them as a gamer or home user.


1 hour ago, leadeater said:

There is no practical use for it as a gamer or home user.

 

Just because YOU don't have a use for it at home does not mean nobody else does. Take the newest movies in 4K, for example: the file size is about 30 GByte even with HEVC, and Quantum Break or Gears of War needs 80 GByte+. Transferring that much data from A to B is way too slow with 1Gbit, 10Gbit is too expensive with RJ45 connectors, and moving an external drive from A to B is impractical.
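
Quick back-of-the-envelope times (raw line rate only, so real transfers are somewhat slower):

```python
# Best-case transfer times for the file sizes mentioned above, ignoring
# protocol overhead and disk speed limits.
sizes_gb = {"30 GB movie": 30, "80 GB game": 80}
for label, size in sizes_gb.items():
    for gbps in (1, 10, 40):
        seconds = size * 8 / gbps
        print(f"{label} over {gbps:>2} Gbit/s: ~{seconds:>4.0f} s")
```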

 

The cheap way to go here seems to be Thunderbolt or InfiniBand. Both support 40Gbit file transfer from A to B for under 100 dollars.

InfiniBand switches are cheaper than 10GBase-T ones, too.


9 hours ago, honna1612 said:

 

Just because YOU don't have a use for it at home does not mean nobody else does. Take the newest movies in 4K, for example: the file size is about 30 GByte even with HEVC, and Quantum Break or Gears of War needs 80 GByte+. Transferring that much data from A to B is way too slow with 1Gbit, 10Gbit is too expensive with RJ45 connectors, and moving an external drive from A to B is impractical.

The cheap way to go here seems to be Thunderbolt or InfiniBand. Both support 40Gbit file transfer from A to B for under 100 dollars.

InfiniBand switches are cheaper than 10GBase-T ones, too.

Again, why are you copying so much data around? Most people have a single high-end gaming computer, so where are they going to use the 10GbE or 40Gb IB? Leave the data on the server and stream it; 4K doesn't use anywhere near 1Gbps, and loading games uses even less.
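
For a rough sense of why streaming is fine (a sketch assuming roughly a 2-hour runtime for a 30 GB file):

```python
# Average bitrate of a 30 GB movie spread over an assumed ~2 hour runtime.
size_gb, runtime_s = 30, 2 * 3600
avg_mbps = size_gb * 8 * 1000 / runtime_s
print(f"~{avg_mbps:.0f} Mbit/s average")   # ~33 Mbit/s, a small fraction of 1 GbE
```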

 

What is your A and what is your B, and why are you copying all of the data between them? There are better and more efficient ways to do this without buying any extra hardware and adding complexity.

 

Sure, this is going to sound rather elitist, but I would actually be one of the few people on this forum who could make use of such bandwidth at home, and that has nothing to do with large games or movies. I have 4 dual-socket 1366 servers and 1 IBM x3500 M4 that I use to run an extensive home lab, and I mostly use all-SSD storage.

 

Just because you copy tons of data around (frankly, I have no idea why) doesn't mean everyone else does the same or wants to.

 

Edit: Also, I'm not saying don't do it or you shouldn't have done it. If you have done it and find it useful, great. What I am saying is that it isn't all positives with no downsides, and not everyone could use it. There are also very cheap used 10GbE NICs on eBay for the same prices, which don't come with some (not all) of the downsides of InfiniBand. 10GBase-T is not the only 10GbE offering; you can use 10GbE SFP+, which is almost exactly what you are doing, and you can also get cheap used switches with SFP+ ports for the devices that need/can use it and 1Gb ports for the rest.


  • 5 weeks later...

Considering this for a video, but need the drivers as our units didn't come with any for Windows. Anyone got them? PM ME! :D 

 

EDIT: Model number is QLE7340



 

On 10/11/2016 at 0:52 AM, honna1612 said:

Transferring that much data from A to B is way too slow with 1Gbit, 10Gbit is too expensive with RJ45 connectors, and moving an external drive from A to B is impractical.

 

We now have the new 802.3bz standard, which makes it possible to get 2.5 Gbps over regular Cat5e cables and 5 Gbps over regular Cat6 cables, at a much lower overall cost than 10 Gbps.

 

This would make it possible to transfer a 30 GB movie within a minute (30 × 8 / 5 ≈ 48 seconds), if the media can keep up with the 5 Gbps writes (~600 MB/s).

 

It's still a new standard; no cards or switches yet, as far as I'm aware.


  • 3 months later...

Which drivers did you use? In the video, Linus said they could not run these cards in Windows due to driver problems and opted for the Mellanox MHQH19B-XTR, which is more expensive, at least at the prices I found...


People going out and rushing to buy this clearly haven't got a clue, unless you are planning to run a permanent RAM disk for your transfers (which is fucking ludicrous, as no one has that much RAM), or you have multiple NVMe-based SSDs to even break the 2Gbit/s sustained sequential throughput barrier.

 

I have loads of this shit lying around in the DC. InfiniBand is fine for a certain crossover use case like the video suggests, but it needs more £££/$$$ if you wish to expand on it. InfiniBand switching and routing is not cheap; you can use a Linux/BSD box as a routing bridge between Ethernet and InfiniBand to save costs on the router front, if you know what you are doing.

 

I would advise people to look at 10GbE instead, as it can at least be expanded later without the upward costs of InfiniBand switching, something that is clearly ignored in the information provided. Also, a 10GbE crossover would be around half the price using second-hand parts such as a Mellanox ConnectX-3 and an appropriate cable.



WOW, this post actually became a video. What an honor!

 

I didn't buy the adapter I linked in the original post (I just quickly searched for another one so that global users could find one easily). I actually bought two of these:

http://www.ebay.at/itm/Mellanox-ConnectX-2-VPI-Network-Adaptor-PCIe-Server-Card-/331765696568

(ConnectX-2 MHQH19B-XTR)

 

I used the WinOF drivers and started opensm.exe (in the installer directory) on one machine; then no IP configuration is needed, because Windows 10 will bring up the fastest link after about 2 minutes.


I got exactly 10 Gbit at first. It was a cabling problem: passive copper QSFP+ only gives about 10Gbit. So I searched for QSFP+ fibre and found

http://www.ebay.com/itm/Finisar-FCBN414QB1C10-QSFP-TO-QSFP-InfiniBand-Optic-Network-Cable-WUJ02R4-10m-/112231143635?hash=item1a217f58d3:g:Z5oAAOSwZJBX-iGF


After that I still got exactly 16 Gbit RAM disk to RAM disk. Still too slow for me.

Installing WinOF (Mellanox's driver)

http://www.mellanox.com/downloads/WinOF/MLNX_VPI_WinOF-5_35_All_win2016_x64.exe

gives a special tab in the Device Manager properties with an option called "Speed up for single port".

That was it.


Single-file transfers will never go faster than 10Gbit :( but many copies at once will saturate the link, and you get 3.2 GByte/s write/read. To fully test the link I used LANBench or multiple file copies at once; a rough multi-stream sketch is included below.

http://www.zachsaw.com/?pg=lanbench_tcp_network_benchmark

With this I could achieve the maximum data rate PCIe 2.0 x8 can handle, which is about 24 real Gbits.
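
If anyone wants to reproduce the multi-stream behaviour without LANBench, here is a rough Python sketch of the same idea (the port, stream count and per-stream volume are made-up example values; a single TCP stream is often CPU- and window-limited, which is why several streams in parallel get closer to line rate):

```python
# Minimal multi-stream TCP throughput test (an illustrative sketch, not LANBench).
# Run "python tcptest.py server" on one machine, then
# "python tcptest.py client <server-ip>" on the other. The port, stream count
# and per-stream volume below are arbitrary example values.
import socket
import sys
import threading
import time

PORT = 5201                      # arbitrary example port
STREAMS = 8                      # one TCP stream rarely fills a 40Gb link
CHUNK = 4 * 1024 * 1024          # 4 MiB send/receive buffer
PER_STREAM_BYTES = 4 * 1024**3   # 4 GiB pushed per stream

def drain(conn):
    """Receive and discard everything on one connection."""
    with conn:
        while conn.recv(CHUNK):
            pass

def server():
    with socket.create_server(("", PORT)) as srv:
        print(f"listening on :{PORT}")
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=drain, args=(conn,), daemon=True).start()

def send_stream(host, payload):
    """Open one connection and push PER_STREAM_BYTES of dummy data."""
    with socket.create_connection((host, PORT)) as s:
        sent = 0
        while sent < PER_STREAM_BYTES:
            s.sendall(payload)
            sent += len(payload)

def client(host):
    payload = bytes(CHUNK)       # zero-filled dummy data
    threads = [threading.Thread(target=send_stream, args=(host, payload))
               for _ in range(STREAMS)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    total_gbit = STREAMS * PER_STREAM_BYTES * 8 / 1e9
    print(f"{total_gbit:.0f} Gbit in {elapsed:.1f} s = {total_gbit / elapsed:.1f} Gbit/s aggregate")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

As a side note on the PCIe figure: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, i.e. 4 Gbit/s of usable bandwidth per lane, so an x8 slot tops out at 32 Gbit/s before packet overhead; getting roughly 24 "real" Gbit/s is in line with that.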


Conclusion: still below 100 dollars, and 3.2 GByte/s read/write. The cables cost more than the adapters.

You don't need Windows Server at all. The Mellanox cards work on a freshly installed Windows 10 without extra drivers.

 

 

 


4 hours ago, honna1612 said:

I got exactly 10 Gbit at first. It was a cabling problem: passive copper QSFP+ only gives about 10Gbit. So I searched for QSFP+ fibre and found

We have QSFP+ DAC cables operating just fine at 40Gb, but these are certified, official cables for the devices in use. I guess QSFP+ gear on the used market is all first-generation stuff that has a few compatibility and support issues.

 

Also, that Mellanox ConnectX-2 is way better than the QLogic originally mentioned :)


  • 2 weeks later...

I'm building a game hosting company with some friends. This technology would be very helpful for reducing server costs: having a central high-speed storage array instead of needing large drive arrays on the individual machines themselves. Or, if the smaller arrays on the game servers prove fast enough, these cards could still provide a high-speed backbone for quick server backups and snapshots.

Not to mention creating virtual machine clusters. If the virtual machine drives can all be run from high-speed central storage, having multiple machines in the cluster would be fantastic for failover and server migration.


14 hours ago, StudMuffin said:

I'm building a game hosting company with some friends. This technology would be very helpful for reducing server costs: having a central high-speed storage array instead of needing large drive arrays on the individual machines themselves. Or, if the smaller arrays on the game servers prove fast enough, these cards could still provide a high-speed backbone for quick server backups and snapshots.

Not to mention creating virtual machine clusters. If the virtual machine drives can all be run from high-speed central storage, having multiple machines in the cluster would be fantastic for failover and server migration.

You wouldn't need 40Gb for that, and this type of setup isn't stable enough to bet your business on. Use 10GbE; you can multipath for more bandwidth, but to be frank you don't need more. We run well over 1,000 VMs on just two 10Gb paths, high-end VMs too.


3 hours ago, leadeater said:

You wouldn't need 40Gb for that, and this type of setup isn't stable enough to bet your business on. Use 10GbE; you can multipath for more bandwidth, but to be frank you don't need more. We run well over 1,000 VMs on just two 10Gb paths, high-end VMs too.

Same; we have multiple virtualised environments with a 10Gb backbone running in excess of 3,000 VMs, and even with all those it never gets anywhere near 10Gbit/s.

 

17 hours ago, StudMuffin said:

I'm building a game hosting company with some friends. This technology would be very helpful for reducing server costs: having a central high-speed storage array instead of needing large drive arrays on the individual machines themselves. Or, if the smaller arrays on the game servers prove fast enough, these cards could still provide a high-speed backbone for quick server backups and snapshots.

Not to mention creating virtual machine clusters. If the virtual machine drives can all be run from high-speed central storage, having multiple machines in the cluster would be fantastic for failover and server migration.

40Gbit InfiniBand is not cheap at all. In this admittedly silly configuration it's a crossover cable between 2 individual nodes, and the cards and cables are cheap second-hand. However, start adding InfiniBand switching, an InfiniBand router and/or a custom InfiniBand-to-Ethernet router, plus the licensing for each switch port, and you have suddenly dumped major money on licensing alone for 40Gbit+ ports.

 

This topic gives people the wrong idea. In fact, I urge someone to go ahead and buy a second-hand InfiniBand switch with no licensed ports and try to buy licensing for it from the manufacturer while it's out of warranty. You will suddenly realise why InfiniBand switches also pop up cheap on eBay and look like an amazing purchase: unless someone has left the port licences in place, you are out of luck.



On 1.3.2017 at 6:08 PM, StudMuffin said:

I'm building a game hosting company with some friends. This technology would be very helpful for reducing server costs: having a central high-speed storage array instead of needing large drive arrays on the individual machines themselves. Or, if the smaller arrays on the game servers prove fast enough, these cards could still provide a high-speed backbone for quick server backups and snapshots.

Not to mention creating virtual machine clusters. If the virtual machine drives can all be run from high-speed central storage, having multiple machines in the cluster would be fantastic for failover and server migration.

Don't ever listen to the people who always say no (the people above this post); the same guys insisted nobody needs 40Gbit at all, and yet LTT made a video about it.

40Gbit looks cheap at first; the expensive part is the cabling. Going above 3 meters means buying fibre optics for 40Gbit. I managed to get a single 10m cable for 70 dollars, which is very cheap. You would otherwise pay around 150 dollars (eBay price, not new) per 10m for 40G-rated QSFP cables.

 

The good answer for you is that some Mellanox cards are InfiniBand and Ethernet NICs on the same port. So you can just plug in a good old QSFP-to-RJ45 10G transceiver module and connect to existing network gear. If a particular link needs to be faster, just plug in QSFP fibre optics and you get the full 40Gbit.

 

I would suggest buying a ConnectX-3, which really does 40Gbit (not 24 like the ConnectX-2); you pay around the same price as for a 10Gbit Ethernet NIC, and you also get a 10Gbit Ethernet NIC inside the InfiniBand one.

 

I personally don't install games locally anymore. I have my NAS, which is 12TB and can do 700MB/s. All my games are installed on the NAS and can be accessed from any PC. With the ConnectX-2 I get the full 700MB/s on every machine and all storage is centralized. (Which really is a very good solution.)


On 3/3/2017 at 1:36 PM, honna1612 said:

So you can just plug in a good old QSFP-to-RJ45 10G transceiver module and connect to existing network gear.

You can use SFP+ and QSFP+ direct attach copper cables with Ethernet as well; you don't have to use RJ45. There are some really good, cheap switches on eBay that have 2/4 SFP+ ports and 24/48 1Gb ports. This way you can have 2/4 devices connecting at 10Gb and other devices at 1Gb, but collectively the other devices can make use of the 10Gb if they are accessing a NAS/server on a 10Gb port.

 

The downside to 10Gb RJ45 is that it has higher latency than SFP+/QSFP+ DAC and uses more power.

 

This is a 10Gb Direct Attached Copper (DAC) Ethernet cable.

[Image: Twinax-Cable.jpg]


13 hours ago, honna1612 said:

Don't ever listen to the people who always say no (the people above this post); the same guys insisted nobody needs 40Gbit at all, and yet LTT made a video about it.

40Gbit looks cheap at first; the expensive part is the cabling. Going above 3 meters means buying fibre optics for 40Gbit. I managed to get a single 10m cable for 70 dollars, which is very cheap. You would otherwise pay around 150 dollars (eBay price, not new) per 10m for 40G-rated QSFP cables.

The good answer for you is that some Mellanox cards are InfiniBand and Ethernet NICs on the same port. So you can just plug in a good old QSFP-to-RJ45 10G transceiver module and connect to existing network gear. If a particular link needs to be faster, just plug in QSFP fibre optics and you get the full 40Gbit.

I would suggest buying a ConnectX-3, which really does 40Gbit (not 24 like the ConnectX-2); you pay around the same price as for a 10Gbit Ethernet NIC, and you also get a 10Gbit Ethernet NIC inside the InfiniBand one.

I personally don't install games locally anymore. I have my NAS, which is 12TB and can do 700MB/s. All my games are installed on the NAS and can be accessed from any PC. With the ConnectX-2 I get the full 700MB/s on every machine and all storage is centralized. (Which really is a very good solution.)

Tell me where I said no. I'm pointing out information that was omitted in the original video, to prevent someone wasting cash. Please feel free to go ahead and encourage someone to waste a load of money finding out how much port licensing and warranty contracts cost. I'm providing information to people who go on eBay after videos like this and buy up hardware they know nothing about.

 

I have said 10GbE is a better option, and it is, as the total cost to implement is lower and you don't need a device to convert from InfiniBand to Ethernet when using standard 10GbE. If you need a single 40Gbit crossover-style connection between two devices then sure, go ahead, but as soon as you introduce InfiniBand switching you are going to incur costs beyond your wildest dreams, unless you get a mega-lucky second-hand purchase with all ports and the InfiniBand subnet manager licensed, plus an InfiniBand router to convert back to your Ethernet edge device.

 

I use InfiniBand equipment between our EMC XtremIO SAN interconnects, where latency and throughput make all the difference. It is not used for normal traffic routing, although we have toyed with Linux/BSD-based routers for InfiniBand-to-Ethernet routing on the cheap. Either way, expect to pay serious sums of cash if you go down the route of buying second-hand InfiniBand switching on eBay.



  • 3 years later...

Big salute to honna1612. I can't imagine 40Gbps in 2016, and you tested it back then. I saw the LTT video that drove me here; even they seemed to struggle in the video.
Now, in 2020, things are luckily much better. PCIe 3.0 is here and 4.0 is on the horizon. I understand the point of view of the others above resisting this, since the technology defaults to 10Gbps for a normal user. However, it's really great to be on the highest side possible. I have seen everything from 5 kbps up to 100 Mbps so far in my own experience, and I don't think anyone would willingly choose the slower option; it's just the costs and technical limitations that hold most of us back. And here we are in this thread talking about 40Gbps.
Looking back, it now feels like I have spent a large amount of time waiting for file transfers to finish. Yes, modern times have made data rates faster, but file sizes have also grown larger and more bandwidth-hungry than ever before.
Just a week ago I started copying my files to my NAS (let's ignore the details behind it for now), and it showed hours of waiting, occasionally over a day. That's when I wondered what's out there that can provide the fastest data rate accessible to a common home user with current technology. While checking, I found that QNAP offers a 40GbE Mellanox PCIe 3.0 x8 QSFP card. The card itself was quite expensive new, so I found that exact card and bought it at 1/12th of the price. I got two: one for the NAS and another for my laptop. I also got an active Mellanox QSFP 40Gbps 10m fibre cable at a similarly affordable price, plus an OWC Mercury Helios 3S enclosure that takes a Thunderbolt 3 cable on one side and hosts the Mellanox QSFP card inside. The laptop has four interfaces wired on Thunderbolt, and so do the OWC Mercury Helios 3S and the Mellanox ConnectX-3 adapter. The laptop and NAS both have an NVMe M.2 Seagate FireCuda 520 rated at around 5 GB/s, so I'm hoping to see between 2 GB/s and 2.6 GB/s. If I can achieve even 2 GB/s of real throughput, that will be the highest data rate I have ever experienced, and the waiting will be kept to a minimum.
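
A quick ceiling estimate for that setup (a sketch assuming Thunderbolt 3 carries roughly 22 Gbit/s of PCIe data; the 40 Gbit/s headline figure includes DisplayPort and protocol overhead, so check this against the actual enclosure):

```python
# Where the bottleneck in the laptop-side chain likely sits (rough estimate).
TB3_PCIE_GBPS = 22          # assumed usable PCIe bandwidth over Thunderbolt 3
NIC_GBPS = 40               # ConnectX-3 link rate
NVME_READ_GBS = 5.0         # FireCuda 520 rated sequential read, GB/s

ceiling_gbs = min(TB3_PCIE_GBPS, NIC_GBPS) / 8
print(f"link-side ceiling = {ceiling_gbs:.2f} GB/s, drive = {NVME_READ_GBS} GB/s")
# About 2.75 GB/s, so the hoped-for 2-2.6 GB/s looks realistic: the Thunderbolt
# hop, not the 40Gb card or the SSD, is likely the limiting factor.
```
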
I saw this thread and thought I'd share. honna1612, you are not alone; I'm sure almost everyone would like to go as fast as possible, it's just that it wasn't as easy then as it is now. And of course any experience comes with its own cost in time, money, or energy spent in its pursuit. If I had seen your post in 2016, I would have followed you even then, with whatever improvements I could have made to my setup.
Well done and thanks.


 

