
Our CHEAPEST & FASTEST Network Speed Yet!

@honna1612 massive thanks for your original post. I was actually looking into GbE link aggregation when someone linked me the LTT video, and it looks like this is going to have a big impact for me, so thanks again!

13 hours ago, honna1612 said:

I got exactly 10 Gbit at first. It was a cabling problem. Passive copper QSFP+ only gives about 10Gbit. So I searched for QSFP+ fibre and found

Since then I've done heaps of reading into the system, and I don't think ^ is accurate. From all reports there is no reason you shouldn't be able to get the full 32Gbps over passive copper (or even the 80Gbps on the 100GT/s cards). If anyone has links showing otherwise, I would greatly appreciate it.
 

13 hours ago, honna1612 said:

Single file transfers will never go faster than 10Gbit :( But many copies at once will saturate the link and you get 3.2 GByte/s write/read. To fully test the link I used lanbench or multiple file copies at once.

Why won't a single file transfer go faster than 10Gbps? Is this relating to a particular bottleneck somewhere else in the system?
 

13 hours ago, honna1612 said:

with this i could achieve the maximum data rate PCIe 2.0 x8 can handle which is 24 real Gbits.

Correction on ^      PCIe 2.0 x8 == 8 lanes @ 5 GT/s == 40 GT/s raw == 32Gbps after 8b/10b encoding == 4GB/s

I don't know whether the QDR technology could hit the 40Gbps mark if it were able to use more PCIe lanes, or whether the spec only ever meant 40 GT/s through the PCIe interface, which is where the "40Gb" moniker came from. Again, any links would be appreciated.
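The correction above is just arithmetic, so here is the back-of-envelope PCIe 2.0 x8 calculation written out (a sketch that accounts only for 8b/10b line encoding; packet headers and flow control are ignored here):

```python
# Raw PCIe 2.0 x8 bandwidth, counting only the 8b/10b line-encoding overhead.
lanes = 8
gt_per_lane = 5.0           # PCIe 2.0 runs each lane at 5 GT/s
encoding = 8 / 10           # 8b/10b: 8 data bits per 10 bits on the wire

raw_gt = lanes * gt_per_lane        # 40 GT/s aggregate transfer rate
usable_gbps = raw_gt * encoding     # 32 Gbps of data
usable_gbyte = usable_gbps / 8      # 4 GB/s

print(raw_gt, usable_gbps, usable_gbyte)  # 40.0 32.0 4.0
```

Protocol overheads push the real figure lower, which is exactly what the Mellanox numbers further down the thread show.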
 

13 hours ago, honna1612 said:

You dont need Windows Server at all. Mellanox works on fresh installed windows 10 without drivers.

@honna1612 what do you mean by that?


10Gbit: My first cable was rated for 10Gbit, and the speed wouldn't go faster than about 12Gbit. Passive copper 40Gbit cables are only about 2 metres long; I didn't find any longer, though they could exist. I went with a 10 metre 40Gbit optic fibre cable for 50 euros and have never had problems since.

For single files the speed wouldn't go faster than 1020 MByte/s on my system. CPU consumption was 12%, so there is your bottleneck. (Maybe settings in Device Manager could boost this a little.)

Your information is wrong. I called Mellanox because I wanted to know how fast their 40Gbit card really is; the answer was:

PCI_LANES(8) × PCI_SPEED(5) × PCI_ENCODING(0.8) × PCI_HEADERS(128/152) × PCI_FLOW_CONT(0.95) = 25.6 Gbit. That is the usable speed.
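For anyone who wants to check it, the figure Mellanox quoted works out exactly, reproduced term by term (the factor names follow the formula above; the interpretations in the comments are my reading of them):

```python
# The Mellanox usable-throughput figure for a PCIe 2.0 x8 card.
PCI_LANES = 8
PCI_SPEED = 5             # GT/s per lane (PCIe 2.0)
PCI_ENCODING = 0.8        # 8b/10b line encoding
PCI_HEADERS = 128 / 152   # 128 payload bytes per 152 bytes on the wire
PCI_FLOW_CONT = 0.95      # flow-control overhead

usable_gbit = (PCI_LANES * PCI_SPEED * PCI_ENCODING
               * PCI_HEADERS * PCI_FLOW_CONT)
print(round(usable_gbit, 1))  # 25.6
```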

Mellanox cards work on Windows 10 out of the box; no extra driver is needed. Windows Server uses InfiniBand directly rather than IPoIB (IP over InfiniBand), which means Windows Server uses RDMA to copy files. That in turn means Windows Server can copy single files at 24 Gbit/s, in contrast to the IP-stack bottleneck on Windows 10, which limits single-file transfers to 10Gbit.


8 hours ago, honna1612 said:

I called Mellanox because I wanted to know how fast their 40Gbit card really is; the answer was:

PCI_LANES(8) × PCI_SPEED(5) × PCI_ENCODING(0.8) × PCI_HEADERS(128/152) × PCI_FLOW_CONT(0.95) = 25.6 Gbit. That is the usable speed.

Good work getting those specs from Mellanox; I was just going off the PCIe standards. I often find it incredibly difficult to track down the actual throughput a card is capable of. These days they don't even include the bus-lane diagram in motherboard manuals. There are only two reasons I ever look at a motherboard manual: lane diagrams, to check for potential bottlenecks, and the pin layout for case buttons and LEDs, which aren't always printed on the board. If you're selling a card rated at "40GT/s", you should probably say in your specs that there is a bottleneck limiting it to 25.6Gbps.

 

Why manufacturers don't put these specs directly in the "tech specs" part of a product description is beyond me; that's why people look at those documents in the first place.

I did see someone selling a 7m QSFP+ passive copper cable where they stated it would run at 40Gbps, but I don't entirely trust it. If anyone has used long (3m+) QSFP+ passive copper cables rated at 40Gbps, please let us know your experience.

 

@honna1612 What were you using as read and write media on either side of your connection? Obviously in the LTT video they used RAM disks to mitigate secondary-storage read/write limitations. You are definitely right, though: if you can get fibre at a reasonable price it can be a much better option, with longer lengths, lighter and more flexible cable, and no susceptibility to EMI. Unfortunately it seems prices in my area are much dearer.

Regardless of the bottlenecks, it's still incredibly fast and an amazing technology that we can practically use at home for high-end applications; I just wish they gave us all the info upfront.


I also used a ramdisk. If you have the 960 Pro M.2 drive on both sides you would also see benefits from InfiniBand, since its 1.5 GByte/s reads would be bottlenecked by a 10Gbit NIC.

Infiniband switches are also crazy cheap:

http://www.ebay.at/itm/489184-B21-519134-001-HP-BLC-4X-QDR-Infiniband-Switch-Includes-VAT-and-warranty-/132080425922?hash=item1ec09b5bc2:g:ImoAAOSw44BYj1EM

16 ports of 40Gbit for 250 dollars. Ethernet gear doesn't come anywhere close.

Yeah, that's true for all marketing :/

But you can also get used ConnectX-3 cards on eBay now, which use PCIe 3.0. These should be able to handle a real 48 Gbit (6 GByte/s):

http://www.ebay.at/itm/IBM-Mellanox-ConnectX-3-FDR-VPI-IB-E-Adapter-56Gbps-PCI-E-00D9552-/371432460884?hash=item567b199254:g:r3sAAOSwgQ9V7UjR
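Plugging PCIe 3.0 x8 into the same overhead model Mellanox quoted earlier gives a figure in the same ballpark as that 48 Gbit claim (an assumption on my part that the header and flow-control factors carry over unchanged; PCIe 3.0 does switch to the much cheaper 128b/130b encoding):

```python
# The earlier overhead model applied to a PCIe 3.0 x8 slot.
lanes = 8
gt_per_lane = 8.0         # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130      # 128b/130b line encoding (vs 8b/10b on PCIe 2.0)
headers = 128 / 152       # assumed unchanged from the Mellanox figure
flow_control = 0.95       # assumed unchanged from the Mellanox figure

usable_gbit = lanes * gt_per_lane * encoding * headers * flow_control
print(round(usable_gbit, 1))  # 50.4
```

So roughly 50 Gbit usable, consistent with "a real 48 Gbit" once driver and OS overheads are added.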

 

I wondered why Linus even bothered with the ConnectX-2 stuff when ConnectX-3 is also available for relatively low cost compared to ethernet.


3 months later...
On 2/16/2017 at 5:03 PM, nicklmg said:

Buy network cards on Amazon: http://geni.us/KG3N

 

Get a lightning fast local network speed for less than $100!

 

 

I just bought a server with 256GB of RAM, and I would like to set up a 200GB RAM drive that I can access from my computer at 40Gbps. Is this possible?
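It should be, at least in principle. As a rough configuration sketch, if the server runs Linux, a tmpfs mount exported over NFS is the simplest route (the mount point here is made up for illustration; on Windows Server you would need a third-party RAM-disk driver instead, and actual throughput depends on the NIC and bottlenecks discussed above):

```shell
# Sketch only: assumes a Linux server with 256 GB of RAM and NFS installed.
# tmpfs contents are lost on reboot.

# 1. Create a 200 GB RAM-backed filesystem.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=200G tmpfs /mnt/ramdisk

# 2. Export it over NFS so the client can mount it across the 40Gbit link.
echo '/mnt/ramdisk *(rw,async,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
```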

 

