jakkuh_t

Transferring Files at 100 GIGABIT Per Second - HOLY $H!T


Almost a year and a half ago we checked out networking that was supposed to run at 40 gigabit, but now we've stepped up our game into the land of triple-digit gigabits.

 

 

Buy Mellanox ConnectX Cards:
On eBay: http://geni.us/k1gzC

 


7700K, 16GB RAM, GTX 1080 Ti FE, 2*4TB, 512GB 850 Evo


 


I'm going to watch this video when I get done paying my $70 20Mb internet bill! Might even tick the 4K option if Spectrum is feeling up to it.


Sorry guys, but as a technician who codes fiber optics for a manufacturer daily, I need to correct a few things:

 

1. QSFP+ is 40G; if it's 100G, it's QSFP28 (explanation below).

2. (Okay, a slightly picky one.) You can get an SFP/QSFP/etc. converter directly to RJ45, as long as the port is run in Ethernet mode.

3. Running anything longer than 5 meters on a budget is way less expensive than $2,400. Let me explain:

  • AOC is a niche market, hence the high pricing; for normal installations over 5 meters, people use fiber optics.
  • For example, let's say you guys planned to do 10/40/100G over fiber in the whole office. You would most likely use SR optics (short range, often rated at up to 300 meters). For QSFP28, aka 100G, they each cost 100-150 USD from a third-party manufacturer, and then a fiber optic cable, say an MTP 20 m cable, runs 30-60 USD depending on where you get it. So a little above 250 USD if you bought the cheapest (rough math below); I would recommend going with a better-quality seller than the cheapest one, but that's beside the point.
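A quick back-of-the-envelope sketch of that example, using only the ballpark prices above (third-party parts, not quotes; treat it as an estimate, not a bill of materials):

```python
# Rough cost of one 100G fiber run (~20 m): two transceivers plus one cable,
# using the ballpark price ranges mentioned above -- estimates, not quotes.
qsfp28_sr_optic_usd = (100, 150)  # per third-party QSFP28 SR transceiver
mtp_20m_cable_usd = (30, 60)      # per MTP fiber cable, ~20 m

low = 2 * qsfp28_sr_optic_usd[0] + mtp_20m_cable_usd[0]
high = 2 * qsfp28_sr_optic_usd[1] + mtp_20m_cable_usd[1]

print(f"one 100G run: roughly ${low}-${high}")  # ~$230-$360, vs the ~$2,400 AOC route
```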

 

*SFP is 1.25G, SFP+ is 10G, and SFP28 is 25G. QSFP+ is, VERY roughly speaking, four SFP+ lanes strapped together, therefore 40G; QSFP28 is then, funnily enough, quad SFP28, therefore 100G.
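To make the naming concrete, here is a tiny sketch of that "Quad = four lanes" arithmetic using the nominal per-lane rates given above (line-coding overhead ignored):

```python
# Nominal per-lane rates (Gbit/s) for common pluggable form factors.
# The "Q" (Quad) variants are roughly four lanes of the base flavour.
lane_rate_gbps = {"SFP": 1.25, "SFP+": 10, "SFP28": 25}

def quad_rate(form: str) -> float:
    """Aggregate rate of the four-lane ('Quad') version of a form factor."""
    return 4 * lane_rate_gbps[form]

print(quad_rate("SFP+"))   # 40  -> QSFP+  ~ 4x SFP+
print(quad_rate("SFP28"))  # 100 -> QSFP28 ~ 4x SFP28
```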

 

BTW, if you guys ever need help sourcing any high-end fiber optics or passive equipment for fiber networking, hit me up, I might be able to give you a good deal ;)

 

9 minutes ago, mylenberg said:

-snip-

Don't forget about QSFP-DD or OSFP :P

 

A lot of customers these days seem to be migrating away from SR and over to LR for all their runs, even the short ones, since (IIRC) with LR over single-mode they don't have to worry about OM3- vs OM4-rated cables and the different distance limits each one imposes at higher speeds; some optics reach shorter distances over OM3 than over OM4 at the same speed, which can be annoying to deal with.
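For anyone following along, these are the kinds of reach differences being referred to. The figures below are rough, from-memory values for common optics (always check the datasheet of the specific part), sketched as a small lookup:

```python
# Approximate maximum reach in metres for common short-range (-SR) optics
# over multimode fibre grades, vs long-range (-LR) over single-mode.
# Rough, from-memory figures -- check the actual optic's datasheet.
reach_m = {
    ("10GBASE-SR",   "OM3"): 300, ("10GBASE-SR",   "OM4"): 400,
    ("40GBASE-SR4",  "OM3"): 100, ("40GBASE-SR4",  "OM4"): 150,
    ("100GBASE-SR4", "OM3"): 70,  ("100GBASE-SR4", "OM4"): 100,
    ("100GBASE-LR4", "OS2"): 10_000,  # single-mode: the OM3/OM4 question goes away
}

def run_fits(optic: str, fibre: str, run_m: int) -> bool:
    """True if a run of run_m metres is within the (approximate) reach spec."""
    return run_m <= reach_m.get((optic, fibre), 0)

print(run_fits("100GBASE-SR4", "OM3", 90))  # False -- OM3 tops out around 70 m
print(run_fits("100GBASE-SR4", "OM4", 90))  # True
```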



56 minutes ago, Alex Atkin UK said:

I love it when new technology like this comes out, because prices of 10Gigabit server pulls on eBay should start to come down. ;)

You'd likely be surprised to learn this actually isn't... new. Like 10GbE and IB, it's been around a while. It's used primarily in computing clusters, though. Right now they're looking at TbE. Yes, 1 Tb/s Ethernet.


Wife's build: Amethyst - Intel i7-5820k, 16GB EVGA DDR4, ASUS X99-PRO/USB 3.1, EVGA GTX 1080 SC, Corsair Obsidian 750D, Corsair RM1000

My build: Mira - Intel i7-5820k, 16GB EVGA DDR4, ASUS Sabertooth X99, EVGA GTX 1070 SC Black Edition, NZXT H440, EVGA Supernova 1050 GS


I set up the 100G-over-Fabric showcase on Thursday for the Amsterdam OCP event next week.

Yes, that's crazy stuff (from a normal standpoint).

It's running an AIC FB127-LX with 3x100G + MZ4LB3T8HALS-00003 (Samsung 3.8TB M.3 NVMe SSDs), a Mellanox MSN2700-CS2F (32x100G) switch, and 4 compute nodes with 100G each.

But Linus, for your "office use" you should just get ConnectX-3 cards with an MSX1012B-2BRS switch; it's more cost effective at the moment.

In conclusion, some hardware p*rn:

[Photo: 100GbE-over-Fabric showcase hardware]


As someone well versed in this tech sector, all I can say is: god damn, you guys are dumb.

 

You guys went with Mellanox, a place with some seriously shitty drivers, cables and support. You should see how many CRC errors we get on Mellanox switches and cards at work. The jitter on those pieces of shit is insane. You should have gone with a Solarflare NIC for this kind of thing.

 

Second, running this on Windows? Bitch please, the latency on their shitty tech stack is ridiculously bad. You should be using Linux and Onload for this kind of thing.

 

Then no SPDK or NVMe-oF? That is the future, not this InfiniBand and other crap. Almost as low latency as native storage with that tech. That kind of thing is really new, though, but it's getting mainstreamed into the Linux kernel.

 

There is a reason why so many algorithmic traders, financial firms, CDNs, clouds and OEMs use Solarflare NICs.


Great vid, and it once again brings up my curiosity about HPC (high performance computing).

Can I suggest a video about building a home supercomputer by connecting several identical or different computers into one single supercomputer with 100Gbps connections?

I heard this can maybe be done using Windows HPC Server, and we might end up with something like a 256- or 512-core system to try out with some multi-threaded applications, maybe?

 

Don't know if it's doable, but I surely want to see something like this!
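For context, clusters like that usually don't look like one big multi-core machine to ordinary multi-threaded apps; software has to be written to spread work across the nodes, typically with MPI. A minimal, hedged sketch using mpi4py (an assumption here; Windows HPC Server jobs would normally use MS-MPI, but the idea is the same):

```python
# Minimal MPI sketch: every process (possibly on different machines) reports
# in, and rank 0 combines a partial result from all of them.
# Run with something like: mpiexec -n 8 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total processes across all machines

partial = rank * rank                             # each rank's contribution
total = comm.reduce(partial, op=MPI.SUM, root=0)  # combined on rank 0

print(f"rank {rank} of {size} reporting in")
if rank == 0:
    print(f"sum of squares across the cluster: {total}")
```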

6 minutes ago, kedstar99 said:

There is a reason why so many algorithmic traders, financial firms, CDNs, clouds and OEMs use Solarflare NICs.

The CDN and financial customers in my DC may run Solarflare inside their servers, but in the core they still use Juniper MX/QFX & Ciena DWDM systems to uplink to the DE-CIX.

Most financial customers and even fintech startups still rely on 10G or even 1GbE, and their uplinks haven't hit 1G yet, even where the carrier could deliver it...


Consumers have really been forgotten by the networking hardware companies. 1GbE should have been replaced 10 years ago by 10GbE, and in the 2018-2019 time frame, we really should be at the 50+ gigabit range for home networks.

 

1GbE is simply preposterously slow to be using in 2018; it is like trying to sell the Nvidia GeForce 256 to modern gamers today.

30 minutes ago, Razor512 said:

-snip-

There isn't consumer demand for it. That's why we don't have anything faster than 1GbE in most home network setups. It just isn't needed. It'd be a different story if home Internet access bandwidth was in excess of 1Gb everywhere, but that isn't the case. And having multiple computers connected to an Internet connection is generally the only reason most home networks exist. And most of that is wireless.


Wife's build: Amethyst - Intel i7-5820k, 16GB EVGA DDR4, ASUS X99-PRO/USB 3.1, EVGA GTX 1080 SC, Corsair Obsidian 750D, Corsair RM1000

My build: Mira - Intel i7-5820k, 16GB EVGA DDR4, ASUS Sabertooth X99, EVGA GTX 1070 SC Black Edition, NZXT H440, EVGA Supernova 1050 GS

50 minutes ago, Razor512 said:

-snip-

It's nothing new though; motherboard NICs were 100Mbit for WAY longer than they should have been, and even in Gigabit land they still slap crappy Realtek chipsets on the majority of motherboards, stealing nearly 100Mbit off your peak speeds anyway.

That said, I still think it's a minority of home users who need Gigabit, let alone 10Gig. Just read the forum here: most people are still using WiFi, and plenty are still on 2.4GHz.


Those limitations stifle innovation. Imagine if motherboard and router makers started offering at least 10GbE, or perhaps five 10GbE ports and one 40GbE port for a home server; we would see rapid innovation in technologies targeting consumers.

 

For example, if ISPs had restricted consumers to 56k dial-up, we would never have had Netflix.

 

If home networks had been limited to 3Mbps, we would likely never have seen network-attached storage devices being made and sold to consumers.

 

R&D typically does not take place when nothing in the ecosystem can enjoy the fruits of that R&D.


I don't think it typically works like that, because R&D is funded by the insane profit margins on high-end kit. It only comes down to us once the people willing to spend insane-o-money have moved on to something better.

 

Although I do see 10Gig cards on eBay for relatively cheap right now, so it's probably time to roll it out, especially now that we have the cores to handle it on desktop parts.

Some Threadripper boards do in fact have 10Gig, as those are the consumers most likely to be able to take advantage of it.

3 hours ago, JoostinOnline said:

The YouTube comments are just people who think that more lanes will make it go faster. xD

That's what flame stickers are for.


VashTheStampede 4.0:

CPU: AMD Threadripper 1950x | CPU Cooling: EKWB S280 with the EK Supremacy sTR4 RGB Nickel Water Block and Scarlet Red Premix | Compound: Thermal Grizzly Kryonaut | Mobo: Asrock X399 Taichi | Ram: G.Skill Ripjaws V 32GBs (2x16) DDR4-3200 | Storage: Crucial MX500 500GB M.2-2280 SSD/PNY CS900 240GB SSD/Seagate Constellation ES.3 1TB 7200RPM/Toshiba X300 4TB 7200RPM | GPU: Zotac Geforce GTX 1080 8GB AMP! Edition | Case: Fractal Define R5 Blackout Edition w/Window | PSU: EVGA SuperNOVA G2 750W 80+ Gold | Operating System: Windows 10 Pro | Keyboard: Corsair Vengeance K70 with Cherry MX Reds | Mouse: Corsair M65 Pro RGB FPS | Headphones:  AKG K7XX Massdrop Editions | Mic: Audio-Technica ATR2500 | Speakers: Mackie MR624 Studio Monitors

 

Prince of Dark Rock:

CPU: AMD Ryzen 3 2200G (Temp/Upping to a Zen 2 CPU) | CPU Cooling: be quiet! Dark Rock Pro 4 | Compound: Thermal Grizzly Kryonaut | Mobo: Asrock X470 Taichi | Ram: G.Skill Ripjaws V 8GBs (2x4) DDR4-3200 | Storage: Crucial MX200 240GB SSD + Seagate Constellation ES.3 1TB 7200RPM | GPU: EVGA GTX 1060 6GB SSC GAMING | Case: Fractal Focus G | PSU: EVGA SuperNOVA G2 750W 80+ Gold | Optical Drive: Random HP DVD Drive | Operating System: Windows 10 Home | Keyboard: Gigabyte FORCE K83 with Cherry MX Reds | Mouse: Razer DeathAdder Elite Destiny 2 Edition | Speakers: JBL LSR 305 Studio Monitors (at some point)


So what exactly do I need to look for to get something like this set up? (I don't mean exactly what's in the video, just some RDMA networking.) Like what components, what would connect to what, etc. I'm not expecting to fork out right now, but this truly interests me, so I'm looking to gain some knowledge (I have literally no clue).


What's up with the very shaky camera movements throughout this video?


Intel Core i5 4690K 4.2GHz | ASRock Z97 Extreme4 | Kingston HyperX 16GB DDR3 | EVGA 960 4GB SSC SLI | Samsung 850 EVO 500GB | EVGA SuperNOVA NEX 750B | Corsair Hydro Series H50

On 9/29/2018 at 3:23 PM, kedstar99 said:

-snip-

I completely agree with your statement about running this on Linux instead of Windows. Red Hat has had this since at least 2013, I believe: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_infiniband_and_rdma_networks

 

And even if Linus went with Mellanox, the least he could have done is run it on Linux with their OpenFabrics Enterprise Distribution (OFED) stack, which has RDMA support built in.

Or he could have taken a cue from Mellanox's Linux Switch, which runs InfiniBand as well.

I know Linus is just serving the masses by running Windows and such, but when he gets into technical stuff like this, he really should switch to the platform that's used for it by and large. If he has to, bring Wendell in to demo it. I am sure no one would blame him, and Wendell really brings all the "techs to the yard", as it were ^_^.

5 hours ago, Spuz said:

-snip-

A Linux server distro, for one (usually $0). Then get a used 100GbE card, the InfiniBand cable, and a switch that can handle it.
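Once the link is up, a quick way to sanity-check that you're getting more than 1GbE-class throughput is a plain socket pair; a minimal sketch (the address and port are placeholders, and a single TCP stream won't come close to saturating 100GbE or exercising RDMA, but it confirms the link is alive and fast):

```python
# Crude single-stream TCP throughput check between two hosts on the new link.
# The address/port are placeholders; one TCP stream will not saturate 100GbE,
# but it will quickly show whether you are well past ordinary 1GbE numbers.
import socket
import time

ADDR = ("192.0.2.10", 5001)        # placeholder: the receiving host's IP/port
CHUNK = b"\0" * (4 * 1024 * 1024)  # 4 MiB per send
SECONDS = 5

def receiver():
    with socket.create_server(ADDR) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1 << 20):  # drain until the sender disconnects
                pass

def sender():
    sent = 0
    start = time.time()
    with socket.create_connection(ADDR) as s:
        while time.time() - start < SECONDS:
            s.sendall(CHUNK)
            sent += len(CHUNK)
    gbits = sent * 8 / (time.time() - start) / 1e9
    print(f"~{gbits:.1f} Gbit/s over a single TCP stream")

# Run receiver() on one machine and sender() on the other.
```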


I have a suggestion: try the same tests again, but on Linux. If I remember correctly, Linux handles the hardware differently than Windows; I think it gets more direct access to the hardware, whereas Windows has to pass through a few more layers before things get accessed. So if that's not too much trouble, I'd love to see you guys run this test again under Ubuntu Linux.


AMD Ryzen 2 2700 3.2Ghz Pinnacle Ridge | Asus ROG X470-F GAMING | Corsair Vengeances RGB 32GB 3000Mhz | EVGA Nvidia Geforce GTX 980 Ti | EVGA G2 SuperNova 750 Watt PSU


Hey, can you test them in a scenario where you have multiple Windows roaming profiles stored on one server, each over 80GB in size, to see how long it takes the profiles to load if they all start up at once?

