
We Got 100 Gigabit Networking!... HOLY $H!T

jakkuh_t

Back in 2018, we tried out a direct system-to-system 100GbE link with the help of Mellanox. This time, we've got a switch and a plan to deploy what will be our fastest networking EVER. HOLY $H!T.

 

 


PC: 13900K, 32GB Trident Z5, AORUS 7900 XTX, 2TB SN850X, 1TB MP600, Win 11

NAS: Xeon W-2195, 64GB ECC, 180TB Storage, 1660 Ti, TrueNAS Scale


Can't wait for Linus to stream building a crap ton of PCs because of 12K and 100Gb networking /s

Chicago Bears fan, Bear Down

 


It's cool and all, and those are some really impressive numbers, but what is the benefit?

I get the advantages for Linus, CEO of LMG, but what about me, the viewer?


3 minutes ago, Sacredsock said:

It's cool and all, and those are some really impressive numbers, but what is the benefit?

I get the advantages for Linus, CEO of LMG, but what about me, the viewer?

If you're asking this, there isn't really any; honestly, it's only useful if you're a company with tons of large files.


 


Given how cheap used 100Gb network switches are now, I expected a real one. Nice clickbait to swap it for a 48x25Gb switch with 6 100Gb uplinks.

Good luck, Have fun, Build PC, and have a last gen console for use once a year. I should answer most of the time between 9 to 3 PST

NightHawk 3.0: R7 5700x @, B550A vision D, H105, 2x32gb Oloy 3600, Sapphire RX 6700XT  Nitro+, Corsair RM750X, 500 gb 850 evo, 2tb rocket and 5tb Toshiba x300, 2x 6TB WD Black W10 all in a 750D airflow.
GF PC: (nighthawk 2.0): R7 2700x, B450m vision D, 4x8gb Geli 2933, Strix GTX970, CX650M RGB, Obsidian 350D

Skunkworks: R5 3500U, 16gb, 500gb Adata XPG 6000 lite, Vega 8. HP probook G455R G6 Ubuntu 20. LTS

Condor (MC server): 6600K, z170m plus, 16gb corsair vengeance LPX, samsung 750 evo, EVGA BR 450.

Spirt  (NAS) ASUS Z9PR-D12, 2x E5 2620V2, 8x4gb, 24 3tb HDD. F80 800gb cache, trueNAS, 2x12disk raid Z3 stripped

PSU Tier List      Motherboard Tier List     SSD Tier List     How to get PC parts cheap    HP probook 445R G6 review

 

"Stupidity is like trying to find a limit of a constant. You are never truly smart in something, just less stupid."

Camera Gear: X-S10, 16-80 F4, 60D, 24-105 F4, 50mm F1.4, Helios44-m, 2 Cos-11D lavs


14 minutes ago, GDRRiley said:

Given how cheap used 100Gb network switches are now, I expected a real one. Nice clickbait to swap it for a 48x25Gb switch with 6 100Gb uplinks.

I mean they did say they were going 25GbE to the editors in the last video so.....

Current Network Layout:

Current Build Log/PC:

Prior Build Log/PC:


40 minutes ago, Sacredsock said:

It's cool and all, and those are some really impressive numbers, but what is the benefit?

I get the advantages for Linus, CEO of LMG, but what about me, the viewer?

I have the same switch spec in my house; it's awesome.


8 minutes ago, Lurick said:

I mean they did say they were going 25GbE to the editors in the last video so.....

Right, but the title says 100Gb, and it's easy to take a 100Gb port and feed 4x 25Gb lines out of it.



Just now, GDRRiley said:

Right, but the title says 100Gb, and it's easy to take a 100Gb port and feed 4x 25Gb lines out of it.

I mean, you could technically say it's 100Gbps to the server, which is correct. I know you can easily use breakout cables on the 100Gb ports to get 128x 25GbE ports off the switch, but eh.
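As a rough sketch of that breakout arithmetic (the 32-port QSFP28 switch below is an assumed example, not the exact model in the video):

```python
# Rough breakout arithmetic for a hypothetical 32-port 100GbE (QSFP28) switch.
# Each QSFP28 port carries four 25 Gb/s lanes, so a 4x25GbE breakout cable
# turns one 100GbE port into four independent 25GbE ports.

QSFP28_PORTS = 32      # assumed port count for a typical 1U 100GbE switch
LANES_PER_PORT = 4     # QSFP28 = 4 lanes of 25 Gb/s

breakout_25g_ports = QSFP28_PORTS * LANES_PER_PORT
total_capacity_gbps = QSFP28_PORTS * 100

print(f"{breakout_25g_ports} x 25GbE ports via breakout")              # 128 x 25GbE
print(f"{total_capacity_gbps} Gb/s aggregate front-panel capacity")    # 3200 Gb/s
```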



Why use MM fiber? Savings on the SFPs?


17 minutes ago, Dkte said:

Why use MM fiber? Savings on the SFPs?

Probably just because they don't need the distance, but you're right, there isn't much point, and the savings between MM and SM 25G optics aren't exactly amazing either. Roughly a $20 difference each, so maybe $2,000 or so total for the 48 ports on the switch plus the PC cards, assuming they even populated all the ports.
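Back-of-envelope version of that estimate (a minimal sketch using the ~$20-per-optic gap above; the port and NIC counts are assumptions):

```python
# Rough MM vs SM optic cost difference, assuming a ~$20 price gap per 25G
# optic and one optic at each end of every link (switch port + NIC port).

price_gap_per_optic = 20   # USD, assumed MM vs SM difference per 25G optic
switch_ports = 48          # 25GbE ports on the switch
nic_ports = 48             # matching optics in the workstation NICs

total_extra_cost = price_gap_per_optic * (switch_ports + nic_ports)
print(f"~${total_extra_cost} extra to go single-mode everywhere")   # ~$1920
```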



I am a bit 'OCD-triggered' because the captioning is wrong in a bunch of places despite being done manually…

e.g.:

"editor you're going to have to bleep all that" -> "(indistinct)"

"Whonnock server" -> "One-X server"

 


Linus: "well, if we run Linux we'd probably get the 100Gb/s"

 

Well, why don't you? You did it for the server video.

"You don't need eyes to see, you need vision"

 

(Faithless, 'Reverence' from the 1996 Reverence album)


To be fair, Direct Attach Copper (DAC) cables can reach a fair few meters.

For short runs, fiber frankly only adds extra unreliability from a system standpoint. (Fiber is notorious for dirt ingress degrading the connection by a noticeable amount; even if a connector looks clean to the eye, it can still be dirty. Electrical connections care a lot less about dust.)

 

Though after about 10-15 meters, DAC isn't really an option for 25 Gb networking...

 

Also, the reason RJ45 connections need more processing than SFP modules is how simple SFP signaling is compared to multi-gigabit Ethernet over RJ45. Multi-gigabit Ethernet sends data over 4 differential pairs using a fairly complex protocol for signal-integrity reasons. SFP and QSFP ports are simple optocouplers in comparison: put a 1 on the input of A, and the output of B will soon be a 1 as well. SFP modules do have specs for clock jitter, but a 25 Gb/s module simply has low enough jitter to do 25 Gb/s. "QSFP" modules just have 4 inputs and 4 outputs, so it is like having 4 SFP modules in one sharing a single cable, but they work very similarly. SFP/QSFP modules also have an I2C port, but that is mostly for reading the module's spec sheet so the switch knows its limits, or for vendor-locking which SFP modules can be used...
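For anyone curious what that I2C data looks like, here is a minimal sketch of decoding a few identification fields from an SFP module's EEPROM. The byte dump is made up for illustration, and the field offsets are from my reading of SFF-8472, so double-check them against the spec before relying on them:

```python
# Decode a handful of identification fields from an SFP's A0h EEPROM page,
# the data a switch reads over the module's two-wire/I2C interface.
# Assumed offsets (per my reading of SFF-8472): vendor name at bytes 20-35,
# vendor part number at 40-55, serial number at 68-83, nominal signalling
# rate at byte 12 in units of 100 Mb/s.

def decode_sfp_a0h(eeprom: bytes) -> dict:
    """Decode a few SFF-8472 A0h identification fields from a raw dump."""
    return {
        "vendor_name": eeprom[20:36].decode("ascii", "replace").strip(),
        "vendor_pn": eeprom[40:56].decode("ascii", "replace").strip(),
        "vendor_sn": eeprom[68:84].decode("ascii", "replace").strip(),
        "nominal_rate_gbps": eeprom[12] / 10,  # byte 12 is in units of 100 Mb/s
    }

# Hypothetical 96-byte dump for illustration only (not from a real module).
fake_eeprom = bytearray(96)
fake_eeprom[20:36] = b"ACME OPTICS     "
fake_eeprom[40:56] = b"SFP-10G-SR      "
fake_eeprom[68:84] = b"SN0123456789    "
fake_eeprom[12] = 103  # 10.3 Gb/s nominal signalling rate

print(decode_sfp_a0h(bytes(fake_eeprom)))
```

(The separate A2h page additionally carries the live diagnostics such as TX/RX power and temperature, which is what tools read when you check optic health.)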

 

In the end, 100 Gb/s networking is an interesting thing, and for video editing it should be an advantage compared to what plebeian 10 Gb/s offers. Though personally, I would go with 2 links per workstation; after all, you have the ports on the switch....


Great video. This was fun, and I can't wait to see him install it in the server room. There hasn't been a server upgrade video in a while.

What is actually supposed to go here? Some people put their specs, others put random comments or remarks about themselves or others, and there are a few who put cryptic statements.


Oh God, not a Dell switch.

 

Edit:

No Linus, you do not have dual 750W power supplies to "run the fiber optic transceivers that physically light up the laser that shoots out of here and goes into the NIC on the other end".

An SFP uses something like 1.5 watts of power on average, often even less. Even if you populate all 48 ports that's still less than 100 watts.

If I remember correctly, the SFP+ spec only allows up to 1.5 watts of power, and that's why 10Gbps copper SFPs aren't really a thing. 
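For scale, a minimal power-budget sketch using that ~1.5 W per-module figure (the fully populated 48-port count is an assumption):

```python
# Rough optics power budget: even a fully populated 48-port switch spends
# well under 100 W on the SFP modules themselves.

watts_per_sfp = 1.5      # generous per-module figure from the post above
populated_ports = 48     # assumed fully populated front panel

optics_power = watts_per_sfp * populated_ports
print(f"~{optics_power:.0f} W for all optics combined")   # ~72 W
```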

 

 

 

3 hours ago, Nystemy said:

For short runs, fiber frankly only adds extra unreliability from a system standpoint. (Fiber is notorious for dirt ingress degrading the connection by a noticeable amount; even if a connector looks clean to the eye, it can still be dirty. Electrical connections care a lot less about dust.)

Dude... What? My experience is the exact opposite, and I have plugged in quite a bit of DAC and SFPs in my days.

I've seen more DAC cables fail than SFPs, and that's with fewer DAC cables installed than SFPs, not to mention they are a pain in the ass to install because they are so stiff. Plus they perform worse (in terms of latency). There is no reason to use DAC cables except maybe price.

Doing good cable management is way easier with fiber cables than DAC cables as well.

 

Getting a good signal with fiber isn't difficult in a data center or like what Linus has, a closet with a rack in it. Just clean the connector, plug it in and check the RX and TX values. Once the cable is there dirt won't get in and you can just forget about it.

 

But what I don't get is why you bring this up when nobody in the thread has mentioned DAC cables, nor does Linus use it in the video. They used MM fiber.


Aggregating connections can make sense, more so if your servers and switch are sitting shotgun in the same rack.

 

Case in point: the Supermicro 6049GP-TRT. You could throw 20 Honey Badgers in that thing and link the two onboard 10GbE ports. It may not be practical, but swap out JUST ONE of those cards for a 200Gb/s adapter going to your switch and you're flying.

 

For a home setup where longer distances are in play, you could run multiple Cat5e lines, although it's hard to beat a single 10GbE link for convenience. Now if only Cat7 were as simple to work with as Cat5e and 10GbE switches for home use were more accessible...


4 hours ago, Nystemy said:

Also, the reason RJ45 connections need more processing than SFP modules is how simple SFP signaling is compared to multi-gigabit Ethernet over RJ45.

This is not a concern, and I do not know why Linus brought this up. The only processing is EC and signalling, and that happens within the ASIC tied to the port, before the switching ASIC. That processing is not tied to the switch ASIC and CPU, and I would like to know where Linus got that information from. The two are not at all related.

 

The actual reason it's not copper is that 10gig copper runs very hot, and even more so with copper SFP+. 24/48-port enterprise 10gig-and-up switches will be SFP-only just from a heat and power standpoint, and even then they have a limit on how many copper SFPs can be used in the switch.

 

1 hour ago, LAwLz said:

Once the cable is there dirt won't get in and you can just forget about it.

Fiber is more robust than people are making it out to be. It takes a fair bit of dust before errors build up, and even then a few clicks with the cleaner and that's it. Of the issues I run into with customers, DACs are almost always related.

 

Quote

why 10Gbps copper SFPs aren't really a thing.

Copper SFP+ are most definitely a thing. They're hot, and the length is cut down to a few meters because of it.


36 minutes ago, LAwLz said:

Dude... What? My experience is the exact opposite, and I have plugged in quite a bit of DAC and SFPs in my days.

I've seen more DAC cables fail than SFPs, and that's with fewer DAC cables installed than SFPs, not to mention they are a pain in the ass to install because they are so stiff. Plus they perform worse (in terms of latency). There is no reason to use DAC cables except maybe price.

Doing good cable management is way easier with fiber cables than DAC cables as well.

 

Getting a good signal with fiber isn't difficult in a data center or like what Linus has, a closet with a rack in it. Just clean the connector, plug it in and check the RX and TX values. Once the cable is there dirt won't get in and you can just forget about it.

 

But what I don't get is why you bring this up when nobody in the thread has mentioned DAC cables, nor does Linus use it in the video. They used MM fiber.

In regards to reliability and stiffness:
DAC cables have a minimum bend radius, typically stated in the datasheet. Bend the cable sharper than that and it has a large risk of failing, so this needs to be taken into consideration when doing cable management; otherwise the cable will likely fail. DAC cables aren't alone in this either; most fiber cables are typically rated for bends no sharper than a 2-inch radius, though some survive far sharper bends than that.

 

In regards to latency, have you actually measured it? (Not that the latency difference matters; packet processing has a far larger impact on latency, even in an unbuffered switch.)
The fiber module itself tends to introduce some latency, especially QSFP ones that multiplex data over a single fiber.
And then there is propagation speed in fiber compared to copper: the typical propagation speed in fiber is around 16-20 cm/ns, while most coaxial cables are between 20-28 cm/ns. (DAC cables are practically coax from a signaling point of view.)
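To put that propagation difference in perspective, a quick sketch over an assumed 3 m in-rack run using the speeds quoted above:

```python
# Propagation delay difference between fiber (~20 cm/ns) and DAC/coax
# (~25 cm/ns) over a short in-rack run.

run_length_cm = 300    # assumed 3 m in-rack cable run
fiber_speed = 20       # cm/ns, from the figures quoted above
copper_speed = 25      # cm/ns

fiber_delay_ns = run_length_cm / fiber_speed     # 15 ns
copper_delay_ns = run_length_cm / copper_speed   # 12 ns
print(f"difference: {fiber_delay_ns - copper_delay_ns:.1f} ns")  # ~3 ns
```

A few nanoseconds either way, which is lost in the noise next to packet processing.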

 

In regards to dirt on fiber connections:

Yes, in a data center it isn't generally a major issue, especially if one has appropriate cleaning tools. But most people running a fiber or two frankly don't have the tools for it, and this is why it can be worthwhile to inform people that a more dirt-resistant option exists, at least for shorter runs.


And to answer "But what I don't get is why you bring this up when nobody in the thread has mentioned DAC cables, nor does Linus use it in the video":
That is precisely why. Nobody has said a thing about it, even though it is appropriate for the application, and in some cases preferred.


1 hour ago, LAwLz said:

Dude... What? My experience is the exact opposite, and I have plugged in quite a bit of DAC and SFPs in my days.

I've seen more DAC cables fail than SFPs, and that's with fewer DAC cables installed than SFPs, not to mention they are a pain in the ass to install because they are so stiff. Plus they perform worse (in terms of latency). There is no reason to use DAC cables except maybe price.

Doing good cable management is way easier with fiber cables than DAC cables as well.

Yep, totally agree. DACs get very stressed at each end when you have to plug/unplug them or the ones around them. Cable management and density are a huge problem with them (even the newer thin ones are still no match for OM3/OM4 fibre). It's not even about price: FS module pricing when buying in bulk is very good, and as long as you aren't a gorilla you can reuse the cables and modules as much as you like, whereas no matter how careful you are with DAC cables, they will fail if you repeatedly remove/reuse them.

 

I hate DAC cables. I got work to buy fibre replacements for everything; that's how much I hate DAC.


3 minutes ago, mynameisjuan said:

This is not a concern, and I do not know why Linus brought this up. The only processing is EC and signalling, and that happens within the ASIC tied to the port, before the switching ASIC. That processing is not tied to the switch ASIC and CPU, and I would like to know where Linus got that information from. The two are not at all related.

 

The actual reason it's not copper is that 10gig copper runs very hot, and even more so with copper SFP+. 24/48-port enterprise 10gig-and-up switches will be SFP-only just from a heat and power standpoint, and even then they have a limit on how many copper SFPs can be used in the switch.

I never stated that the Ethernet processing happens in the main switching ASIC, though. On some switches it does, but on others it is handled by the line drivers themselves; it depends a bit on the switch. Either way, it is the line drivers that tend to run warm...

But the reason Linus brought it up was mainly that he thought the switch had very little cooling for its performance, and he then mostly made an off-the-cuff comment about the fact that most Ethernet switches tend to have a large heatsink down by the ports, while this switch doesn't.

The Switch "could" have 8P8C ("RJ45") ports, though, it currently is a bit exotic to run 25 Gb/s over Ethernet.

Not to mention that this switch isn't really all that new....


31 minutes ago, Nystemy said:

DAC cables have a minimum bend radius, typically stated in the datasheet. Bend the cable sharper than that and it has a large risk of failing, so this needs to be taken into consideration when doing cable management; otherwise the cable will likely fail.

They always fail at the ends, like every cable ever. The anti-stress/anti-split boots just don't work, if the cables even have them, which most do not. If you have worked with dual ToR 48-port switches full of DAC cables, you will know how literally impossible it is to cable manage that without stressing the cables at the ends, where the cable goes into the SFP connector.

 

Bend radius just isn't the issue or why they fail, and because of how rugged the cables are, you can never tell one is internally damaged until it just doesn't work.

 

It's hard to express just how much I do not like working with DAC in high density. You can never be sure which cable is which, getting to the cable labels is difficult, and you risk pulling out other cables along with the one you are removing, just from friction with the other cables and the poor latching strength of the SFP ports in switches. In every single way DAC is worse than fibre, unless you need the cable protection DAC offers for some reason, which you shouldn't need inside a server rack.


6 minutes ago, Nystemy said:

But the reason Linus brought it up was mainly that he thought the switch had very little cooling for its performance.

He specifically states "there is more processing involved in the switch side of things", which is not true.

 

As far as less cooling goes, the SFP and its cage are the cooling, with air being pulled in and around them to dissipate the heat. There is actually more cooling.


7 minutes ago, leadeater said:

dual ToR 48-port switches full of DAC cables

There should be a termination for whoever made that decision. Working with and around even a handful of DACs is a nightmare, and from a support standpoint, with their failure rates, even more so.

 

There is a reason SM optics are becoming the only option in the enterprise, as companies ditch MM and DAC while the SM cost difference diminishes. No bulk, no replacing an entire DAC, no replacing MM runs when upgrading past what MM can do; patch cables are cheap if a kink occurs, less heat, less power... Yeah, I can back up your hatred for DACs, but I'm throwing MM in the same pile.


7 minutes ago, leadeater said:

They always fail at the ends, like every cable ever. The anti-stress/anti-split boots just don't work, if the cables even have them, which most do not. If you have worked with dual ToR 48-port switches full of DAC cables, you will know how literally impossible it is to cable manage that without stressing the cables at the ends, where the cable goes into the SFP connector.

 

Bend radius just isn't the issue or why they fail, and because of how rugged the cables are, you can never tell one is internally damaged until it just doesn't work.

 

It's hard to express just how much I do not like working with DAC in high density. You can never be sure which cable is which, getting to the cable labels is difficult, and you risk pulling out other cables along with the one you are removing, just from friction with the other cables and the poor latching strength of the SFP ports in switches. In every single way DAC is worse than fibre, unless you need the cable protection DAC offers for some reason, which you shouldn't need inside a server rack.

The minimum bend radius applies all the way along the whole cable, even into the SFP module itself.

The root cause of your issues is a lack of proper strain relief, along with poorly implemented pull tabs for the lock.
Not to mention that a lot of sources on minimum bend radius get it wrong and interpret it as a diameter.
And then we also have the issue of people simply being poor at cable management.
[Attached image: tightly cable-managed DAC cables plugged into a switch]
This picture, for example, is far too tightly cable managed; some of the strain reliefs are overburdened by the tension on the cable (the third one on the bottom left is particularly tight).
The pull tabs for the locks are also fairly pathetic; they could be twice as long without issue, since the minimum bend radius is longer than that regardless.

Yes, a lot of DAC cables are made with questionable quality, but that doesn't mean all DAC cables are bad.
It just means one should stop buying DAC cables with pathetic pull tabs and non-existent strain reliefs, or complain about the lack of proper strain relief when faced with it.

Fiber cables tend to have longer and more properly implemented strain reliefs, not to mention that people are also more careful when managing them.
Though some strain reliefs are almost a joke: either a short hard stub (like on a lot of USB cables...) or something so flimsy it might as well not be there...
A good strain relief should taper out along its length, so as to slowly transition from the stiff connector/module to the more bendable cable; typically this should happen over a distance exceeding 45 degrees of arc along the minimum bend radius. The root of the strain relief should also be stiff enough not to bend on its own, otherwise the failure will just happen there instead.
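As a rough worked example of that taper rule (assuming the 2-inch minimum bend radius mentioned earlier in the thread):

```python
import math

# Taper length needed to cover 45 degrees of arc at a 2-inch (~5 cm)
# minimum bend radius: arc length = radius * angle (in radians).

min_bend_radius_mm = 50.8            # 2-inch radius, as mentioned above
angle_rad = math.radians(45)

taper_length_mm = min_bend_radius_mm * angle_rad
print(f"strain relief should taper over ~{taper_length_mm:.0f} mm")  # ~40 mm
```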

This is true for any cable, be it DAC, Fiber, Coax, USB, regular network cables, or just a power cord.

In the end, it isn't the fault of the DAC cable; it's the fault of the penny-pinching company making it, which doesn't feel like spending 10-20 cents more on some plastic to make their product last 10x as long.

