
We just leveled up HARDCORE - Fibre Networking Adventure

jakkuh_t

Sooo just to be clear… What is Fibre good for?

 

A spinning disk hard drive gets like 100 MB/s… NVMe and SSDs are faster, but not 25 GB.

 

DDR3 RAM goes almost that fast.

 

So fibre optic cabling is only used for virtual machines? Or some kind of application that can essentially read directly from a server’s RAM?

 

Because I thought SAN RAID arrays etc. could benefit from fibre-optic cabling, but I don't see how that's possible. I would love to see a video with fibre hard drives, if that's what's being used. Or maybe the SANs just use a huge cache on a super-fast disk and still run fibre optics to the clients.

 

And I'm honestly just asking, because I saw they posted 12 GB/s read and 1 GB/s write, and I'm wondering what's at the end of the server to let them do that. Is this an iperf stat? Or reading/writing from a hard disk?

 

It would also be good to link to the PCIe cards (or whatever) that give each server its fibre connection. It seems like a lot of important info is missing, although the rest was pretty funny to watch.


21 minutes ago, mrhempman said:


Imagine a datacenter with 42U cabinets: you have 24-48 1U servers (there could be half-size servers, so two servers per 1U), each with a 10 Gbps network card, going into a switch with 24-48 x 10 Gbps ports and 1-2 x 100 Gbps uplinks. That 100 Gbps is shared among the 24-48 servers; each server could peak at 10 Gbps, but if more than ten servers peak at 10 Gbps at the same time, their throughput may drop.
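
A rough back-of-the-envelope sketch of that sharing (in Python, assuming the higher end of those figures: 48 servers with 10 Gbps cards behind 2 x 100 Gbps uplinks):

```python
# Oversubscription math for the cabinet described above
# (assumed figures: 48 x 10 Gbps servers, 2 x 100 Gbps uplinks).
servers = 48
server_nic_gbps = 10
uplink_gbps = 2 * 100

total_demand_gbps = servers * server_nic_gbps          # 480 Gbps of possible demand
oversubscription = total_demand_gbps / uplink_gbps     # 2.4:1

print(f"Oversubscription ratio: {oversubscription:.1f}:1")
print(f"Fair share per server if everyone peaks at once: {uplink_gbps / servers:.2f} Gbps")
```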

 

You won't run Ethernet cable from the cabinet's switch to your datacenter router(s); the router may be 50-100 meters away in a corner of the room somewhere, and you don't have Ethernet cable that can do 100 Gbps. You'd use two separate fibres, and you'd probably connect one 100 Gbps port to one router and the other 100 Gbps port to a backup router, for redundancy (if one router craps out, you just power on the other router or switch to the live backup).

 

What could you use 10 Gbps for? For streaming, for example... imagine you have a popular FM radio station in your country, which has 10k listeners during peak hours or its most popular shows... that's 96 kbps x 10,000 = 960,000 kbps, or 960 Mbps. You wouldn't use a 1 Gbps network card, you'd use a 10 Gbps one, or you'd use multiple servers and send listeners to a random one.

But radio streaming is easy... do the same math for live broadcasts, for example when Linus does a live stream on YouTube where they can end up with 30k+ simultaneous viewers, each watching a stream with a 10-20 Mbps bitrate.
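
Here's the same arithmetic as a small Python sketch (the viewer counts and bitrates are just the figures quoted above, not measured numbers):

```python
# Aggregate egress needed for the two streaming examples above:
# 10,000 radio listeners at 96 kbps, and 30,000 live video viewers at 10-20 Mbps.
def aggregate_gbps(viewers: int, bitrate_mbps: float) -> float:
    """Total outbound bandwidth in Gbps for `viewers` streams at `bitrate_mbps` each."""
    return viewers * bitrate_mbps / 1000

radio = aggregate_gbps(10_000, 0.096)          # 96 kbps = 0.096 Mbps
video_low = aggregate_gbps(30_000, 10)
video_high = aggregate_gbps(30_000, 20)

print(f"FM radio stream: {radio:.2f} Gbps")                       # ~0.96 Gbps
print(f"Live video:      {video_low:.0f}-{video_high:.0f} Gbps")  # 300-600 Gbps
```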

 

Right now, the Pat McAfee Show has 56k viewers on YouTube:

 

[screenshot of the live viewer count]

 

 


Thanks. That makes a lot of sense. I could see why fibre would be used in that scenario.

 

But in Linus's case, he streams to YouTube, which then streams to clients. So YouTube would be the one using fibre in this scenario, right?

 

What about examples on the backend intranet like Linus probably has? Because personally I only have gigabit internet to the outside world anyway. But I'm trying to build a SAN and I would like to do a lot of work with videos. I'm mainly interested in being able to load a video onto another computer super fast. But unless that video is constantly held in DDR3 memory or maybe on NVMe, is there really an application for fibre here?

 

 


1 hour ago, mrhempman said:

Thanks. That makes a lot of sense. I could see why fibre would be used in that scenario.

 

But in Linus's case, he streams to YouTube, which then streams to clients. So YouTube would be the one using fibre in this scenario, right?

Linus uploads the video to a YouTube server in a datacenter close to him (either in Canada, or somewhere near it).

YouTube automatically pushes the upload in real time to several of their datacenters around the world (US, Europe, Asia, South America, etc.), and from those, a lot of servers take the source, convert it to various resolutions and serve it to users... between you and Linus there are multiple servers handling what he uploads.

So yeah, when you connect to a live stream, you're most likely connecting to a random server out of hundreds in a datacenter physically close to you, which copies the video from other YouTube servers further away (Google has its own fibre cables, so they can move data between datacenters).

 

1 hour ago, mrhempman said:

What about examples on the backend intranet like Linus probably has? Because personally I only have gigabit internet to the outside world anyway. But I'm trying to build a SAN and I would like to do a lot of work with videos. I'm mainly interested in being able to load a video onto another computer super fast. But unless that video is constantly held in DDR3 memory or maybe on NVMe, is there really an application for fibre here?

 

 

Linus has a storage server with lots of hard drives and a server with a lot of SSDs which holds the content being actively worked on (they record and edit multiple videos at the same time). The editors don't (or only rarely) store content on their local computers; they retrieve it directly from these servers. For that - to be able to smoothly browse the content in a video editor, to quickly jump to a location, for snappiness - it makes sense to have 10 Gbps or higher network cards. I think they're now on 25 Gbps.

 

An M.2 SSD can read data at up to 7 GB/s and write at up to 5-6 GB/s, in the case of a PCIe 4.0 NVMe SSD.

 

10 Gbps is about 1.2 GB/s; it can be achieved with a bunch of SATA SSDs in RAID if you want to, it's nothing special.

25 Gbps is around 3 GB/s... it can be achieved with a $50 NVMe SSD using PCIe 3.0.

40 Gbps and higher cards have other benefits, like DMA/RDMA transfers and on-card hashing/encryption - features that reduce CPU usage.

 

 


4 hours ago, mrhempman said:


Outside of what mariushm talked about, a big part of why we covered fibre-optic cables in my electronic properties class is that they handle long distances, avoid some of the electromagnetic interference that copper cables suffer from, and are lighter. The optical cable itself can also be less expensive than the equivalent copper cable; the real big cost is the conversion from electrical to optical and back.


Damn, yeah, it looks like I was thinking the differences are bigger than they are because I was confusing GB/s with Gb/s… I put out a lot of bad information, so let me try to correct it once and for all.

 

You're only going to be as fast as the slowest link in the chain. I didn't analyze CPU speed or PCIe lanes, but an x8 PCIe 3.0 card (very important to check) should be faster than all of these network connections. But SATA 3 maxes out at 0.6 GB/s.

 

So putting everything in GB/s…

 

Gigabit internet: 0.135 GB/s

10-Gigabit internet: 1.35 GB/s

16-Gigabit fiber: 2.0 GB/s

25-Gigabit fiber: ~3.0 GB/s

 

Spinning disk hard drives are about 0.1 GB/s -> Gigabit internet maxes out here

 

SSD drives vary, but let's say they're 0.4 GB/s -> you'd want at least 10-Gigabit (1.35 GB/s) internet for these*

 

NVMe drives are about 2 to 3.5 GB/s -> seems like the fastest possible fiber would be necessary here

 

RAM is about 6 GB/s or higher -> you'd also want the fastest possible fiber here for sure. I think this would be the one if you're looking at a VNC server.

 

*Now, if multiple people are accessing the same drive at the same time, you'd want faster than this, like @mariushm said.

 

*Likewise, if you're running cable over a long distance, especially for 10 Gigabit, it's smart to switch to optical fibre for the reasons @Ultraforce said. I think 10 Gigabit over copper loses performance at about 40 meters.


You're a bit misinformed...

 

RAM is around 20+ GB/s. For example, 3200 MHz RAM (1600 MHz x 2 transfers per clock) is 64 bits per transfer x 3,200,000,000 transfers per second / 8 (8 bits in a byte) = 25,600,000,000 bytes/s = 25,600 MB/s = 25.6 GB/s.

It's faster in dual channel, etc.
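
Here's that calculation written out as a small Python sketch (3200 MT/s, single 64-bit channel, as in the example above):

```python
# DDR bandwidth arithmetic from above: DDR-3200, one 64-bit channel.
io_clock_hz = 1_600_000_000    # 1600 MHz I/O clock
transfers_per_clock = 2        # DDR = double data rate -> 3200 MT/s
bus_width_bits = 64            # one memory channel
channels = 1                   # double this for dual channel

bytes_per_second = io_clock_hz * transfers_per_clock * bus_width_bits // 8 * channels
print(f"{bytes_per_second / 1e9:.1f} GB/s per channel")   # 25.6 GB/s
```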

 

The current 4-16 TB mechanical drives peak at around 260-300 MB/s... on the innermost tracks they get lower throughput, but should still do at least 150 MB/s... but it's not sequential speed that's the problem with mechanical drives, it's random access.

 

One or two users could connect to your computer through network shares, or say to an FTP server you host, start a transfer and saturate your 1 Gbps Ethernet connection, and your hard drive would keep up. But have 5-10 users each transfer a different file (e.g. 5-10 videos) and the mechanical drive's performance would tank; it may not sustain 1 Gbps due to the constant seeking between those 5-10 different files.

 

SATA is 6 Gbps, but it uses 8b/10b encoding, meaning every 8 bits of data are sent as 10 bits on the wire (the extra bits are there for signal integrity, not user data).

So the maximum throughput is 6 x 0.8 = 4.8 gigabits per second. There are 8 bits in a byte, so 4.8 Gbps / 8 = 600,000,000 bytes per second, which is either 600 MB/s or about 572 MiB/s (if you use multiples of 1024).

Due to protocol overhead, you can expect a maximum of around 560 MB/s of throughput from a SATA 3 device.
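
The same reasoning as a quick Python check:

```python
# SATA 3 usable throughput, following the 8b/10b reasoning above.
line_rate_gbps = 6.0
encoding_efficiency = 8 / 10     # 8b/10b: 8 data bits for every 10 bits on the wire

data_gbps = line_rate_gbps * encoding_efficiency       # 4.8 Gbps
data_mb_s = data_gbps * 1e9 / 8 / 1e6                  # 600 MB/s
data_mib_s = data_gbps * 1e9 / 8 / (1024 ** 2)         # ~572 MiB/s

print(f"{data_mb_s:.0f} MB/s ({data_mib_s:.0f} MiB/s) before protocol overhead")
```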

 

1 Gbps (gigabit per second) = 0.125 gigabytes per second = 125 MB/s - realistically you can expect around 122-124 MB/s of throughput after TCP/IP overhead.

10 Gbps = 10 x 1 Gbps = 1,250 MB/s - realistically expect 1,200-1,220 MB/s.

Note (rare, but not unheard of): some onboard network cards are connected to the system through a single PCIe 3.0 lane, which caps the link at around 985 MB/s - the most one PCIe 3.0 lane can do - and after the overhead of PCIe data packets, a 10 Gbps card on a single lane could still achieve around 920-950 MB/s.

25 Gbps = 3.125 GB/s = 3,125 MB/s

40 Gbps = 5 GB/s
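
If you want the raw conversions in one place, a few lines of Python cover it (line rate only; real-world numbers are a few percent lower, as noted above):

```python
# Line-rate Gbps -> MB/s, matching the list above (no TCP/IP or framing overhead).
def gbps_to_mb_s(gbps: float) -> float:
    return gbps * 1000 / 8    # 1 Gbps = 125 MB/s

for rate in (1, 10, 25, 40, 100):
    print(f"{rate:>3} Gbps = {gbps_to_mb_s(rate):>6.0f} MB/s")
# 1 -> 125, 10 -> 1250, 25 -> 3125, 40 -> 5000, 100 -> 12500 MB/s
```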

 

PCIe maximum theoretical bandwidth - I've highlighted the theoretical maximums for M.2 storage (if using 4 lanes):

Note there's overhead, as I said; losing a few percent of bandwidth is not uncommon. That's why, for example, most PCIe 3.0 SSDs peak at around 3.7 GB/s when reading: the difference of a few hundred MB/s is lost to protocol overhead.

 

[attached: table of PCIe theoretical bandwidth by version and lane count, with the x4 column highlighted]
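
Since the table doesn't survive as text, here are the commonly quoted per-lane figures as a sketch (standard published values, not taken from the attachment):

```python
# Approximate PCIe theoretical bandwidth per lane, in GB/s.
# PCIe 1.0/2.0 use 8b/10b encoding, 3.0+ use 128b/130b; that's already factored in.
per_lane_gb_s = {"1.0": 0.25, "2.0": 0.5, "3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

for gen, lane in per_lane_gb_s.items():
    print(f"PCIe {gen}: {lane:.3f} GB/s per lane, {lane * 4:.2f} GB/s at x4 (M.2)")
# e.g. PCIe 3.0 x4 is about 3.94 GB/s, PCIe 4.0 x4 about 7.88 GB/s, before packet overhead
```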

 

 

