EJMB

Internet speed record shattered at 178 terabits per second


Wait, there are existing 35 terabit deployments? Who the fuck has that, literal network backbone providers?


Coming Soon: MOAR COARS: 5GHz Confirmed Black Edition™ The Build

 


I have a feeling that the entire Netflix catalog is larger than 22TB.


MacBook Pro 16

i9-9980HK - Radeon Pro 5500m 8GB - 32GB DDR4 - 2TB NVME - many dongles

11 hours ago, S w a t s o n said:

Wait, there are existing 35 terabit deployments? Who the fuck has that, literal network backbone providers?

Using DWDM, yes. Data center interconnects, for example, use this as well.
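For a rough sense of how DWDM gets to numbers like 35 terabits on one fiber: aggregate capacity is just the number of wavelengths times the per-channel rate. The channel count and rate below are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope DWDM capacity: aggregate throughput on one fiber is
# (number of wavelengths, "lambdas") x (per-wavelength data rate).
def dwdm_capacity_tbps(num_lambdas: int, gbps_per_lambda: int) -> float:
    """Aggregate capacity of a single fiber, in Tb/s."""
    return num_lambdas * gbps_per_lambda / 1000

# Illustrative long-haul system: 88 C-band channels at 400 Gb/s each.
print(dwdm_capacity_tbps(88, 400))  # 35.2 (Tb/s), the ballpark of the
                                    # 35-terabit deployments mentioned
```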


Current Network Layout:

Current Build Log/PC:

Prior Build Log/PC:

1 hour ago, LAwLz said:

Does anyone know if they mentioned range anywhere? 

Based on the repeater mention at the end of one article, I would imagine this could do 40 km to 100 km, depending on the setup. Being a lab demonstration, though, they probably did it over much shorter distances.
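A crude link-budget check makes distances in that range plausible. The attenuation figure is the commonly quoted ~0.2 dB/km for standard single-mode fiber; the power budget and margin are assumptions for illustration, not numbers from the experiment.

```python
# Rough unamplified-reach estimate for a fiber link: reach is limited by
# how much loss the budget between transmitter and receiver can absorb.
ATTENUATION_DB_PER_KM = 0.2  # typical loss for standard single-mode fiber
POWER_BUDGET_DB = 28.0       # assumed TX power minus RX sensitivity
MARGIN_DB = 3.0              # assumed allowance for splices/connectors

max_reach_km = (POWER_BUDGET_DB - MARGIN_DB) / ATTENUATION_DB_PER_KM
print(f"Unamplified reach: ~{max_reach_km:.0f} km")  # ~125 km before a
                                                     # repeater/amplifier
```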




Very cool but does that mean my internet plan gets a discount?


Specs: Motherboard: Asus TUF X470-PLUS Gaming (yes, I know it's poor, but I wasn't informed) RAM: Corsair Vengeance LPX DDR4 3200MHz CL16-18-18-36 2x8GB

CPU: Ryzen 5 3600 @ 4.1GHz Case: Antec P8 PSU: G.Storm GS850 Cooler: Antec K240 with two Noctua industrialPPC-3000 PWM

Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 Ti Black Edition @ 2GHz

1 hour ago, williamcll said:

Very cool but does that mean my internet plan gets a discount?

By discount you mean you donate more to your ISP for no change in service? Then yes!



Posted · Original Poster
On 9/5/2020 at 11:30 PM, Lurick said:

I was debating posting this, but I'm glad someone did, as it's really awesome to see this stuff coming around. The Australian fiber comb is what really excites me, as it could mean huge shifts in bandwidth for data center applications and really boost capacity with little or no change to the fiber itself.

To put this into perspective as to how many (non-DWDM) network ports this is:

800Gb ports = 223 total ports

400Gb ports = 445 total ports

100Gb ports = 1780 total ports

 

At 36 ports of 100Gb per linecard, you're looking at just over three Nexus 9516 chassis fully maxed out with linecards. The 9516 is a 16-slot, 21RU chassis (~3 feet tall). Power draw for ONE fully loaded chassis is around 19.5kW under max load, so you're looking at something like 65kW to push that much bandwidth across three chassis. Note that this example is not using DWDM, which can put multiple lambdas on a single fiber and compact things down; it's just for a sense of what this could cost in power and footprint from a data center perspective.

 

If you use something with DWDM, you could probably get it into a single chassis and drop the power draw and footprint down a lot; it just depends on the number of lambdas you get on a single fiber as to how many ports/linecards you would need. If you assume around 100 lambdas totaling 10Tb/s on a single fiber, you're looking at 18 fibers, so you can see how things shrink down.
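The arithmetic above can be sketched out in a few lines. All figures (port speeds, 36 ports per linecard, 16 slots and ~19.5 kW per Nexus 9516, ~10 Tb/s per DWDM fiber) come from this post; the script just reproduces the estimates.

```python
import math

TOTAL_GBPS = 178_000  # the 178 Tb/s record, expressed in Gb/s

# Non-DWDM port counts at common port speeds.
for port_gbps in (800, 400, 100):
    print(f"{port_gbps}Gb ports: {math.ceil(TOTAL_GBPS / port_gbps)}")
# 800Gb: 223, 400Gb: 445, 100Gb: 1780

# Chassis and power estimate: 36x100Gb ports per linecard, 16 slots per
# Nexus 9516 chassis, ~19.5 kW per fully loaded chassis.
ports_100g = math.ceil(TOTAL_GBPS / 100)
chassis = ports_100g / (36 * 16)
print(f"~{chassis:.2f} chassis, ~{chassis * 19.5:.0f} kW")  # ~3.09, ~60 kW

# DWDM case: ~100 lambdas totaling ~10 Tb/s per fiber.
fibers = math.ceil(TOTAL_GBPS / 10_000)
print(f"{fibers} fibers with DWDM")  # 18
```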

You should've been the one to post the topic. You're much more knowledgeable than me or most people about things like this! 😁

Posted · Original Poster
On 9/6/2020 at 1:04 AM, Centurius said:


178 Tbps is roughly 22.3 TB per second, and there is no way the Netflix library is that small. Even the Open Connect appliances they install in ISP datacenters for faster speeds to end users hold ten times that, and those don't contain the entire library.

 

This isn't even remotely intended for widespread use. These speeds are possible only under incredibly specific conditions, and at a price no consumer or business can afford. At best, you'll see this used for site-to-site connections between research institutes and the like, where they can actually utilize that data rate. Most of the world is still on 10 Mbps or less; the consumer high end has only in the last year or two started moving past 1 Gbps for home networks, and less than that for internet. Even the largest companies often don't have more than a 400 Gbps uplink, and getting even that costs millions. By the time a regular consumer can download at 178 terabits, we'll likely have colonies on Mars.

I wasn't referring to applications at the average consumer level. I'm talking about cloud computing used by national governments and corporations, and the unfortunate possibility of everything (and I MEAN EVERYTHING 😬😰: your personal info, the routes your car takes through the city, all the cameras and what they see, all public and private records; if it can be reached over the internet, then all that data about everyone, and its electronic record, could end up stored in "hubs") being shared between hubs within countries connected by links like these. I just feel like it might set a dangerous precedent. I might be overthinking it, though; these scientists were just transferring scientific data and knowledge, but governments and massive multinational corporations (with evil intent in mind) might not be so kind in using technologies like these.
You could even argue that technologies like these could let everyone "control" everything from their own home, using speeds like these shared across multiple cities.
TL;DR: There's a multitude of applications for this, and viewing it as just something for the average consumer is not what I was going for. 😅

On 9/4/2020 at 10:42 PM, Trik'Stari said:

Can we please not do cloud for everything?

 

The best argument will always be "what if it goes down".

Imagine a home network with a dedicated server room connected via these cables, with SSDs capable of handling them. I don't think there's a port capable of that transfer speed on the consumer market yet, is there?

I mention home networks because of off-grid setups and generators.

On 9/13/2020 at 4:00 AM, EJMB said:

I wasn't referring to applications at the average consumer level. I'm talking about cloud computing used by national governments and corporations, and the unfortunate possibility of everything (and I MEAN EVERYTHING 😬😰: your personal info, the routes your car takes through the city, all the cameras and what they see, all public and private records; if it can be reached over the internet, then all that data about everyone, and its electronic record, could end up stored in "hubs") being shared between hubs within countries connected by links like these. I just feel like it might set a dangerous precedent. I might be overthinking it, though; these scientists were just transferring scientific data and knowledge, but governments and massive multinational corporations (with evil intent in mind) might not be so kind in using technologies like these.
You could even argue that technologies like these could let everyone "control" everything from their own home, using speeds like these shared across multiple cities.
TL;DR: There's a multitude of applications for this, and viewing it as just something for the average consumer is not what I was going for. 😅

Again, even those don't have access to that kind of scale. Easily one of the largest corporations in the world in terms of datacenter usage is Amazon, and at most internet exchanges they have 100 Gbps uplinks, with a few key IXs having 400 Gbps uplinks. Most government departments are connected to these locations with 10 Gbps ports, and often just 1 Gbps. The largest hub of exchanges, the Equinix Exchange, has a maximum throughput of 18 Tbps. That's every single one of their datacenters and every single peering connection, basically most of the internet traffic in the world, and it's barely one tenth of the single connection these scientists demonstrated. So I stand by my timeline: Mars before this hits any kind of mainstream.
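To put those numbers side by side (all figures are the ones cited in this thread, not independently verified):

```python
# Scale comparison between the 178 Tb/s record and the real-world
# aggregates cited above.
RECORD_TBPS = 178.0
EQUINIX_TOTAL_TBPS = 18.0  # claimed max throughput of the whole exchange
UPLINK_400G_TBPS = 0.4     # a single 400 Gb/s IX uplink

print(f"Record vs. Equinix total: {RECORD_TBPS / EQUINIX_TOTAL_TBPS:.1f}x")  # ~9.9x
print(f"Record vs. one 400G uplink: {RECORD_TBPS / UPLINK_400G_TBPS:.0f}x")  # 445x
```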


My Build:


CPU: i7 4770k GPU: GTX 780 Direct CUII Motherboard: Asus Maximus VI Hero SSD: 840 EVO 250GB HDD: 2xSeagate 2 TB PSU: EVGA Supernova G2 650W

