
Easy 10x Network Speed Upgrade

1 minute ago, xnamkcor said:

How fast of a network would I need to keep up with these drive speeds?

 

[screenshot of drive speed benchmark results]

What's on the other end? Strong links in the chain are only held back by the weaker ones.


3 minutes ago, ARikozuM said:

What's on the other end? Strong links in the chain are only held back by the weaker ones.

The exact same hardware.


1 minute ago, xnamkcor said:

The exact same hardware.

A single 10GbE direct connection would bottleneck the disks the least.

How large do you expect most of your files to be?


18 minutes ago, xnamkcor said:

The exact same hardware.

10GbE locally, and 2.4GB/s internet if you're going out of your network.


2 hours ago, skywake said:

It really isn't. I'm all for 10Gbps becoming cheaper, but let's not kid ourselves. Most end users don't have any network storage and aren't transferring files across the network. And at this stage, even the ones that do are copying those files between HDDs, and therefore well under 1Gbps.

It's actually starting to make sense. Take a look at the datasheet of a WD Black 4TB drive: the speed is about 1.3Gbps for sequential operations. With SSDs you could argue that you usually transfer small files, but prices have come down.

I think the most likely reason why we still have 1Gb Ethernet is that storage has only recently become faster (and a lot faster). When 100Mbps was faster than the HDD inside your box, who felt the need for 1Gbps?

 

2 hours ago, skywake said:

And that person with the NAS and Blu-ray rips isn't even the average consumer. That person is ahead of the bell curve. The average consumer is more likely streaming videos over WiFi from the internet.

You are right. I wonder when streaming companies will finally be able to sell higher-bitrate content, given the fast pace of improvement of internet service providers' networks.

 

4 hours ago, Razor512 said:

The sad thing is that a 10GbE adapter doesn't cost much more to make than a 1GbE adapter, and the steep pricing is just companies price gouging.

2 hours ago, Razor512 said:

Think of it like a PC hardware version of Comcast.

1GbE sucks, but instead of putting some real effort into fixing it, the motherboard makers and router makers are just looking at us while doing this.

Yooo, wait a moment: why don't you first put some effort into thinking about the issue?

How can you say that a 10GbE adapter does not cost much more to make than a 1Gbps one? You have to build a controller that works 10 times (or more?) faster than the old one; someone has to pay the engineers who design those chips, build the required electronics, write the software for them... And right now, every 10GbE adapter you can buy targets the enterprise market, where every price gets an extra zero appended before the decimal point. I know this is not a perfect example, but how much does Intel charge you for a processor that is 10x faster than a $60 Pentium?

Making a single-port 10GbE adapter is one thing; making a 24-port switch is a completely different beast.

Here, take your calculator and work out how much time you have to process an Ethernet frame at 10Gbps (assume 1500-byte and 9000-byte frames). Now think about the switch: you'll never have all 24 ports at 100% utilization, so let's say only 10 ports are transferring at the full 10Gbps. How much time do you have now? (Hint: use nanoseconds.)
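If you want to check your answers, here's a minimal sketch of that arithmetic in Python (it ignores the preamble and inter-frame gap, and uses the post's simplification that one processing pipeline serves all 10 busy ports):

```python
# Per-frame processing budget at 10 Gbps line rate.
LINE_RATE_BPS = 10e9

for frame_bytes in (1500, 9000):
    frame_bits = frame_bytes * 8
    budget_1port = frame_bits / LINE_RATE_BPS   # seconds per frame, one port
    budget_10ports = budget_1port / 10          # 10 ports feeding one pipeline
    print(f"{frame_bytes}B frames: {budget_1port * 1e9:6.0f} ns/frame (1 port), "
          f"{budget_10ports * 1e9:5.0f} ns/frame (10 ports)")

# 1500B frames:   1200 ns/frame (1 port),   120 ns/frame (10 ports)
# 9000B frames:   7200 ns/frame (1 port),   720 ns/frame (10 ports)
```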

Now, remember that you need better error correction, because there are now 10^10 bits on the wire every second instead of 10^9. Can you afford to lose one in every billion?

Also, if this stuff is anything like what I've studied, you also have to consider that you must sample the signal on each wire at least twice as fast as the transmission rate (see the Nyquist-Shannon sampling theorem).

Now that you've measured all your signals, you have to make sense of them, rebuild the frame somewhere, and actually process it to decide what to do with it next, and the time you have available for all of this is the time you calculated before.

For comparison, let's take a hypothetical x86-64 CPU clocked at 4 GHz: if I didn't screw anything up, in the case of the 10-port switch and 9000-byte frames, you have about (not telling you) clock cycles to do everything. Let's keep things simple and assume that every (assembly) instruction requires 1 clock cycle, so you read/write a 64-bit chunk from/to memory in 1 clock (this is so impossible :P ), you add, compare, or subtract two 64-bit numbers in 1 clock, and so on. So, do all those cycles still look like time to spare?
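For readers who don't want to do the multiplication by hand, here's the same sketch carried one step further (the cycle count is simply the 720 ns budget from the previous sketch times the assumed 4 GHz clock):

```python
# Clock cycles available per frame: 10-port case, 9000-byte frames,
# using the post's assumption of one instruction per cycle.
CLOCK_HZ = 4e9
budget_s = 720e-9                    # per-frame budget computed earlier
cycles = budget_s * CLOCK_HZ
print(f"{cycles:.0f} cycles available per frame")   # 2880

# A 9000-byte frame is 1125 64-bit words; just copying it once
# (one read plus one write per word) already burns 2250 of those cycles.
```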

Bear in mind that this example can't exist in real life, not with x86: you can't use a CPU directly to measure signals on a wire, you have to use something like a microcontroller first (and no, not an Arduino, you need to go a lot faster).

Now I want to see your effort: show the results.


21 minutes ago, xnamkcor said:

How fast of a network would I need to keep up with these drive speeds?

 

Take your maximum values and convert them. You've measured MB/s (megabytes per second), while networking speeds are typically given in bits per second; since a byte is made of 8 bits, transferring 1 byte/s means transferring 8 bits/s. Applied to your case, you'd saturate a 10Gbps link.
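As a sketch (the drive speeds below are hypothetical examples, since the original benchmark screenshot isn't reproduced here):

```python
# Convert drive benchmark figures (MB/s) into the link speed they'd need (Gbps).
def mb_s_to_gbps(mb_s: float) -> float:
    return mb_s * 8 / 1000   # 1 byte = 8 bits; 1000 Mb = 1 Gb

for mb_s in (550, 1200, 2500):   # e.g. SATA SSD, older NVMe, fast NVMe
    print(f"{mb_s} MB/s needs about {mb_s_to_gbps(mb_s):.1f} Gbps on the wire")

# 550 MB/s needs about 4.4 Gbps
# 1200 MB/s needs about 9.6 Gbps
# 2500 MB/s needs about 20.0 Gbps
```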


3 minutes ago, fulminemizzega said:

I wonder when streaming companies will finally be able to sell higher-bitrate content, given the fast pace of improvement of internet service providers' networks.

That won't exceed 1GbE for a home user, unless they're working with resolutions not yet available to anyone, at 12-bit color, with Dolby Atmos (or equivalent) sound.

 

4 minutes ago, fulminemizzega said:

It's actually starting to make sense. Take a look at the datasheet of a WD Black 4TB drive: the speed is about 1.3Gbps for sequential operations. With SSDs you could argue that you usually transfer small files, but prices have come down.

A single WD Black drive won't see sequential read/write speeds without sequential read/write loads. In web environments, it often takes multiple home users accessing a site to produce that. That's also assuming web hosts are eager to spend money every time a new, faster drive comes out, which they aren't, at least not for a complete overhaul of all their systems.

 


For just $50 more you can get a managed 16x 10Gb switch, although it is currently in beta. I use it and it works pretty well, though DAC support is hit or miss until drivers are updated by the OEMs.

 

https://store.ubnt.com/beta


2 minutes ago, Drak3 said:

That won't exceed 1GbE for a home user, unless they're working with resolutions not yet available to anyone, at 12-bit color, with Dolby Atmos (or equivalent) sound.

Yes, sure, but the point I had in mind was that once they reach a high enough bit rate (for example 40Mbps, based on [1]), you won't be able to justify the time spent buying and ripping discs, because you wouldn't see a difference. If anything, this is another reason not to need 10GbE. But really, streaming is a bad use case for 10GbE if we're talking about a single user.

 

10 minutes ago, Drak3 said:

A single WD Black drive won't see sequential read/write speeds without sequential read/write loads. In web environments, it often takes multiple home users accessing a site to produce that. That's also assuming web hosts are eager to spend money every time a new, faster drive comes out, which they aren't, at least not for a complete overhaul of all their systems.

I think this is a bad example. In web environments you don't have the faintest idea of where your data is stored: it could be on a SAN with a bunch of HDDs, on a RAID array of SSDs, or spread across different datacenters. More often it's cached in RAM if you're after performance (see the ZFS ARC). And a web workload usually is not sequential, so a single HDD won't even saturate a 1Gb link.

The use case I had in mind was, for example, backups. The point I wanted to drive home was that even HDDs can sometimes exceed the speed of a 1Gbps link, and as they grow in size, this will get worse for sequential operations. When you have SSDs, why would you use a single HDD for random data access? That's why I wrote "it's starting to make sense": if you transfer a big enough file between two desktops at home, 1Gbps can now be a bottleneck even with HDDs.

[1] https://en.wikipedia.org/wiki/Blu-ray#Bit_rate
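To put rough numbers on that backup scenario (the 1.3Gbps figure is the sequential speed from the WD datasheet mentioned above; the 100GB transfer size is just an example):

```python
# Is the disk or the 1GbE link the bottleneck for a large sequential copy?
hdd_gbps = 1.3             # sequential throughput from the WD Black datasheet
link_gbps = 1.0            # 1GbE ceiling, ignoring protocol overhead
transfer_gbits = 100 * 8   # a 100 GB file expressed in gigabits

print(f"disk-limited: {transfer_gbits / hdd_gbps / 60:5.1f} minutes")   # ~10.3
print(f"link-limited: {transfer_gbits / link_gbps / 60:5.1f} minutes")  # ~13.3
```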

 


Yaay, Asus joins the party... but ouch, that price.

 

If you don't already have 10Gbit ports on your motherboard, here's a possibly better product.

 

https://routerboard.com/CSS326-24G-2SplusRM

 

$140 gets you a switch with 24x 1Gig ports and 2x SFP+ (for your uplink or server). It can also do VLANs and some management.

 

To get SFP+ on the server/workstation/computer side of things, you'd add a card like this one: http://www.ebay.com/itm/LOT-OF-2-MELLANOX-CONNECTX-2-10GbE-ETHERNET-NETWORK-SERVER-ADAPTER-MNPA19-XTR-/142046260117?hash=item21129deb95:g:3JoAAOSwXeJXe~lY

 

ConnectX-3 cards are more expensive since they're PCIe 3.0. The ConnectX-2 and ConnectX-3 cards are both mechanically and electrically PCIe x8, but will work in an x4 electrical slot. Even if you stick to PCIe 2.0, you still get 16Gbps of bandwidth (each direction) at PCIe 2.0 rates/encodings, plenty if your goal is 10Gbps. It should all work just fine, but do check your motherboard manual for a block diagram, and be aware that your SATA ports usually hang off the PCH, sharing the bandwidth of the DMI 2.0 (16Gbps total) or DMI 3.0 (32Gbps total) link, and that some motherboard manufacturers connect some slots via the PCH.
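A quick sketch of where those numbers come from (per-lane rates and encoding overheads are the standard PCIe 2.0/3.0 figures; the helper function is just for illustration):

```python
# Usable PCIe bandwidth per direction, after line-encoding overhead.
def pcie_usable_gbps(lanes: int, gen: int) -> float:
    raw_gt_s = {2: 5.0, 3: 8.0}[gen]               # GT/s per lane
    encoding = {2: 8 / 10, 3: 128 / 130}[gen]      # 8b/10b vs 128b/130b
    return lanes * raw_gt_s * encoding

print(f"PCIe 2.0 x4: {pcie_usable_gbps(4, 2):.1f} Gbps")  # 16.0, enough for 10GbE
print(f"PCIe 3.0 x4: {pcie_usable_gbps(4, 3):.1f} Gbps")  # 31.5
```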

 

Also, be aware that most motherboards don't ship with things like a 4K PCIe packet size enabled out of the box. You may want to look into this.

 

Hey, but why's the price so different?

 

SFP+ has a lower power spec than 10GBASE-T (10 gigabit over copper UTP/STP/Cat6a/Cat6). There are companies that make SFP+ transceivers that will work at 10Gig rates over up to 30m of Cat6a (lower power = shorter distance), but they're specialty products that few people care about. Most people either use passive copper twinax / DirectAttachCopper (DAC) cables for short distances, so-called "Active Optical Cables" (a transceiver fixed to a fiber), or traditional 10GBASE-LR (10 gigabit over single-mode fiber).

 

Typically you'd use twinax / direct attach copper (DAC) to connect things within a rack, or perhaps within a room or even on the other side of a wall (e.g. if your server is next to your switch), and traditional fiber for anything longer.

 

Example 5m DAC cable for about $25: http://www.fs.com/products/30760.html . A 10m one is about $80 (compared to 10m of Cat6, which is like $5; as you can see, it's not that economical at longer distances).

 

For traditional fiber, there's a choice of multimode for shorter distances, but these days most people go with singlemode fiber at any distance. Here's an example 10GBASE-LR transceiver that works up to 10km, but will also happily connect one floor of your home to another, your garage to your house, or your friend's/neighbour's network to your network: $35 http://www.fs.com/products/11591.html . LC connectors are the most popular these days (they're small enough for SFP/SFP+), and a UPC finish at the ends is what most people (computer people, outside of the CATV industry) use. Here's an example of such a cable (30m for <$15): http://www.fs.com/products/40206.html . Notice the distinctive yellow color indicating single-mode fiber, and the blue connectors that are the standard color for UPC/LC.

 

So for $100 or so, you get unlimited distance using a cable that's smaller in diameter and lighter, let alone way more future-proof: once datacenters start getting rid of 40Gbps gear, you can reuse the LC-LC fiber cable for QSFP. Or you can use cheap non-temperature-stabilized CWDM transceivers and a mux/demux to reuse the same piece of cable you have going to your neighbour or garage to run multiple 10Gbps links, or a combination of 1Gbps and 10Gbps links, over a single cable.

 

If you have 10GBASE-T on your motherboard and already have Cat6 (40m spec) or Cat6a (100m spec) cabling wired, and all you need is a switch: great, go with the $250 Asus. Otherwise SFP+ is probably cheaper.

 

Also, you can get switches or routers with more than 2x SFP+ ports from various sources, or you can build your own with a ton of ports of all kinds using an older Supermicro board and an older 40-PCIe-lane Xeon CPU, as some guy I once saw on YouTube did.


Was the video cut short?


11 hours ago, Razor512 said:

 

Think of it like a PC hardware version of Comcast.

1GbE sucks, but instead of putting some real effort into fixing it, the motherboard makers and router makers are just looking at us while doing this.

 

 

 

I have a 33.5TB storage server in my apartment, feeding a range of content, even 4K rips and Blu-ray remuxes, to two HTPCs, plus a workstation and a laptop, for five PCs total... And even I think you're nuts. o.O The 1GbE network in my home is just fine, and I have some pretty extreme local network traffic for a residential apartment. (By typical consumer network traffic, I mean 'sharing the internet connection to all the computers' and little more.)


1 hour ago, AshleyAshes said:

-Snip-

I have a few 10Gb ports going to my server and main desktop, but I agree: most devices are fine with 1Gb connections up to the core switch in my house.


https://routerboard.com/CRS226-24G-2SplusRM

 

Same price as the Asus, but with 24 Gigabit Ethernet ports and two 10 Gigabit SFP+ slots.

 

It's also a fully managed layer 2 and 3 switch (running RouterOS); the Asus is (surprisingly enough) completely unmanaged!


18 hours ago, nicklmg said:

 

 

LOL I thought that thumbnail was Linus holding a laptop...


15 hours ago, xnamkcor said:

How fast of a network would I need to keep up with these drive speeds?

 

[screenshot of drive speed benchmark results]

In your case, this shows the need for the consumer space to have been on 10GbE at least 10 years ago, just as we moved from basic 100Mbps Ethernet to 1GbE.

If these companies provided everyone with a proper Ethernet adapter without price gouging, 10GbE would be the standard. Then, for cases like yours, we would have high-end motherboards (the $300-$500 ones) coming with 40GbE, and ~$200 boards having two 10GbE ports for teaming.

40GbE can still work over copper for short distances, and is thus decent for most homes.

 

Just as the MSRP of a modern CPU that is 10 times faster than an older-generation CPU does not carry a large price premium, making an Ethernet adapter 10 times faster should not carry one either.

For CPU manufacturing, sites like Electronics360 will sometimes show a manufacturing cost of less than $5 for a high-end CPU; the insane price simply comes from recovering the R&D costs and turning a profit before the chip becomes outdated in a relatively short period of time, especially when competition is present. Since a CPU has a far shorter market life before sales drop significantly, they charge a lot over BOM plus manufacturing.

 

On the other hand, for Ethernet, a specific standard can last through many generations of PC builds and will continue to be used in some form for a very long time; e.g., we still have some laptops being released with 100Mbps Ethernet. If the industry had moved to 10GbE 10 years ago, we would see it as standard in almost every mainstream PC and budget switch, and over the next 10 years we would see it still show up in entry-level laptops and various other networked devices as the consumer market transitions to 40GbE for the mainstream; then, a few years later, the cycle would repeat as people begin to move towards 100GbE for their home networks.

 

Other than that, pushing for faster networking standards benefits consumers greatly, even the toddler building his or her first PC. Software innovation comes after the hardware is able to handle it. For example, back in the days when a 14.4k modem was the standard, no company would have tried to develop a service like Netflix to market to consumers on a 14.4k internet connection.

 

If a networking company decided today to release 40GbE equipment at a similar cost to 1GbE equipment, then within the month you would see other companies announcing products in development to take advantage of that throughput.

 


I've been using a RouterBOARD CRS210-8G-2S+IN for over a year; it is cheaper, and it was plug and play for me. I just bought SFP+ adapters and cables for the server and desktop (<$50) and let my modem be the DHCP server. Why has Linus never reviewed a MikroTik/RouterBOARD product?

 

https://routerboard.com/CRS210-8G-2SplusIN

$229


2 hours ago, Razor512 said:

-Snip-

This has nothing to do with adapters and everything to do with the networking equipment needed to support the throughput. 40GbE works up to 7 meters, max, over passive copper cables, which I wouldn't consider useful for any equipment that would need to be that close to my desktop. Getting 40Gb, let alone 100Gb, over large distances is a nightmare, and a single transceiver that can support long distances can cost thousands of dollars. A single transceiver that can push 40Gb over a 12.5-mile single-mode fiber alone costs between $8K and $10K, the fiber itself probably costs twice as much, easily, and you'd need to run numerous ones per neighborhood you're covering. A single town or city could easily cost tens of millions of dollars to cover, and that's just the optics and cable, not taking into account all the equipment needed, manpower, etc.

 

Let's say 10 years ago people had started pushing 10Gb, for whatever reason: 40Gb can go over that, but 100Gb cannot; you need completely different infrastructure to support 40/100 compared to 10/40. We are just now coming out with products that can even take advantage of 10GbE. It's not as simple as saying "well, if people had started doing X so many years ago, we'd now have Y". No, it takes time and money to develop new things. 1Tbps is just barely being touched in the lab, and companies are always looking to innovate, but you cannot just declare something a standard and suddenly have everything use it. Things have to make sense and be practical to find real-world applications.


4 hours ago, Razor512 said:

-Snip-

Except 1 Gb is not "standard" at the consumer level right now. 10/100 is. You have to specifically buy gigabit routers and switches. Gb might be "obviously" the "minimum" "anybody" "should" use, and is the minimum for a tech enthusiast, but it's not "the norm" yet. It's a luxury right now on consumer equipment.


7 minutes ago, xnamkcor said:

 

-Snip-

Erm... this isn't true. While you can still get very low-end products with 100Mbit, 1Gbit is the norm on the majority of routers at lowish-mid price points or higher, on all desktop PCs and laptops, on all current game consoles, and so on. o.O

 

*checks*

Yeah, my smart TV and my Raspberry Pi are my only 'current' consumer products with 100Mb.


Ubiquiti has a 10G switch that's $600 now. It has 4x 10GBASE-T ports and 12x SFP+ 10G ports.

 

While it is 3x more expensive than the Asus switch, it does have 14 more 10Gbps ports.

 

 https://www.ubnt.com/edgemax/edgeswitch-16-xg/

 

https://www.bhphotovideo.com/c/product/1267265-REG/ubiquiti_networks_es_16_xg_edgeswitch_16_10g.html


2 hours ago, AshleyAshes said:

-Snip-

 

 

Certainly, your collection of electronics is a perfectly cromulent sample of what is normal.

On that note, 4-bay NASes are now normal, and CRTs are normal because I own 3.


6 hours ago, Razor512 said:

In your case, this shows the need for the consumer space to have been on 10GbE at least 10 years ago, just as we moved from basic 100Mbps Ethernet to 1GbE.

No, it doesn't.

It shows that he has equipment that could benefit from GbE and/or 10GbE under the right scenarios. But when it comes to his use case, unless he's running a server that's constantly being hit for data (think web servers, multi-user video editing [like LTT's workflow], or large-scale commercial VMs), those drives won't see much benefit beyond maybe taking a second less to transfer large or multiple files.

6 hours ago, Razor512 said:

If a networking company decided today to release 40GbE equipment at a similar cost to 1GbE equipment, then within the month you would see other companies announcing products in development to take advantage of that throughput.

Except that won't happen for some time. 10GbE equipment is still expensive to manufacture compared to GbE, and home users still don't have a use case to justify it.

 

7 minutes ago, xnamkcor said:

 

-Snip-

At this point, GbE is predominant on a great deal of consumer equipment. GbE networking equipment is the norm; things like NAS solutions and GbE internet are not.

