
Please sanity-check my 10Gbit setup before buying

Hi. We are planning to move our render clusters to a relatively cheap 10Gbit network in order to alleviate some bottlenecks we are experiencing when sending off large render tasks to the clients (with 500GB of textures streaming to each client per frame). Basically we need to network ten clients to access one Synology NAS (DS1817 with two 10GBASE-T LAN ports) and are planning to buy these components to make it happen:

 

Netgear ProSAFE XS500M Desktop 10G Switches

One Intel X540-T2 10GbE NIC per host machine (which we could get for 150 each)

and each of those clusters will be attached to one Synology DS1817 NAS via its 10GBASE-T port.

 

Our goal is to get as close as possible to the theoretical max read/write speeds of the Seagate Exos X10 10TB drives (which should be in the 220 MB/s region) installed in those NASes.
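A quick back-of-envelope, just to show my thinking (assuming our current links are plain 1GbE, the 8-bay DS1817 is fully populated, and each Exos X10 really sustains ~220 MB/s sequential):

    1x 1GbE link:    ~125 MB/s raw            (roughly where we bottleneck today)
    1x 10GbE link:   ~1250 MB/s raw, maybe ~1.1 GB/s usable after protocol overhead
    8x Exos X10:     up to 8 x 220 MB/s = ~1760 MB/s sequential, before RAID/parity overhead

So a single 10GbE port should already get a client close to what the array can realistically deliver, and the second 10GBASE-T port on the NAS adds a bit of headroom when several clients pull textures at once.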

Now, before pulling the trigger on those parts I wanted to ask if there is any royal screw-up in my thinking...

Any help is greatly appreciated!

Cheers, J

 


Looks fine to me, but unless you plan to use SFP+ between the switches, those are really just 4 port switches for your purposes. Depending on how things lay out, I would prefer to use one larger switch.


 


It looks good to me.

 

You can get Aquantia-based network cards for around $80... if there are drivers for the operating system of your render servers they should be fine: https://www.amazon.com/Aquantia-NIC-5-speed-Ethernet-Network/dp/B07C5VLVFF/

 

However, I would consider using fiber and SFP+ network cards for your servers, simply because network switches that use SFP+ ports are much cheaper, and same goes for network cards.

 

I realize your Synology has only copper RJ45 10gbps ports, but surely you could find a network switch with at least two copper RJ45 ports and a lot of SFP+ 10gbps ports.

 

If you want something new, maybe check out:

 

1. $539: Ubiquiti Networks US-16-XG-US 10G 16-Port Managed Aggregation Switch: 12 SFP+ 10G ports and 4 RJ45 10G ports

 

As an example of refurbished/used switches, look into a Dell PowerConnect 8024F, which has 24 SFP+ ports and 4 RJ45 10 Gbps ports (combo ports, shared with 4 of the SFP+ ports, so you can use either one or the other)... you can find them refurbished at $600-1000.

SFP+ transceivers are under $30 each; here are some examples:

 

1. $29 each, refurbished: Dell 10Gb/s SFP+ 850nm Transceiver Module FTLX8571D3BCL

2. $29 each, new: Intel X520-DA1/2 / SR1/2 compatible 10G/1G dual-rate (10GBASE-SR & 1000BASE-SX) SFP+

 

You can buy LC-LC cables to use with these transceivers for cheap; here's an example with various lengths: https://www.amazon.com/dp/B07CXPH7NS/

 

But you could also just get DAC cables, if the servers are reasonably close (let's say up to 3-5 meters). For example, the same store in the links above has the X520 for $110, and a combo deal for the card + 2 DAC cables (short ones though) for $180... so you could use one DAC cable per server. If you have 10 servers, buy 5 cards standalone and 5 cards with the 2-DAC-cable combo, and you have 10 cards and 10 DAC cables.

 

$108 ($180 with 2 DAC cables): Refurbished Intel OEM X520-DA2 10Gb/s Network Adapter (2 x 10G SFP+)

 

They also have cables sold separately (of various lengths), and you can buy these DAC cables in lots of other places.

 

You also have switches like the Quanta LB6M, with 24 SFP+ 10G ports and 4 RJ45 1 Gbps ports, for $300 new: Quanta LB6M 24-Port 10GbE SFP+ 4x 1GbE L2/L3 Switch

They work great; the only downside is that they don't have a graphical/web interface, so you can only configure them with commands at a command prompt. But you probably won't even need to configure much - maybe only if you want to do some port trunking (e.g. joining 4 x 10 Gbps ports to link to another switch at 40 Gbps, or something like that).
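Just to give an idea of what that trunking setup looks like, here's a rough sketch in Cisco-IOS-style syntax (the LB6M's own CLI uses different commands, so treat this only as the general shape and check the switch's manual; the port numbers are made up):

    ! bundle four 10G ports into one LACP link-aggregation group
    configure terminal
    interface range TenGigabitEthernet 1/0/21 - 24
     channel-group 1 mode active        ! active = LACP
     exit
    interface Port-channel 1
     description uplink-to-second-switch
     switchport mode trunk              ! carry all VLANs over the bundle
    end

The switch on the other end needs a matching LACP group, and keep in mind that a single client-to-NAS flow still tops out at 10 Gbps - the aggregate only helps when several machines hit the uplink at the same time.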

 

 


36 minutes ago, brwainer said:

Looks fine to me, but unless you plan to use SFP+ between the switches, those are really just 4 port switches for your purposes. Depending on how things lay out, I would prefer to use one larger switch.

quick question: those should be 8-port switches, or am I not getting something? See here: https://geizhals.at/netgear-prosafe-xs500m-desktop-10g-switch-xs508m-100-a1685835.html


Just now, wizzackr said:

quick question: those should be 8-port switches, or am I not getting something? See here: https://geizhals.at/netgear-prosafe-xs500m-desktop-10g-switch-xs508m-100-a1685835.html

You listed the XS500M in your first post - that is a 5-port switch (4+1). The model you linked is actually the XS508M, but the seller makes it confusing in the listing.


 


14 minutes ago, brwainer said:

You listed the XS500M in your first post - that is a 5-port switch (4+1). The model you linked is actually the XS508M, but the seller makes it confusing in the listing.

ah, ok. Sorry for that - just wanted to make sure this is not some wonky shared ports stuff.


XS508M-100 has 8 ports: 8x RJ45, or 7x RJ45 + 1 SFP+

XS505M-100 has 5 ports: 4x RJ45 and 1 SFP+

 

The page you linked does claim to sell the 8-port model.

 

Considering you have 10 machines (so you need at least 11 ports), they don't seem like a good choice.

If you stack two of those, you'll end up with 14 ports (because you'll use at least one port from each to connect them together)

They're also unmanaged switches, with low bandwidth (100 Gbps).

For comparison, the Ubiquiti switch I suggested has 320 Gbps of switching capacity.


That looks much better indeed. Noise is a big concern for us though, as this will sit close to where the artists work. And man, have I heard some noise from rack-mounted units... Will try finding an alternative with RJ45 cabling...


56 minutes ago, mariushm said:

However, I would consider using fiber and SFP+ network cards for your servers, simply because network switches that use SFP+ ports are much cheaper, and same goes for network cards.

 

Just from looking at it for a sec this is great advice - the price difference is jarring!


I once posted something about how to reduce HDD noise; I'll put it here in case this information helps:

On 8/13/2019 at 1:13 PM, seagate_surfer said:

There is a "feature", to put it that way, that can be disabled to help reduce noise (PowerControl), but first a little background on why Seagate uses a proprietary technology that can be disabled, and why it is not disabled by default. Among the HDD standards there is an automatic power-saving feature that is activated during very brief periods of command inactivity without impacting performance, called PowerTrim technology. PowerChoice, now called PowerControl technology (a proprietary implementation of T10 Approved Standard No. T10/09-054 and T13 Standard No. T13/452-2008), complements PowerTrim technology by enabling even greater power reductions that cover idle periods greater than one second. The result? PowerChoice technology decreases drive power consumption by up to 54 percent in enterprise environments. For example, a 1U rack filled with twenty-four 500 GB Constellation drives that have entered PowerChoice C mode delivers 12 TB of storage, yet consumes only 43 W, slightly more than a single 40 W light bulb, delivering a combination of energy efficiency and user flexibility.

 

Now let me introduce SeaChest. To use this tool you have to be comfortable executing command lines, or it will complicate everything a bit. The download link is here: https://www.seagate.com/support/software/seachest/ To learn more about PowerChoice (PowerControl), you can open the User Manual from the link above and go to the section:

  • How PowerChoice Technology Works

Here are the command lines that will help you disable PowerControl and thus get lower noise from the HDDs that support this:

How to disable EPC (Extended Power Conditions)
Note: PowerChoice has now been replaced with PowerControl

  1. Download the latest SeaChest and install it in Windows.
  2. Next, confirm if EPC is supported and enabled for the drive in question. First, get the drive's handle, which could be PD0, PD1, or PD2, etc. Copy and paste or type in this command, then Enter:
    • SeaChest_PowerControl --scan

  3. Now copy and paste or type in this command, then Enter:
    • SeaChest_PowerControl -d PD(N)

      • Replace (N) with the handle of the target disk

    • Scroll to Features Supported and look for EPC [Enabled]

  4. To disable EPC on the drive Copy and paste or type this command then Enter:
    • SeaChest_PowerControl -d PD(N) --EPCfeature disable

    • Replace (N) with the handle of the target disk

    • Wait for a message stating EPC was disabled successfully

  5. Now we need to confirm that EPC is disabled. Repeat step 3: copy and paste or type in this command, then Enter:
    • SeaChest_PowerControl -d PD(N)

      • Replace (N) with the handle of the target disk

    • Scroll to Features Supported and look for EPC

      • [Enabled] should no longer be seen

  6. Shut down the computer, then reinstall the drive into the NAS if that is where it came from.

If you type disable epc site:seagate.com into Google, you will notice that many models show up, because many models support EPC, which gives you one more option to help reduce noise from your machine.
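For reference, the whole sequence for one drive boils down to just a few commands (here assuming the scan showed the target disk as PD1 - substitute your own handle):

    SeaChest_PowerControl --scan
    SeaChest_PowerControl -d PD1
    SeaChest_PowerControl -d PD1 --EPCfeature disable
    SeaChest_PowerControl -d PD1

(the last command is the same as the second one, just run again to confirm that EPC no longer shows as [Enabled])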

 

 



1 hour ago, seagate_surfer said:

I once posted something about how to reduce HDD noise; I'll put it here in case this information helps:

 

Considering they need bandwidth here, slowing down the HDDs is a bad idea.

Also, I'm pretty sure they were referring to fan noise from rack-mounted switches - a known issue, since it's assumed you keep your racks in a server room/closet where noise is not a problem.


