
10G between App server & storage server?

Gerr

I have 2 servers in my home. One is a Windows 2016 Server Essentials box that runs all my apps. It's my domain controller and runs Plex, Blue Iris (NVR), NAS, and central client backup. Problem is it's a Lenovo TS140 (Xeon version), which doesn't have a lot of room for hard drives. So I built a 2nd server out of spare parts running FreeNAS that houses all my HDDs and uses the ZFS file system. I'm in the process of building/setting these up and had a thought: since ALL connectivity to the FreeNAS server is from my app server, why couldn't I do an ad-hoc 10G connection between the two servers and only connect the app server to my internal network via a 4-port Intel NIC I have? Just buy a pair of used 10G SFP+ cards off eBay and a single cable, which shouldn't total more than $100. Thoughts?
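
The plan for the link itself would be to give each 10G port a static IP on its own small subnet (10.10.10.1 and 10.10.10.2 below are just placeholders) and point the app server's mounts at that address; a back-to-back link needs no switch. Here's a rough Python sketch for sanity-checking the raw TCP throughput once the link is up (iperf3 would do the same job if you'd rather not roll your own): run it in "serve" mode on the FreeNAS box and plain mode on the app server.

```
# Rough sanity check of raw TCP throughput over the direct link.
# The 10.10.10.x addresses are placeholders for whatever you assign
# to the two SFP+ ports.
import socket
import sys
import time

CHUNK = 1024 * 1024        # 1 MiB per send/recv
TOTAL = 2 * 1024 ** 3      # move 2 GiB per test run
PORT = 5201

def serve(bind_addr="10.10.10.1"):
    """Receive side: run this on the FreeNAS box."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, PORT))
        srv.listen(1)
        conn, peer = srv.accept()
        received, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        secs = time.time() - start
        print(f"received {received / 1e9:.2f} GB from {peer[0]} "
              f"at {received * 8 / secs / 1e9:.2f} Gbit/s")

def send(dest_addr="10.10.10.1"):
    """Send side: run this on the app server."""
    payload = b"\x00" * CHUNK
    with socket.create_connection((dest_addr, PORT)) as conn:
        sent, start = 0, time.time()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
        secs = time.time() - start
        print(f"sent {sent / 1e9:.2f} GB at {sent * 8 / secs / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    serve() if sys.argv[1:] == ["serve"] else send()
```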


Sounds like a good idea. What speeds is the NAS capable of?



The FreeNAS box is only a G4400 on an H170 mobo with 32GB of DDR4, but it's housed in a 4U case with 12 drive bays and an LSI 8-port HBA card...

Plex storage - 3x 3TB HGST NAS in RAIDZ1

NAS storage - 4x 3TB WD Red in ZFS striped mirrors (their RAID 10 equivalent)

Client backup - 2x 4TB HGST NAS (mirror)

NVR - 3TB WD Purple

Various SSDs to be used as read/write cache; not sure on the setup yet.

 

Not looking for a super fast connection, just want to make sure the app server has plenty of bandwidth to the storage server so the multiple camera feeds and Plex streams don't cause a slowdown.
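
For a rough sense of whether that even needs 10G, here's a quick back-of-the-envelope in Python. Every number below is a placeholder guess; the real Blue Iris and Plex settings would need to go in.

```
# Back-of-the-envelope bandwidth budget for the app <-> storage link.
# All counts and bitrates are placeholder assumptions; substitute the
# actual Blue Iris camera settings and Plex stream bitrates.
CAMERAS = 6          # NVR camera feeds (guess)
CAMERA_MBPS = 8      # per-camera bitrate in Mbit/s (guess, 1080p H.264)
PLEX_STREAMS = 3     # simultaneous Plex streams (guess)
PLEX_MBPS = 20       # per-stream bitrate in Mbit/s (guess, high-bitrate remux)
BACKUP_MBPS = 300    # headroom for a client backup running at the same time

total_mbps = CAMERAS * CAMERA_MBPS + PLEX_STREAMS * PLEX_MBPS + BACKUP_MBPS
for link_mbps, name in [(1000, "1GbE"), (10000, "10GbE")]:
    print(f"{name}: {total_mbps} Mbit/s needed, {total_mbps / link_mbps:.0%} of the link")
```

With guesses like those, the steady-state feeds fit comfortably inside a gigabit link; it's the big sequential transfers (backups, library moves) that would actually feel the jump to 10G.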


What 10G NICs would you recommend, one for the Windows server and one for the FreeNAS server?


8 hours ago, Gerr said:

What 10G NICs would you recommend, one for the Windows server and one for the FreeNAS server?

How much do you want to spend? Those cheap Mellanox 10GbE cards are about 20 bucks and should work fine.

 

What are you using for a protocol? I'd go NFS or iSCSI; CIFS will CPU-limit you.
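
Whichever protocol you pick, it's worth measuring it end to end rather than guessing. Here's a rough sketch (the mount path and test file are hypothetical) that times a large sequential read from a mounted share so you can compare an NFS mount against a CIFS mount of the same dataset; use a file bigger than the client's RAM so caching doesn't flatter the numbers.

```
# Time a large sequential read from a mounted share to compare protocols
# (e.g. an NFS mount vs. a CIFS/SMB mount of the same dataset).
# The path below is hypothetical; point it at a big file on your mount.
import time

TEST_FILE = "/mnt/freenas_nfs/testfile.bin"   # hypothetical mount + file
BLOCK = 4 * 1024 * 1024                       # 4 MiB reads

def read_throughput(path):
    total, start = 0, time.time()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.time() - start) / 1e6   # MB/s

if __name__ == "__main__":
    print(f"{read_throughput(TEST_FILE):.0f} MB/s from {TEST_FILE}")
```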


Have you monitored the network as it is currently? I don't feel as though a solid 1Gb/s connection will hinder anything.

 

Also, if the connection from the app server to your network is limited to 1Gb/s, then there's truly little benefit.

 

I have multiple servers, a NAS, and 4 systems that are all very active on the network, and everything is great at gigabit speeds. The only thing with more speed is the NAS; it is on an aggregated port (2Gb), which helps with large file transfers from multiple systems at the same time (not very often).
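
If you want numbers rather than a gut feeling, here's a small sketch using the third-party psutil package (pip install psutil; the interface name below is a guess) that samples per-NIC throughput every few seconds. Run it on the app server during a busy evening and see how close the existing gigabit link actually gets to saturation.

```
# Sample per-interface throughput to see how busy the existing link
# really is before buying 10G hardware.  Requires the third-party
# psutil package (pip install psutil); the interface name is a guess.
import time
import psutil

NIC = "eth0"      # replace with the app server's actual interface name
INTERVAL = 5      # seconds between samples

def sample(nic, interval):
    before = psutil.net_io_counters(pernic=True)[nic]
    time.sleep(interval)
    after = psutil.net_io_counters(pernic=True)[nic]
    rx = (after.bytes_recv - before.bytes_recv) * 8 / interval / 1e6
    tx = (after.bytes_sent - before.bytes_sent) * 8 / interval / 1e6
    return rx, tx   # Mbit/s received, Mbit/s sent

if __name__ == "__main__":
    while True:
        rx_mbps, tx_mbps = sample(NIC, INTERVAL)
        print(f"{NIC}: rx {rx_mbps:7.1f} Mbit/s   tx {tx_mbps:7.1f} Mbit/s")
```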


@Gerr You are probably looking at Mellanox ConnectX-2 cards. They sell for ~$20 on eBay. I know they have Windows drivers, but the Linux drivers have been flaky in my experience. They work pretty well on Ubuntu, but I'm not sure about FreeNAS. You could get an Intel card for the FreeNAS box and have almost guaranteed driver support.



I would recommend checking current network utilization before increasing bandwidth. While it may still be worthwhile, it's always better to use a design-thinking approach to scalability, and it also helps you understand your environment better. If you have available ports on your 4-port NIC, you can always use bonding to increase bandwidth.


On 12/24/2016 at 3:47 AM, PCGeek said:

I would recommend checking current network utilization before increasing bandwidth. While it may still be worthwhile, it's always better to use a design-thinking approach to scalability, and it also helps you understand your environment better. If you have available ports on your 4-port NIC, you can always use bonding to increase bandwidth.

While I certainly agree with you, I doubt he'd see any performance increase from bonding between his app/storage systems over a 4-port (almost certainly unmanaged) setup.


5 hours ago, Dark said:

While I certainly agree with you, I doubt he'd see any performance increase from bonding between his app/storage systems over a 4-port (almost certainly unmanaged) setup.

Agreed. There are prerequisites for bonding (i.e. LACP support, etc.) and added complexity as well. It's just an idea for using a lean design before investing in 10GbE infrastructure.


CCNP here, you think I don't know how to do it or don't have the proper equipment?


3 hours ago, PCGeek said:

Agreed. There are prerequisites for bonding (i.e. LACP support, etc.) and added complexity as well. It's just an idea for using a lean design before investing in 10GbE infrastructure.

You can do static port bonding as well; LACP is just nicer. The problem with bonding/aggregation is that it works best when there are many endpoints, as a single session only ever takes one path, so having 2 or 4 ports bonded is effectively only 1 port for a single client. Aggregation between switches works very well, since that traffic produces many different src/dst hashes.
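
To illustrate why a single client doesn't get faster: a typical layer-3/4 hash policy picks the member port from a hash of the flow's addresses and ports, so every packet of one session lands on the same physical link. A toy Python sketch (Python's built-in hash() just stands in for whatever the switch or NIC actually uses):

```
# Toy illustration of why a bond doesn't speed up a single flow: a
# layer-3/4 hash policy maps each flow to exactly one member port.
# Python's hash() stands in for the real vendor hash algorithm.

MEMBER_PORTS = 4   # e.g. a 4-port bond

def pick_port(src_ip, dst_ip, src_port, dst_port):
    return hash((src_ip, dst_ip, src_port, dst_port)) % MEMBER_PORTS

# One client talking to the storage server: every packet of this flow
# hashes to the same member port, so it gets one port's bandwidth.
print(pick_port("192.168.1.10", "192.168.1.20", 50123, 2049))

# Many different clients/flows spread across the members, which is why
# aggregation pays off for switch-to-switch and multi-client traffic.
flows = [("192.168.1.%d" % host, "192.168.1.20", 50000 + host, 2049)
         for host in range(30, 40)]
print([pick_port(*flow) for flow in flows])
```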

 

Aggregation for virtual hosts also works well; where it tends not to work well is between a single application/web server and a database server. 10GbE is the easy button: it will work no matter what and will give more bandwidth than 1GbE.

 

Totally agree with your point though: analyse network traffic with something like Cacti and aggregate ports first before spending money on equipment, since you might not need to.


So: 4-port NIC teaming from the managed switch to the app server to allow multiple connections, and a single 10G link from the app server to the storage server to allow a faster single connection.


4 minutes ago, Gerr said:

CCNP here, you think I don't know how to do it or don't have the proper equipment?

Maybe your profile pic wasn't clear enough ;).

 

FYI, I do what you are planning: a direct connection between server and desktop using Intel X540-T1 cards. Mellanox cards are way cheaper than these, though.


I heard FreeNAS is a little picky with Mellanox cards, though for the price a pair of those would be ideal. I don't need to run FreeNAS, but I do want ZFS. Problem is I'm a network engineer, not a server person, so I'm still in the newb ranks when it comes to servers.


7 minutes ago, Gerr said:

I heard FreeNAS is a little picky with Mellanox cards, though for the price a pair of those would be ideal. I don't need to run FreeNAS, but I do want ZFS. Problem is I'm a network engineer, not a server person, so I'm still in the newb ranks when it comes to servers.

For the price it can't hurt to give them a try; if that fails, just use Ubuntu Server + ZFS.

 

The cheapest Intel cards, the X520/X540, are around $70 USD + shipping each, so more than you wanted to spend?

 

Edit:

Plus a direct-attach SFP+ cable if using that route and not 10GBase-T (X540-T1/2).

