Network Upgrade Questions

(Just to note: this is a repost from the FreeNAS forums, where I initially put it. I didn't get any replies there, and besides, I prefer the LTT forums :P

Also, the indentation and line breaks might look a little weird; line wrap wasn't working for whatever reason. And the links are huge.)

 

Hi FreeNAS users,
So I've owned a FreeNAS NAS for a few months now, with the following specs:

 
i5-760
8GB RAM
1x, yes, you heard me right, just one WD Caviar SE 750GB HDD
Biostar TH55HD
Integrated Gigabit LAN on my home network
As you can probably imagine from those specs, I am itching for an upgrade; I'm tired of transferring files at less than 5 Mb/s. I've decided to look into a major makeover for my NAS setup, but I have a lot of questions that Google hasn't given me satisfactory answers to. The major upgrades I'm looking at are a move to 10GbE networking, a switch to compact rackmount servers, and a storage upgrade (though I'm considering doing that later; I just want to focus on the networking for the time being).

I've been planning my network setup a little, and you can see how I'm envisioning it in the attached SetupPlan.jpg at the bottom of this post.

So here are my questions (don't feel obliged to answer all of them; just reply if you have anything to add).

1. For 10GbE networking, is it better to use CX4 or SFP+ gear? I'm finding 10GbE CX4 switches to be a lot cheaper per port than SFP+ ones (http://www.ebay.com/itm/202146182153 vs http://www.ebay.com/itm/323032634858), but I'm also finding that CX4 switches typically lack normal RJ45 ports alongside them. While I have a gigabit switch right now, being able to manage all my networking in one switch would be nice. The other thing I'm noticing is that SFP+/SFP NICs and cables are a lot cheaper than their CX4 counterparts. So which one is probably better?

2. Also on the topic of switches: I was initially going to go with a 2-port card hooked up P2P to the two workstations in my diagram, but I decided to go with a switch for scalability. My initial concern was the price, since I had heard they were exorbitantly expensive, so I didn't even check. But I'm finding switches like the ones I linked for under $100. Is there a catch, or are these legit switches? Also, can my network be laid out like it would be with P2P, where my 10GbE gear sits on its own subnet and communicates on its own, or will that interfere with the gigabit internet stuff?

3. Last question about switches. Do I need uplinks? Are they necessary?

4. My current NAS is in a Micro ATX box that is really better suited to an HTPC than a NAS, since it only has two drive bays. I've decided to move to rackmount gear for greater storage expansion and compactness (though noise might be an issue for me). I don't want to buy a prebuilt server, since I'm a little disappointed with the prices and options. Instead, I'm thinking of getting a Chenbro or similar 1U or 2U case with 4 or more drive bays, buying a used Supermicro LGA 1156 motherboard, and hooking up an LSI HBA to the backplane, since that would suit my needs a lot better (most rackmount servers aren't designed for NAS use from what I see, and the ones that are cost way more than my budget). Is this a good idea, or should I buy a prebuilt server?

5. When I do replace my 750GB HDD with a bigger array, how should I safely transfer the ZFS datasets over? Does FreeNAS have a good way to do this?

6. On most servers, what's the purpose of the management port? How does one make use of it? Do you need specialized management consoles? People on the internet seem to be suggesting that you do. How does usage differ between RS232 and Ethernet management ports? Is IPMI related, and how do you take advantage of it? For Ethernet management ports, normal and IPMI, do you just hook them up to a switch? Will that work?

7. Is SMB3 required to do 10GbE? Is throughput reduced when SMB3 is not used? Is there a way within FreeNAS for my non-SMB3 clients to connect, even if their speed isn't great, while still not limiting the throughput of modern clients?

8. Building on the SMB3 thing: in the diagram, you might see a server called "iSCSI to SMB bridge server". This is an idea I came up with to address the SMB3 thing, if it even is an issue, but I'm realizing it might not be as good an idea as I initially thought. The idea was simply to set up iSCSI targets on the main NAS and mount them on a separate server. That server would then share the iSCSI storage over SMB with my retro machines using P2P gigabit connections and dirt cheap NICs. But after doing some research on iSCSI, it seems like it wouldn't be possible to use my previously existing ZFS datasets (shares) as iSCSI targets without overwriting them. That obviously can't work; I need the same data available over both SMB and iSCSI, and I need the retro and modern clients to be able to access and write to the same stuff. Besides, it seemed a little overcomplicated and adds extra cost. Does anyone know of a different way to do this?

I think that's everything. I still have yet to budget everything out, since I'm not buying until later this summer, but everything should fall within my cap of about $1000 (excluding hard drives) if I buy used gear on eBay. I might edit the post if I come up with other questions.

Thanks in advance!

 

 

(Attachment: SetupPlan.jpg, my network setup diagram)

My main desktop, "Rufus":

PC Specs:

CPU: AMD Ryzen 5 1600
CPU Cooler: Cooler Master MasterLiquid Lite 120
RAM: 2x8GB Corsair Vengeance DDR4 Red LED @ 3066 MT/s
Motherboard: MSI B350 Gaming Pro Carbon
GPU: XFX RX 580 GTR XXX White
Storage: Mushkin ECO3 256GB SATA3 SSD + some Hitachi thing
PSU: Seasonic Focus Plus Gold 650W
Case: Corsair Crystal 460X
OS: Windows 10 x64 Pro Version 1607

Retro machine:

PC Specs:

CPU: Intel Core 2 Quad Q9550
CPU Cooler: Stock heatsink
RAM: G.Skill 4GB DDR2 @ 1066 MT/s
Motherboard: Asus P5N-E SLI
GPU: 8800 GTS 640MB; I swap between that and my 8800 GTS 512MB
Storage: Seagate 320GB right from 2006
PSU: Ultra 600W
Case: Deepcool Tesseract SW
OS: Windows XP SP3 32-bit, Linux Mint 18.2 Cinnamon 64-bit, Manjaro Deepin x64 (sorta)

Mac Pro Early 2008: Dual Xeon X5482s w/ 32GB RAM & HD 5770 running macOS High Sierra

More PCs

 


Did I read right, you said 5Mb/s as in 5 megabits per second? With a 1Gb/s wireless AC connection I can easily get around 35 MB/s transfer speeds on my network. A 1Gb/s connection in theory should give you 120 MB/s transfer speeds. If 1Gb isn't enough you could run more 1Gb cables and use link aggregation... just make sure your switch will support it.
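(For what it's worth, here's a minimal sketch of what LACP aggregation looks like from the FreeBSD shell that FreeNAS is built on; the igb0/igb1 interface names and the address are made-up examples, FreeNAS normally configures this through the GUI under Network, and the switch ports need a matching LAG/LACP group configured:)

    # create the lagg interface, bond two NICs to it with LACP, then give it an address
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1
    ifconfig lagg0 inet 192.168.1.10/24 up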

 

If you're really stuck on the 10Gb network though, you could run a point-to-point link from your NAS to your main PC like in this video... That would be the most affordable way.

 

 

As for a rack mount server... yeah... that is kind of how I became a member on this forum. If you look at this post by Windows7ge, there is a server actually spec'd out for a $1000 budget.

 

The issue you'll run into is finding a case within your budget, unless you want to go with something like a cheap Rosewill, which I believe is under $200.

 

 

Uplink ports, BTW, are only necessary if you're connecting networks together, so until you need them, don't worry about them.

 

The management port on a server lets a user perform all of a server's functions remotely, even low-level things like powering it on and off or changing things in the BIOS or RAID configuration. It is very useful and convenient, especially if the server is really far away somewhere and you need to access it. So it's not entirely necessary if the server is within walking distance inside your house...


1. I wouldn't use the word "better", rather what suits the application, but going SFP+ with fiber optics seems to be the most flexible cheap option if you know where to buy the parts. Finding an SFP+ switch with enough ports depends on how many 10Gbit hosts you want.

 

2. They are legitimate gear. It's most often retired server equipment.

 

What you would do is set up what are called VLANs on the switch. You could then dedicate ports to a 10Gbit network and keep the Gbit ports separate. This would stop potential communication problems between the two networks, like if you wanted to use two DHCP servers: one for the 1Gbit and one for the 10Gbit network.
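(As a rough illustration, in Cisco IOS-style syntax; the VLAN number and port name here are made up, and the exact commands vary by switch vendor:)

    ! VLAN 20 carries the 10Gbit storage network; everything else stays on the default VLAN
    vlan 20
     name storage-10g
    interface TenGigabitEthernet0/1
     switchport mode access
     switchport access vlan 20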

 

3. Where did you read this? "Uplink" can apply to any number of functions or services.

 

4. If you just want a system that works for as little as possible, buying a retired bare-bones Supermicro server off eBay and populating the sockets/RAM/expansion cards yourself is a viable option if that's the way you want to go. It's often cheaper than going full custom. Just make sure it can handle everything you want to plug into it.

 

5. I actually can't answer that one, but it should be doable with probably just one shell command.
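(Probably something along these lines with ZFS snapshots and send/receive; a sketch where "tank/data" and "newpool" are placeholder pool/dataset names, and FreeNAS also has Replication Tasks in the GUI that wrap the same mechanism:)

    # snapshot the dataset tree, then send the whole thing to the new pool
    zfs snapshot -r tank/data@migrate
    zfs send -R tank/data@migrate | zfs receive -F newpool/data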

 

6. You're thinking of console ports when it's RS232. The management port on a server exposes the IPMI (or iDRAC on Dell servers). These are attached to a tiny onboard computer with its own CPU and memory. It has access to every temperature sensor on the system and often records system events: power on/off, over temp, low or high voltage rails, motherboard battery, and so on. It also allows you to remote into the machine even when it is powered off, which is nice when the server locks up: you don't have to be physically in front of it to hard reset it. It has many other functions as well. Some IPMI implementations require software, but some vendors build in a WebUI, which lets you navigate to the IPMI interface's IP address on your local network, log in, and access the aforementioned functions.
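(And yes, the IPMI port just plugs into a normal switch like any other Ethernet device. For example, with the common ipmitool utility, assuming a made-up BMC address of 192.168.1.50 and IPMI-over-LAN enabled:)

    # query power state, dump sensors, and hard-reset the host through the BMC
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P password chassis power status
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P password sensor list
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P password chassis power reset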

 

7. SMB (3 is just the version) is a Microsoft-developed network file sharing protocol. Linux and Mac should have some limited support for it, but FreeNAS also has protocols built in for creating *NIX shares (NFS) and shares for Apple computers (AFP). Those shares would have to be set up and their services enabled, but like I mentioned, the other platforms should have some support for SMB according to what I've read.

 

As for 10Gbit: SMB3 isn't in itself what is needed; that's just the protocol Microsoft uses to move files across a network. Now, SMB 3.0 has a feature called Multichannel which allows you to aggregate multiple physical links to move files much faster. I just finished testing this on my own FreeNAS server aggregating two 10Gbit interfaces, and it worked: I saw transfer speeds around 1.5 GB/s (roughly 12 Gbit/s). So unless you're aggregating links, just leave the SMB3 settings on default and it'll work fine. You might have to set up the other share protocols for cross-platform support, but they should all be able to share the 10Gbit interface.
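(For reference, FreeNAS's SMB service is Samba under the hood, and multichannel comes down to a single smb.conf option, which Samba still considered experimental at this point; a sketch only:)

    [global]
        # off by default; lets capable clients negotiate SMB3 multichannel
        server multi channel support = yes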

 

8. I don't have a deep enough understanding of iSCSI to help with that. Do these old machines not support the SMB protocol at all? You can set the service to use older versions of SMB if they just don't support the latest.


17 minutes ago, Razor02097 said:

Did I read right, you said 5Mb/s as in 5 megabits per second? ...

Only slightly off... 5 megabytes per second, not megabits. Still pitiful. And I know it's my network bottlenecking it, because I initially set it up at my dad's house, and there I wasn't going through my stupid powerline adapters. I guess I'd be OK sticking to gigabit and just going P2P, since I probably won't hit that limit with my storage, but I'd really like to make the jump to 10GbE for the future. Also, like I said, I was considering going P2P, but I couldn't find 2-port NICs that properly matched up with their 1-port counterparts, and I found that 2-port NICs were so much more expensive that a switch barely adds any extra cost. Besides, getting a 4-port switch like the one I linked lets me add an extra connection if I need it.

 

Also, rackmount servers don't always have to be that expensive. Keep in mind that the specs in that server are way more than what I'd be getting, which would probably look something like this:

CPU: Xeon X3440 | $30 on AliExpress
RAM: 16GB DDR3 ECC Registered | ~$50 on eBay: http://www.ebay.com/itm/282787278869
MOBO: Supermicro X8SIL-F or similar | $40 on eBay: http://www.ebay.com/itm/263047994445
STORAGE CONTROLLER: LSI 9211-8i or similar | ~$50 on eBay: http://www.ebay.com/itm/152937435505
CHASSIS w/ PSU and backplane: Supermicro CSE-822 or similar | ~$130 on eBay: http://www.ebay.com/itm/222658991375
(Just imagine it had SAS connectors for the backplane. I've seen ones that do in this price range, I just couldn't find one right now.)

That totals to about $300-$350, which leaves a good $700 for my network. I'm confident in my budget for this one.

 

Thanks for the info on the uplink and management ports.

 

The last thing I'd still like a solution for is the backwards compatibility problem. Having to get another used server wouldn't be that bad... but it'd be really nice if I could avoid it.


7 minutes ago, Windows7ge said:

 

3. Where did you read this? Uplink can apply to any number of functions or services.

 

Let me revise what I said: uplink ports generally connect the switch to other networking equipment. Still not generally necessary in a small network.


24 minutes ago, Windows7ge said:

1. I wouldn't use the word "better", rather what suits the application ...

Thanks for the info. I'll probably go with the SFP+ switch I linked (or something like it).

 

What I mean when I say SMB3 is specifically SMB version 3, and nothing else. The old machines obviously support SMB; hell, SMB support reaches all the way back to Windows NT 4.0, whose dialect FreeNAS still supports (lol). They just don't run version 3. My question is whether the new features that came with SMB3, like RDMA and Multichannel, are necessary for making full use of a 10GbE link. In one of LTT's many 10GbE upgrade videos he mentioned SMB3 improving his results, so I wondered if I needed it too. Also, I just realized this: wouldn't each client just connect with the highest SMB protocol version it supports? Do I even need to set a minimum protocol on FreeNAS for that to work?


And yeah, I've used NFS for my Linux machines in the past. That works quite well, so I'll continue to use it for Linux, but most of my machines run Windows, so SMB is the way to go for me.

 

EDIT: What I meant by the vague question "are they legit switches" was more along the lines of "do I need anything special other than the power cable to get them to work", since I trust big server brands like HP or Dell to make good switches regardless. The price just seemed a little fishy, as though it needed some super expensive companion component to operate. I guess not.

 

EDIT 2: Oh man, how did I miss this... You're absolutely right, Supermicro barebones seem to be the way to go. This barebones already has the motherboard I want, the X8SIL-F. The only annoyance is that it's 1U and only supports one PCIe card and 4 drives. That's not the end of the world though, and besides, there are probably 2U models of something similar. Thanks for the suggestion again.


1 hour ago, panther420 said:

Thanks for the info. I'll probably go with the SFP+ switch I linked (or something like it). ...

That's a fair point. They'll use the highest protocol they both support, so I don't think you have much to worry about. Unless your goal is to absolutely saturate a 10Gbit link, I think it'll work fine.
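(If a retro client ever does refuse to connect, the Samba knobs look roughly like this; on FreeNAS these lines would go in the SMB service's auxiliary parameters, and the dialect names are worth double-checking for your version:)

    [global]
        # accept clients as old as the XP-era SMB1/NT1 dialect...
        server min protocol = NT1
        # ...while modern clients still negotiate up to SMB3
        server max protocol = SMB3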

 

You will need SFP+ transceivers or 10Gbit DAC cables, plus the switch's power cable. Other than those you shouldn't need anything else. Though the asking price makes me wonder if something's wrong with it.

 

And yeah, barebones are a "cheap" way of getting a server off the ground and running. I personally like the full custom approach more, but it's a lot more costly.


21 hours ago, panther420 said:

For 10GbE networking, is it better to use CX4 or SFP+ gear?

SFP+

 

21 hours ago, panther420 said:

Also, can my network be laid out like they would be if I went with P2P where I have my 10Gbe stuff on a different subnet,  communicating on their own or will it interfere with the gigabit internet stuff?

You can have a P2P storage network and a separate 1Gb standard access network working just fine, that's how I do it currently.

 

21 hours ago, panther420 said:

Last question about switches. Do I need uplinks? Are they necessary?

No, that's actually a more generic term. Some switches have faster interfaces than the rest and are labeled uplinks, but they don't really mean anything. On a managed switch you can configure any port to operate the way you want; on an unmanaged switch there is no difference in operation between ports (just potentially line speed).

 

21 hours ago, panther420 said:

Is SMB3 required to do 10Gbe?

No

 

21 hours ago, panther420 said:

Is throughput reduced when SMB3 is not used?

SMB2 and SMB3 are both fast enough for 10Gb, though I think you might actually mean SMB3 Multichannel? If so, think of that more as gaining bandwidth when it's in use, not losing bandwidth when it isn't.

 

As for RDMA, that requires support on both ends of the link; the NICs need to support it, and those that do for 10Gb are rare and more expensive. It's not worth the hassle, and you won't notice a difference unless you have extreme low-latency requirements.

 

Also wow the post was LOOOOONG.


5 hours ago, leadeater said:

Also wow the post was LOOOOONG.

Ok thanks, I can never have too much information!

 

Yeah, I write long posts. If you look through my post history you'll see I have a habit of doing so. It's how I prefer to write, since it's thorough and gives all the information that needs to be given in one post. One of my personal pet peeves is when someone asks a question without giving enough information, like "Should I get a R5 1600 or R7 1700?" with little accompanying information in the body of the post.


10 hours ago, Windows7ge said:

They'll use the highest protocol they both support ...

Ok, I'm glad to hear that SMB should do that automatically.

 

Yeah, I'm not too familiar with the SFP+ cables and how they work. I think I'll probably go with transceivers, but I'm not entirely sure. I'm also a little suspicious of that asking price. $99?? It says used, but there doesn't seem to be anything wrong with it...

 

Oh yeah, and you should check the post again. I just added back something kind of important: my setup diagram. I forgot to copy it over when I pasted my post from the FreeNAS forums.


If you are going to use SFP+ to connect equipment in close proximity, DAC cables seem like a good option.


5 hours ago, Razor02097 said:

If you are going to use SFP+ to connect equipment in close proximity, DAC cables seem like a good option.

They should be pretty close to each other, so yeah, I'll probably use DAC cables. Are they any cheaper, out of curiosity?


16 minutes ago, panther420 said:

They should be pretty close to each other so yeah, I'll probably use DAC cables. Are they any cheaper out of curiosity?

Can be. Mainly it's just safer to use DAC if you can, as the cables are far stronger.


12 minutes ago, panther420 said:

They should be pretty close to each other so yeah, I'll probably use DAC cables. Are they any cheaper out of curiosity?

Usually. But if you're making a long run it will sometimes be cheaper by the foot to run fiber. The nice thing about DAC cables is their simplicity and durability.


9 minutes ago, leadeater said:

Can be, mainly it's just safer to use DAC if you can as the cables are far stronger.

 

6 minutes ago, Razor02097 said:

Usually. But if you're making a long run it will be sometimes be cheaper by the foot to run fiber. The nice thing about using DAC cables is their simplicity and durability.

Ok. DAC it is!


If you want cheap DAC cables, try fs.com for either DACs or SFPs. Those products work great and are cheap! There are all kinds of brand-compatible options for Cisco, Dell, HP, etc.
