charlesshoults

Member
  • Posts

    2

charlesshoults's Achievements

  1. It sounds like a purpose-built switch is the better way to go. Right now I have two D-Link DGS-1210-28 switches; each has four 1G SFP ports, currently unused. I suppose those could allow for low-latency links between the switches, but I have them serving different subnets. I like them because they're fairly compact, wall-mountable and virtually silent. Once I have my second unRaid server ready to go, perhaps I'll replace one of those two switches with a model that offers 10G.
  2. This summer, I started moving as much data as possible from a 2-bay NAS to an unRaid server built from a repurposed MSI X99S Gaming 7 running an i7-5930K, with two cache drives, two parity drives and five data drives. Once that was done, I started planning a backup of that system; for now, the backup machine runs on an ECS H55H-I motherboard with seven 3.5" drives. My plan is to keep using WD SmartWare for incremental backups to my little NAS and use it for nothing else. Nightly, that data will be synchronized to my primary unRaid server, and I'll work out some kind of schedule to sync it to the other, along with integrity checks, so that if file corruption takes place I have a window to copy the data back from a known good copy (a minimal checksum sketch follows this list). A third copy will go to an array of SSDs kept in a safe deposit box. Cloud storage, at the 5TB or more I'd want, is kind of expensive, and I don't really trust the privacy of it all.

     After I started moving terabytes of data from my old NAS into unRaid, I decided to pick up a few Mellanox 10G fiber cards to make transfers between systems a little faster. On all of my desktop PCs, and anywhere it's financially practical, I've done away with rotational drives and run entirely from SSDs; my main desktop has one PCIe NVMe drive and one SATA drive. I'm not bound by drive speed so much as by the limitations of the interface and, to some degree, of Windows 10.

     My desktop PC has three network interfaces in use: one connecting to a segregated array of Raspberry Pis (behind pfSense), one connected peer-to-peer to my unRaid server, and one connected to my router to communicate with everything else. I want to streamline things, so I'm planning to build a new pfSense router using a motherboard/CPU combination robust enough to operate three 2-port Mellanox cards, each running on 8 PCIe lanes. The most likely prospect at this point is an i7-5820K on an ASRock X99 Professional (LGA 2011-3). For that setup, I'm looking to use all 28 PCIe lanes (3 x 8 for fiber + 4 for NVMe), with one onboard interface for WAN and the other feeding a 24-port D-Link switch.

     I have two questions. First, have any of you run pfSense over fiber before, and if so, can it be made to run stably? Second, if all the machine does is pfSense, what power supply rating would you expect to be adequate for this kind of build? I don't know the power requirements of the Mellanox cards (a rough power budget is sketched below). I'll set it up for 10G initially but will likely move to 40G within a year. If possible, I'd like to drive the thing with a Corsair SF600 or an SF450. This will be a wall-mounted server, and the only wires should be power and network.
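For the integrity checks mentioned in the second post, here is a minimal sketch of one workable approach: build a SHA-256 manifest of the known-good copy after each sync, then verify the other copies against that manifest later, before corruption can propagate to the remaining tiers. The script and its paths are hypothetical stand-ins, not a prescribed tool.

```python
#!/usr/bin/env python3
"""Minimal integrity-check sketch: build and verify a SHA-256 manifest.

Usage (hypothetical paths):
    python3 check.py build  /mnt/user/backup manifest.json
    python3 check.py verify /mnt/user/backup manifest.json
"""
import hashlib
import json
import sys
from pathlib import Path


def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large media files don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()


def build(root: Path, manifest: Path) -> None:
    """Record a digest for every file under root, keyed by relative path."""
    digests = {str(p.relative_to(root)): sha256_of(p)
               for p in sorted(root.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(digests, indent=2))


def verify(root: Path, manifest: Path) -> int:
    """Report files that vanished or changed since the manifest was built."""
    digests = json.loads(manifest.read_text())
    bad = 0
    for rel, expected in digests.items():
        p = root / rel
        if not p.is_file():
            print(f"MISSING  {rel}")
            bad += 1
        elif sha256_of(p) != expected:
            print(f"CHANGED  {rel}")
            bad += 1
    return bad


if __name__ == "__main__":
    mode, root, manifest = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
    if mode == "build":
        build(root, manifest)
    elif mode == "verify":
        sys.exit(1 if verify(root, manifest) else 0)
```

Run "build" against the copy you trust right after a sync completes, and schedule "verify" runs on the replicas; a nonzero exit code is the signal to restore the flagged files from the known good copy.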
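On the power supply question, a back-of-the-envelope budget suggests either SFX unit has headroom on paper. Every wattage figure below is an assumption (published TDPs and typical draws; dual-port Mellanox ConnectX-class cards are usually quoted around 10 W without optics), not a measurement, so substitute real numbers from the actual hardware.

```python
# Back-of-the-envelope power budget for the proposed pfSense box.
# All figures are assumed TDPs / typical draws, not measurements.
parts = {
    "i7-5820K (140 W TDP)":              140,
    "X99 board + RAM (est.)":             40,
    "3x Mellanox 2-port NICs (~10 W ea)": 30,
    "NVMe boot drive (est.)":              8,
    "Fans / misc (est.)":                 15,
}
total = sum(parts.values())
print(f"Estimated peak draw: {total} W")    # ~233 W
print(f"Load on SF450: {total / 450:.0%}")  # ~52%
print(f"Load on SF600: {total / 600:.0%}")  # ~39%
```

Since a routing workload rarely holds the CPU anywhere near its TDP, average draw should sit well below that peak estimate, which favors the smaller SF450 if these assumptions hold.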