road hazard

Member
  • Content Count: 34
  • Joined
  • Last visited

Awards

This user doesn't have any awards

About road hazard

  • Title
    Member


  1. Thank you all for the recommendations! Went with the Seasonic M12II and some splitters. @Sat1600 @mariushm @Juular
  2. Well, call me silly. Just discovered something: after some more testing copying files to/from the RAID 6 array (using a new SSD), it appears the array is capable of writing at 500MB/s (about the max read speed of the new SSD). I did some benchmarking on my backup server and the SSD in there can read at around 550MB/s. So why can I only write to the RAID 6 array over the 10Gbps connection at 300-ish MB/s? I tried setting the MTU to 9000, which got me a slight speed bump, but then transfers started crawling at 70MB/s, so I reverted the change. What can I look into to get more speed over the DAC?
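     For reference, a minimal sketch of how an MTU experiment like this can be applied and verified, assuming the 10Gbps interface is named enp3s0 (a placeholder; check the real name with ip link). Both ends of the DAC must use the same MTU, and the ip command below does not persist across reboots:

        # show the current MTU of the (assumed) 10Gbps interface
        ip link show enp3s0
        # enable jumbo frames on this end; repeat on the other server
        sudo ip link set dev enp3s0 mtu 9000
        # confirm 9000-byte frames pass end to end without fragmentation
        # (8972-byte payload + 28 bytes of ICMP/IP headers = 9000)
        ping -M do -s 8972 192.168.10.1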
  3. Quick update. Did some iperf testing this morning and here are the results. (I'm no iperf expert, so some of this may be redundant, but I'll include the output from both the server and the client.)

     Server:
     iperf -s -B 192.168.10.1
     ------------------------------------------------------------
     Server listening on TCP port 5001
     Binding to local address 192.168.10.1
     TCP window size: 85.3 KByte (default)
     ------------------------------------------------------------
     [ 4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 48749
     [ ID] Interval       Transfer     Bandwidth
     [ 4]  0.0-10.0 sec   10.9 GBytes  9.39 Gbits/sec
     [ 4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 48767
     ------------------------------------------------------------
     Client connecting to 192.168.10.2, TCP port 5001
     Binding to local address 192.168.10.1
     TCP window size: 2.68 MByte (default)
     ------------------------------------------------------------
     [ 6] local 192.168.10.1 port 36691 connected with 192.168.10.2 port 5001
     [ 6]  0.0-10.0 sec   9.97 GBytes  8.56 Gbits/sec
     [ 4]  0.0-10.0 sec   8.62 GBytes  7.39 Gbits/sec
     [ 4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 57933
     [ 4]  0.0-10.0 sec   10.9 GBytes  9.39 Gbits/sec
     ------------------------------------------------------------
     Client connecting to 192.168.10.2, TCP port 5001
     Binding to local address 192.168.10.1
     TCP window size: 2.14 MByte (default)
     ------------------------------------------------------------
     [ 4] local 192.168.10.1 port 40939 connected with 192.168.10.2 port 5001
     [ 4]  0.0-10.0 sec   10.9 GBytes  9.33 Gbits/sec
     [ 4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 45261
     [ 4]  0.0-60.0 sec   65.6 GBytes  9.40 Gbits/sec

     From the client:
     iperf -c 192.168.10.1 -B 192.168.10.2
     ------------------------------------------------------------
     Client connecting to 192.168.10.1, TCP port 5001
     Binding to local address 192.168.10.2
     TCP window size: 85.0 KByte (default)
     ------------------------------------------------------------
     [ 3] local 192.168.10.2 port 48749 connected with 192.168.10.1 port 5001
     [ ID] Interval       Transfer     Bandwidth
     [ 3]  0.0-10.0 sec   10.9 GBytes  9.40 Gbits/sec

     iperf -c 192.168.10.1 -B 192.168.10.2 -d
     ------------------------------------------------------------
     Server listening on TCP port 5001
     Binding to local address 192.168.10.2
     TCP window size: 85.3 KByte (default)
     ------------------------------------------------------------
     ------------------------------------------------------------
     Client connecting to 192.168.10.1, TCP port 5001
     Binding to local address 192.168.10.2
     TCP window size: 1.96 MByte (default)
     ------------------------------------------------------------
     [ 5] local 192.168.10.2 port 48767 connected with 192.168.10.1 port 5001
     [ 4] local 192.168.10.2 port 5001 connected with 192.168.10.1 port 36691
     [ ID] Interval       Transfer     Bandwidth
     [ 5]  0.0-10.0 sec   8.62 GBytes  7.40 Gbits/sec
     [ 4]  0.0-10.0 sec   9.97 GBytes  8.56 Gbits/sec

     iperf -c 192.168.10.1 -B 192.168.10.2 -r
     ------------------------------------------------------------
     Server listening on TCP port 5001
     Binding to local address 192.168.10.2
     TCP window size: 85.3 KByte (default)
     ------------------------------------------------------------
     ------------------------------------------------------------
     Client connecting to 192.168.10.1, TCP port 5001
     Binding to local address 192.168.10.2
     TCP window size: 2.73 MByte (default)
     ------------------------------------------------------------
     [ 5] local 192.168.10.2 port 57933 connected with 192.168.10.1 port 5001
     [ ID] Interval       Transfer     Bandwidth
     [ 5]  0.0-10.0 sec   10.9 GBytes  9.40 Gbits/sec
     [ 4] local 192.168.10.2 port 5001 connected with 192.168.10.1 port 40939
     [ 4]  0.0-10.0 sec   10.9 GBytes  9.32 Gbits/sec

     iperf -c 192.168.10.1 -B 192.168.10.2 -t 60
     ------------------------------------------------------------
     Client connecting to 192.168.10.1, TCP port 5001
     Binding to local address 192.168.10.2
     TCP window size: 85.0 KByte (default)
     ------------------------------------------------------------
     [ 3] local 192.168.10.2 port 45261 connected with 192.168.10.1 port 5001
     [ ID] Interval       Transfer     Bandwidth
     [ 3]  0.0-60.0 sec   65.6 GBytes  9.40 Gbits/sec

     Also including 'during' and 'after' screenshots from System Monitor. (You'll see some CPU spiking in the 'after' picture. I don't think that was the 10Gbps traffic, which only used a tiny bit of CPU; this server also runs Plex and Emby, and Emby was doing something that I think caused the random spikes toward the end.) Long story short, it looks like I'm getting the promised speed. Unfortunately, my I/O subsystem can only write to my RAID 6 array at around 300MB/s sustained. It will spike to 500MB/s for smallish files, but when copying 15-20GB files it tops out around 300+MB/s. Because of that, I don't see a need to tweak or fine-tune anything, since I can't write any faster anyway. lol Although maybe I'll look into using an SSD to cache things? But I don't think MDADM supports caching. Oh well, it still beats 110MB/s over 1 gig. I didn't use a switch, just a passive DAC cable between the servers, and I edited their hosts files to point to each other. Both servers' 10Gbps NICs use static IPs and subnet masks (no DNS or gateway entries), and they can both still get out to the internet over their 1Gbps NICs, but when they talk to each other, all traffic flows at 10Gbps! Well, maybe 3Gbps. hahaha The cards were $40 each and $30 for the DAC cable.
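     To separate the array's sustained write speed from the network, a rough local benchmark can be run directly on the server. This is a minimal sketch assuming the RAID 6 array is mounted at /mnt/md0 (a placeholder path); oflag=direct bypasses the page cache so the result reflects the disks rather than RAM:

        # write a 20GB test file straight to the array, bypassing the page cache
        dd if=/dev/zero of=/mnt/md0/ddtest.bin bs=1M count=20480 oflag=direct status=progress
        # clean up the test file afterwards
        rm /mnt/md0/ddtest.bin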
  4. Building a backup media server. Every other piece is already sitting here or on its way; I just can't find a quality power supply with 6 molex connectors. I bought a Rosewill 4312, and its fan wall and HDD cages are powered by 6 molex plugs. Since this power supply will be running three 120mm fans AND about ten 3.5" HDDs, I'm not too keen on using splitter cables, unless you experts think that's nothing to worry about. I know the 120mm fans won't stress the splitters; I'm just worried about the drives. Plus, for as wide as I need things to spread out, I'm afraid I'd have to use 2 splitters in a row. Or, if splitters are OK, I guess I just need a PSU with 3 molex connectors (each on its own cable) so I can use 1 splitter per molex. This server will be on pretty much 24x7, so I need a quality power supply. For the budget, I'd like to stay around $100 (under $100 if I can). My current Super Micro server (with 20 HDDs, an i7-7700K CPU, and 16GB RAM) only draws around 250-350W, so I think a low-end i3 with 24GB RAM and only 10 HDDs should be perfectly happy with a 500W PSU. Thanks
  5. Will let you know! I have some Mellanox cards and a DAC cable arriving in a few days. I decided to skip the switch and just connect the two servers via DAC, leaving their normal 1Gbps NICs for internet/LAN traffic. I think I mentioned somewhere else that my RAID 6 (MDADM) array maxes out around 300-500MB/s, and a 10Gbps connection is theoretically capable of about 1,250MB/s. So if I can get at least half the max of a 10Gbps connection (which I'm sure won't be a problem), I'll be a happy camper! Heck, at that point my I/O subsystem will be the bottleneck, and I'm fine with that. For now.
  6. The Dell 5524 (if I end up needing a switch) is around $80-90 on eBay. The Mellanox cards are about $40 each. Your cable price is cheap, but unless I'm missing something, the switch and NICs you reference make that route more expensive.
  7. I think I just need a few more things answered before I buy everything. I'm probably going to go with Mellanox MNPA19-XTR ConnectX-2 cards. As long as I can hit about 300-500MB/s (the max speed of my RAID 6 array), I'm good. Now, I know I can install the 10Gbps cards in my main server and backup server, join them with a DAC cable so they can talk to each other, and use their 1Gbps NICs to reach the switch, the internet, and the other devices on the network (thus avoiding the need for a 10Gbps switch). But let's say I can't figure out the routing issues and just want to hook the 10Gbps NICs into the Dell 5524 along with all my other devices: what kind of cable goes from the 10Gbps NICs to the switch? At that point, do I buy four fiber optic SFP+ transceivers (for the switch and the 10Gbps NICs) plus a few feet of fiber optic cable, or do I just buy TWO DAC cables and hook each 10Gbps NIC into the switch that way?
  8. I thought about the headless route, but since I'm still a bit of a newbie with Linux, I figured it would be better to have a nice, fancy GUI for things. Thanks for the tutorial! Once I get the 10Gbps NICs, I'll give it a shot. If all else fails, I'll give in and just buy a 10Gbps switch, probably a Dell PowerConnect 5524 off eBay.
  9. Ok, thanks! Yeah, I can't really see anything horrible about them review-wise (other than the lack of cache or a BBU), but those are 2 things I'm not worried about, so I guess I'll go ahead and grab one.
  10. Not to sound dumb, but I'm dumb when it comes to networking. That's great news that I can maybe pull this off without a 10Gbps switch. If you can break down what I need to do as if you're talking to a 5 year old, I'd greatly appreciate it! In the end, server A and server B will each have a 1Gbps and a 10Gbps NIC. I need both to be able to get out to the internet over their 1Gbps NICs (easy enough: just plug the 1Gbps NICs into the switch), but when they talk to each other, I need all communication to go over the 10Gbps NICs.
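     A minimal sketch of one way to do exactly that, reusing the 192.168.10.x addresses that appear elsewhere in this thread and assuming the 10Gbps NICs are named enp1s0 on both machines (interface names and hostnames below are placeholders, and the ip commands are not persistent across reboots; the same settings can be entered in each distro's network manager instead):

        # on server A: give the 10Gbps NIC a static address with no gateway,
        # so only 192.168.10.x traffic uses it
        sudo ip addr add 192.168.10.1/24 dev enp1s0
        sudo ip link set enp1s0 up
        # on server B: same idea, different address
        sudo ip addr add 192.168.10.2/24 dev enp1s0
        sudo ip link set enp1s0 up
        # on each server, point the other machine's hostname at its 10Gbps address
        echo "192.168.10.2 backupserver" | sudo tee -a /etc/hosts   # run on server A
        echo "192.168.10.1 mainserver"   | sudo tee -a /etc/hosts   # run on server B

     Because neither 10Gbps NIC gets a default gateway, internet traffic keeps flowing over the 1Gbps NICs, while anything addressed to 192.168.10.x (or to the hostnames above) goes over the DAC.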
  11. If you can think of a better RAID 1 card for Linux, I'm all ears! I'll probably buy it on Cyber Monday, so I've got a few more days of research left if a better card pops up.
  12. Yup, RAID is not backup. The reason I want to use RAID 1 for my boot drive is that when my main drive failed, I tried to use Timeshift to restore onto another drive and Timeshift failed me. So: RAID 1 for the boot drive, and Timeshift just to have copies of all the data as a fallback.
  13. I have a media server running Mint 19.2 that is always on and has a bunch of 4TB drives in an MDADM RAID 6 array (md0). In that same server, I have a bunch of 8TB drives in an MDADM RAID 5 array (mdbackup) that the RAID 6 array is backed up to. I want to move the backup array to a separate server in case of some major hardware failure (water damage, falling off the table, etc.), because right now, if a pipe leaks onto my main server and fries the backplane or something, I'd lose EVERYTHING. I want to physically separate the arrays, so I'm in the process of piecing together another server. The amount of data I have is in the 35TB range. A full restore over a 1Gbps network would take days and days; over 10Gbps it would only take hours. My MDADM array can write at around 500MB/s, so a 10Gbps network would be perfectly fine. BUT, to keep cost low, could I get away with just buying some 10Gbps network cards and using a DAC connection between them (eliminating the need for a 10Gbps SFP+ switch)? The only 2 PCs on my network that need 10Gbps speed are the main media server and the backup one when they're talking to each other, but I'll still need both to be able to route out to the internet on their 1Gbps NICs, which will plug into my 10/100/1000 switch. Problem is, I don't know how to accomplish any of that. Is this easily doable, or should I just pick up a switch that has 16 or 24 10/100/1000 ports plus two SFP+ ports?
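     If it helps, a quick way to sanity-check that a setup like this really sends server-to-server traffic over the 10Gbps link rather than the 1Gbps LAN (addresses and the interface name reused from the sketch above, which are assumptions):

        # show which interface the kernel would use to reach the other server
        ip route get 192.168.10.2
        # force a ping out of the 10Gbps interface to confirm the direct path works
        ping -I enp1s0 192.168.10.2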
  14. As in my other reply, there just doesn't seem to be a simple, easy-to-follow, foolproof guide (at least I haven't found one yet) that walks you through setting up an MDADM boot array during the installation of a non-server Linux distro. If I get some time, I'll look into ways to convert a ZFS (or BTRFS) boot drive into a mirror setup. I'm a tiny bit reluctant to use ZFS or BTRFS due to bad experiences with both many, many months ago.
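     For the record, a rough sketch of what the single-disk-to-mirror conversion looks like on each filesystem, with placeholder pool and device names; on a boot drive the bootloader still has to be installed on the second disk separately, which is the part no desktop installer handles:

        # ZFS: attach a second device to an existing single-disk pool to form a mirror
        sudo zpool attach rpool /dev/sda2 /dev/sdb2
        # Btrfs: add a second device, then rebalance data and metadata into RAID 1
        sudo btrfs device add /dev/sdb2 /
        sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /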
  15. Can you recommend an exact LSI model I should look for? What are your thoughts on the LSI 9240-8i for RAID 1? Trust me, I'd love to use MDADM on my boot drive, but have you ever tried to set that up with a desktop Linux distro? It's a nightmare, and I've never had any success with it. Just using a hardware RAID card for mirroring the boot drives looks about 10,000 times easier. I use an MDADM RAID 6 array for my data drives and am very familiar with how awesome it is, but setting it up on boot drives has been amazingly difficult for me. If you know of a guide for using MDADM on boot drives (that doesn't involve 347 commands) and works perfectly with non-server distros, I'd love to see it. I've found a few and none have worked for me. I have a few weeks before all my gear arrives to build this backup server, so I'm going to keep experimenting with getting MDADM to work, but it just seems like dropping in a hardware RAID card would make my life easier.
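     For comparison, the core of an MDADM mirror itself is only a few commands (device names below are placeholders); what makes a boot array painful is everything around it, i.e. getting GRUB and the initramfs onto both members, which is the part desktop distros don't walk you through:

        # create a two-disk RAID 1 array from two partitions (placeholder devices)
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        # put a filesystem on it and record the array so it assembles at boot
        sudo mkfs.ext4 /dev/md0
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u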