Everything posted by road hazard

  1. I decided on option 2 and got the 826 chassis, but it has 3 fans behind the backplane and each one has a 4-pin connector. The ATX board I'm installing in there only has one 4-pin header free. I think my backplane is a SAS826A, and I noticed it has three 4-pin connectors labeled I2C across the back. Will the fans work if I power them off those?
  2. THANKFULLY, this wasn't my main server, but it is the secondary server I back the main one up to, and it needs to be fixed ASAP. It's a 4U Rosewill server (RSV-L4412U) with a Gigabyte ATX board and a Seasonic power supply. It had been humming along for years with zero problems, and then today the power supply suffered a massive failure. When I shake the PSU, something is loose inside. The UPS flipped out, there was a LOUD pop, etc., etc.
     I put a new PSU in the system and turned it back on to see what was damaged. While Debian 11 is booting, I see this at the top of the screen for a while: 'mpt2sas_cm1: overriding NVDATA EEDPTagMode setting' ... but eventually the OS loads and I'm at the desktop. Looking in the 'Disks' app, I see that my mdadm array is offline, which is expected, because of my eleven 8TB drives, only 8 are showing up.
     I have two LSI 9207-8i HBAs installed. Card 1 has two SFF cables going to it (controlling a total of 8 drives) and card 2 has one SFF cable (controlling 3 drives). At this point you're probably thinking, "well duh, the card with the 3 drives is the culprit," but that's not so: of the 3 drives card 2 controls, 2 show up, and on card 1, 2 of its 8 drives aren't showing up. I pulled the drives from their trays and checked them individually in a USB dock on another system, and the other Linux box could see that they're all part of an array (rough commands at the end of this post), so I'm fairly confident (crossing my fingers) that the data on them is intact.
     After some troubleshooting and sitting down to think, I don't believe the motherboard or HBAs are damaged. It's looking more and more like the backplane in the Rosewill is at fault. I think this because, as mentioned above, one of the HBAs can see 2 of its 3 drives; if the PCI slots were damaged, I don't think either HBA would work, or that either HBA would see ANY of its attached drives. Sure, I could do more swapping of cards and drives, but I don't want to spend any more time on it and I need to get the backup system running fast. I'd appreciate your thoughts on all that, but my big question is this: what's the best way out of this mess?
     Option 1. $360 - Buy a new Rosewill RSV-L4412U and a new power supply and put everything back together.
     Option 2. $219 + $80 + $100 = $400 - A SuperMicro CSE-826, plus 3 quiet replacement fans (FAN-0104L4), plus $100 or so for the quiet version of the power supply, because stock SuperMicro power supplies are... noisy. (Any problems with 8TB SATA drives on the SAS826A backplane?)
     Option 3. $600 - Buy a CSE-847 (BPN-SAS3-846EL1 backplane). I'm considering this route because I currently have 20-ish 4TB drives in my main server and was thinking about expanding. I like 4TB drives to keep resilver times down, so I could move my existing motherboard into this new unit and put the guts from the Rosewill case into my existing SC-846 chassis. BUT the CPU cooler in my 24-bay SC-846 won't fit in the SC-847, so I'd need a low-profile cooler for my i7-7700K. Not a huge deal, just mo' money.
     So there you have it... I would greatly appreciate any and all comments!
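     In case it helps anyone following along, this is roughly the sort of thing I ran on the other Linux box to confirm the pulled drives still carry their array metadata (device names here are just examples, not my actual layout):

         # list what the kernel currently sees
         lsblk -o NAME,SIZE,MODEL,SERIAL

         # read-only check of the mdadm superblock on a pulled drive
         sudo mdadm --examine /dev/sdb

         # once all members are visible again, a normal assemble attempt
         sudo mdadm --assemble --scan
         cat /proc/mdstat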
  3. I bought them directly from their website, and I have some good news to report. After a few emails, they agreed to process an RMA for the PC that won't boot. I had to pay a $99 restocking fee (due to its age and them not having any units with that particular CPU). Depending on how smoothly this transaction goes, I'll work on getting the other two replaced.
  4. I bought 3 of their PCs. PC 1 refuses to boot after 1 year and 8 months of perfect service. PC 2 successfully boots maybe 1 time in 50; otherwise you get stuck in a BIOS setup/boot loop, and it has horrible coil whine. I got about 1 year and 4 months of perfect service from it before the boot-loop problem started. The coil whine started a few months after it arrived, but due to where it sits it wasn't a huge deal at the time, which led me to buying PC 3, purchased as a replacement for PC 2 (because of the horrible coil whine), and wouldn't you know it, this one developed coil whine as well. I've tried replacing the RAM and the SSD and resetting the CMOS; nothing helped. (I know coil whine is a hardware problem, but I did all this to satisfy their useless tech support.) Sure, their warranty states 2 years, but they told me to pound sand via email and said I was outside the warranty period, despite these systems all being less than 2 years old. If you're thinking about buying one of their systems, DON'T DO IT! Spend the extra money on an Intel NUC 11/12 with the integrated Nvidia GPU! I doubt Chase and PayPal will be able to help, especially since it's a foreign company, so it looks like I'll be dropping these pieces of junk off at an e-waste recycler. Almost $2,000 down the toilet. ;( Screw you, Minisforum!
  5. I got the flash itself working, but no luck otherwise. Tried both the F22a (beta) and the final F22 versions and no change: still locked out of the BIOS settings, and no video until Windows loads.
  6. I tried all the video ports on the motherboard (both HDMI ports and the DisplayPort) and still no POST screen. The board refuses to output video until Windows loads the desktop.
  7. I was using a USB 3.0 thumb drive and it was a no-go. I switched to a USB 2.0 drive and it worked. Well... it re-flashed, and upon reboot, still no POST screen. Just blackness, and then up pops the Windows desktop.
  8. I copied the F22 BIOS to a FAT16 thumb drive and followed the manual on which port to plug it into and when to press the Q-Flash Plus button, but my board isn't doing what the manual says it should when I attempt to flash it that way. No USB activity and no little LED when I press the Q-Flash Plus button. Going to try a post I found about booting from a backup BIOS (I'm not even sure my board has a dual-BIOS option). If this fails, I'm probably just going to order a new system board.
     UPDATE: Struck out trying to get the system to boot from a backup BIOS chip. I think I'm going to throw in the towel, order an ASUS board, and never buy another Gigabyte product again. I'm not getting any video from either HDMI port, the DisplayPort, or even a friend's RTX 2070 until after the Windows desktop loads. I'm permanently locked out of the BIOS, apparently.
  9. Totally sorry about the text formatting .... I think I fixed it. Ok, I'll try reinstalling F22. As for clearing the CMOS, I disconnected the red/black cable that goes from the battery to the motherboard. I found a new mini-ITX board on Amazon and I have 9 hours to place my order for next day delivery. lol ..... so if I can't get this working between now and then, I'm going to fix it by replacing the board.
  10. My system was running Windows 10 perfectly, and every time I booted my PC I saw the POST screen. I was getting ready to upgrade to Windows 11, so I thought I'd apply any BIOS updates first. BIG MISTAKE! I downloaded the F22 BIOS to a thumb drive and restarted, entered the BIOS, selected Q-Flash, browsed to my F22 BIOS file, and updated; the PC did a countdown and restarted. While it was booting back up, my monitor kept saying 'no signal, check cable'. I sat there thinking, "weird, I didn't see the POST screen like I always used to," and was getting ready to pull the plug on the power supply, figuring my PC was hung, when up popped the Windows 10 desktop. I restarted and got the same thing: no POST screen, no video output, and then the desktop appears.
     I didn't think anything of it, plugged in my Windows 11 thumb drive, and went to reboot so I could select it as the boot device to install Win 11, but no matter what I do, I never get a POST screen. Just 'no signal, check cable' and then the Windows 10 desktop. I restarted again while holding the DEL key: no change. I tried clearing the CMOS by shorting the two pins and powering the system back up: same problem. Then I took the board out of the case, removed the plastic shroud covering the CMOS battery (awesome engineering choice, Gigabyte! /s), disconnected the cable, let it sit for about 5-10 minutes, and put everything back together: no change. My monitor still shows 'no signal, check cable' for about 15-20 seconds, then the Windows 10 desktop pops up. And it looks like Gigabyte made it impossible to revert BIOS versions after upgrading to F22, so now I'm completely locked out of the BIOS.
     HOW IN THE WORLD do I fix this? I tried booting Windows 10 into the troubleshooting mode that lets you restart into the UEFI firmware settings (the command-line way to request the same thing is noted below), but when I click 'Restart' the PC reboots and I just get 'no signal, check cable' on my monitor. Even if I could access the BIOS settings from within Windows and select my thumb drive as the boot device, I don't think it would help, since I can't get any video output until the Windows desktop has loaded. BTW, I'm not using a video card, just the integrated graphics. I made sure the Fast Boot stuff was turned off in Windows, and it was. I also installed Gigabyte's Fast Boot utility; when I launch it, it says Fast Boot is disabled. I clicked the 'Enter BIOS now' button in that utility and, after a reboot, got a black screen forever until I physically reset the machine.
     Gigabyte's @BIOS utility is garbage. I installed the latest version from their site, and when it runs I get an error about needing to apply the latest version, with only an 'OK' button, and then it exits. If Gigabyte ever fixes this complete screw-up and releases an F23 BIOS update, I'll have to use the push-button Q-Flash Plus method, and I have zero hope of that working properly. Rolling back to an older BIOS is not an option, since F22 fixes some super-duper critical vulnerability and implements a new 'capsule' mode that prevents reverting... and apparently also prevents you from ever accessing the BIOS settings. I've never been so angry with a board manufacturer in my life. If I can't get this fixed soon, I'm going to buy a new board from anyone other than GB and avoid them like the plague going forward.
     I tried installing Windows 11 to a different SSD and putting that into the Gigabyte system, but I'm guessing it was sitting at some BIOS error screen where I needed to select the new SSD as the primary boot device; since I can't see anything, I put the old drive back in and Windows 10 loads fine. I have a ticket open with GB support, but I'm not holding out much hope there.
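     For reference, the 'restart into UEFI firmware settings' request can also be made with the stock Windows shutdown tool from an elevated command prompt. It didn't help here, since the board never outputs video that early, but noting it for anyone else who lands in this thread:

         :: ask Windows to reboot straight into the firmware setup UI (run as admin)
         shutdown /r /fw /t 0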
  11. @SupaKomputa @Archer42 Thanks for your replies! To close the loop on this, I installed a 500GB Samsung 970 EVO Plus (M.2 NVMe, model # MZ-V7S500B/AM), added it to my fstab (Debian 10.x), and switched NZBGet over to use it for everything (download, intermediate, tmp); rough config sketch below. I started grabbing some 100+ GB files from Usenet, and with downloading, repairing, unpacking, and copying all hitting the same drive, combined I/O bandwidth (read + write) was about 1,500-1,700 MB/s: NOTICEABLY faster than when I was using my 2.5" SATA SSD and the mdadm RAID 6 array. All in all, completely happy with the purchase! An internal case fan keeps the 10Gbps NIC and HBA cool, and since the M.2 sits between those slots it gets a nice, steady breeze pushed down onto it. Been using it for a few hours now and so far so good!
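     The config boils down to something like this; the mount point, UUID placeholder, and directory names here are examples rather than my exact paths:

         # /etc/fstab -- mount the NVMe drive for NZBGet scratch space
         UUID=<nvme-partition-uuid>  /mnt/nvme  ext4  defaults,noatime  0  2

         # nzbget.conf -- point the heavy-I/O directories at the NVMe mount
         DestDir=/mnt/nvme/complete
         InterDir=/mnt/nvme/intermediate
         TempDir=/mnt/nvme/tmp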
  12. I know what M.2 SSDs are, but this will be the first time I've ever used one.
     Motherboard: Gigabyte GA-Z170X-UD5. PCIEX16 slot: HBA installed. PCIEX8 slot: 10Gbps network card installed. PCIEX4 slot: empty.
     Looking in my manual, it says: 'The PCIEX4 slot shares bandwidth with the M2H_32G connector. The PCIEX4 slot will become unavailable when an SSD is installed in the M2H_32G connector.' Since that slot is empty, no problem so far. My board has two 2.5" SSDs attached to SATA3_0 and SATA3_1, and the chart in the manual has check marks indicating that an M.2 PCIe x4 SSD in the M2H_32G connector is supported and won't disable SATA3_0 or SATA3_1 (which is good, since I need those). I plan on buying a PCI Express 3.0 x4 (NVMe) SSD and slapping it into the M2H_32G slot. Am I good to go, and is my understanding correct?
     General usage question: my media server downloads a lot from Usenet. Right now NZBGet is configured to use my mdadm RAID 6 array to hold the interim download parts, and it's also where the files are unpacked and copied to. The array can sustain reads/writes of about 400-500 MB/s, but the files I download are in the 100 GB range, which makes the array cry, and the download/repair/unpack process D_R_A_G_S! I switched some of these tasks over to my boot SSD, and while that sped things up a bit, it still kind of dragged. So I was looking at an M.2 SSD to tap into those 3,500 MB/s read/write speeds and make things go way quicker. I realize an M.2 might not have the longevity of a traditional 2.5" SSD, but if it ever dies on me mid-download, I won't cry a river about it. So, will an M.2 in this scenario deliver me to the promised land of faster NZBGet operations? (Quick benchmark sketch below.)
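     For my own reference, a quick way to compare sequential throughput of the array versus the M.2 once it's in; fio needs to be installed, and the file paths are just examples:

         # sequential write test -- run once against the array, once against the M.2 mount
         fio --name=seqwrite --filename=/mnt/array/fio.test --rw=write \
             --bs=1M --size=8G --direct=1 --group_reporting

         rm /mnt/array/fio.test   # clean up the test file afterwards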
  13. I just finished another test. I pulled the Linux boot SSD from server A, replaced it with another SSD (rated for over 500 MB/s read/write), installed Windows 10 on it, and left the 10Gbps NIC in. I pulled the second 10Gbps NIC out of the other Linux Mint box and put it into my Windows 10 PC (which also has an SSD capable of 500+ MB/s read/write). I copied about 50 GB of data over the 10Gbps connection (Windows 10 on both PCs) and... I was maxing out my SSD at about 480-500 MB/s! This was with default Windows 10 settings (1500 MTU, no tweaking). So it looks like my 10Gbps cards and DAC cable are 100% fine; it's a software tuning problem on the Linux side that needs figuring out (starting points below).
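     The usual first knobs people point at for 10GbE on Linux are the TCP buffer limits, so that's where I'll start poking. These are example values from common tuning guides, not something I've validated on my boxes yet:

         # /etc/sysctl.d/90-10gbe.conf -- allow larger TCP windows for the 10Gbps link
         net.core.rmem_max = 67108864
         net.core.wmem_max = 67108864
         net.ipv4.tcp_rmem = 4096 87380 67108864
         net.ipv4.tcp_wmem = 4096 65536 67108864

         # load the new settings without rebooting
         sudo sysctl --system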
  14. No worries. I bought the M12II locally at MicroCenter and will have zero problems returning it. I'll see if they have this Corsair model and just swap it out. If not, I'll order that one off Newegg.
  15. Thank you all for the recommendations! Went with the Seasonic M12II and some splitters. @Sat1600 @mariushm @Juular
  16. Well, call me silly. Just discovered something: after some more testing copying files to and from the RAID 6 array (using a new SSD), it appears the array is capable of writing locally at 500 MB/s (about the max read speed of the new SSD). I did some benchmarking on my backup server, and the SSD in there can read at around 550 MB/s. So why can I only write to the RAID 6 array over the 10Gbps connection at 300-ish MB/s? I tried setting the MTU to 9000, which gave me a slight speed bump, but then transfers started crawling at 70 MB/s, so I reverted the change (notes on that below). What can I look into to get more speed over the DAC?
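     One note on the MTU experiment: jumbo frames only behave if both ends of the DAC are set to the same value, so a mismatch would explain the crawl I saw. Roughly (the interface name is an example):

         # run on BOTH servers, then re-test
         sudo ip link set dev enp3s0 mtu 9000
         ip link show enp3s0 | grep -o 'mtu [0-9]*'   # confirm it took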
  17. Quick update. Did some iperf testing this morning; here are the results. (I'm no iperf expert, so some of this may be redundant, but I'll include the text from both the server and the client.)
     From the server:

         iperf -s -B 192.168.10.1
         ------------------------------------------------------------
         Server listening on TCP port 5001
         Binding to local address 192.168.10.1
         TCP window size: 85.3 KByte (default)
         ------------------------------------------------------------
         [  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 48749
         [ ID] Interval       Transfer     Bandwidth
         [  4]  0.0-10.0 sec  10.9 GBytes  9.39 Gbits/sec
         [  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 48767
         ------------------------------------------------------------
         Client connecting to 192.168.10.2, TCP port 5001
         Binding to local address 192.168.10.1
         TCP window size: 2.68 MByte (default)
         ------------------------------------------------------------
         [  6] local 192.168.10.1 port 36691 connected with 192.168.10.2 port 5001
         [  6]  0.0-10.0 sec  9.97 GBytes  8.56 Gbits/sec
         [  4]  0.0-10.0 sec  8.62 GBytes  7.39 Gbits/sec
         [  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 57933
         [  4]  0.0-10.0 sec  10.9 GBytes  9.39 Gbits/sec
         ------------------------------------------------------------
         Client connecting to 192.168.10.2, TCP port 5001
         Binding to local address 192.168.10.1
         TCP window size: 2.14 MByte (default)
         ------------------------------------------------------------
         [  4] local 192.168.10.1 port 40939 connected with 192.168.10.2 port 5001
         [  4]  0.0-10.0 sec  10.9 GBytes  9.33 Gbits/sec
         [  4] local 192.168.10.1 port 5001 connected with 192.168.10.2 port 45261
         [  4]  0.0-60.0 sec  65.6 GBytes  9.40 Gbits/sec

     From the client:

         iperf -c 192.168.10.1 -B 192.168.10.2
         ------------------------------------------------------------
         Client connecting to 192.168.10.1, TCP port 5001
         Binding to local address 192.168.10.2
         TCP window size: 85.0 KByte (default)
         ------------------------------------------------------------
         [  3] local 192.168.10.2 port 48749 connected with 192.168.10.1 port 5001
         [ ID] Interval       Transfer     Bandwidth
         [  3]  0.0-10.0 sec  10.9 GBytes  9.40 Gbits/sec

         iperf -c 192.168.10.1 -B 192.168.10.2 -d
         ------------------------------------------------------------
         Server listening on TCP port 5001
         Binding to local address 192.168.10.2
         TCP window size: 85.3 KByte (default)
         ------------------------------------------------------------
         ------------------------------------------------------------
         Client connecting to 192.168.10.1, TCP port 5001
         Binding to local address 192.168.10.2
         TCP window size: 1.96 MByte (default)
         ------------------------------------------------------------
         [  5] local 192.168.10.2 port 48767 connected with 192.168.10.1 port 5001
         [  4] local 192.168.10.2 port 5001 connected with 192.168.10.1 port 36691
         [ ID] Interval       Transfer     Bandwidth
         [  5]  0.0-10.0 sec  8.62 GBytes  7.40 Gbits/sec
         [  4]  0.0-10.0 sec  9.97 GBytes  8.56 Gbits/sec

         iperf -c 192.168.10.1 -B 192.168.10.2 -r
         ------------------------------------------------------------
         Server listening on TCP port 5001
         Binding to local address 192.168.10.2
         TCP window size: 85.3 KByte (default)
         ------------------------------------------------------------
         ------------------------------------------------------------
         Client connecting to 192.168.10.1, TCP port 5001
         Binding to local address 192.168.10.2
         TCP window size: 2.73 MByte (default)
         ------------------------------------------------------------
         [  5] local 192.168.10.2 port 57933 connected with 192.168.10.1 port 5001
         [ ID] Interval       Transfer     Bandwidth
         [  5]  0.0-10.0 sec  10.9 GBytes  9.40 Gbits/sec
         [  4] local 192.168.10.2 port 5001 connected with 192.168.10.1 port 40939
         [  4]  0.0-10.0 sec  10.9 GBytes  9.32 Gbits/sec

         iperf -c 192.168.10.1 -B 192.168.10.2 -t 60
         ------------------------------------------------------------
         Client connecting to 192.168.10.1, TCP port 5001
         Binding to local address 192.168.10.2
         TCP window size: 85.0 KByte (default)
         ------------------------------------------------------------
         [  3] local 192.168.10.2 port 45261 connected with 192.168.10.1 port 5001
         [ ID] Interval       Transfer     Bandwidth
         [  3]  0.0-60.0 sec  65.6 GBytes  9.40 Gbits/sec

     I'm also including a 'during' and 'after' screenshot from System Monitor. (You'll see some CPU spiking in the 'after' picture. I don't think that was the 10Gbps traffic, which did consume a tiny bit of CPU; this server runs Plex and Emby, and Emby was doing SOMETHING, which I think caused the random CPU spikes near the end.)
     Long story short, it looks like I'm getting the promised speed. Unfortunately, my I/O subsystem can only write to the RAID 6 array at around 300 MB/s sustained. It will spike to 500 MB/s for small-ish files, but when copying 15-20 GB files it tops out around 300+ MB/s. Because of that, I don't see a need to tweak or fine-tune anything, since I can't write any faster. lol. Although... maybe I'll look into using an SSD to cache things?! But I don't think mdadm supports caching. Oh well, it beats 110 MB/s over 1 gig.
     I didn't use a switch; just a passive DAC cable between the servers, and I edited their hosts files to point to each other (config sketch at the end of this post). Both servers' 10Gbps NICs use static IPs and subnet masks (no DNS or gateway entries), and they both get out to the internet over their 1Gbps NICs, but when talking to each other, all traffic flows at 10Gbps! Well, maybe 3Gbps. hahaha. The cards were $40 each and $30 for the DAC cable.
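     For anyone wanting to copy the no-switch setup, it boils down to something like this on each box. The interface name and hostname are examples, the addresses match what I used above, and this assumes Debian-style ifupdown config rather than NetworkManager:

         # /etc/network/interfaces.d/10gbe on server A (mirror it with .2 on server B)
         auto enp3s0
         iface enp3s0 inet static
             address 192.168.10.1
             netmask 255.255.255.0
             # no gateway or DNS on purpose -- internet traffic stays on the 1Gbps NIC

         # /etc/hosts entry on server A so the other box resolves over the fast link
         192.168.10.2    backupserver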
  18. Building a backup media server. I have every other piece already sitting here or on its way; I just can't find a quality power supply with 6 Molex connectors. I bought a Rosewill 4312, and the fan wall/HDD cages are powered by 6 Molex plugs. Since this power supply will be powering three 120mm fans AND about ten 3.5" HDDs, I'm not too keen on using splitter cables, unless you experts think that's nothing to worry about. I know the 120mm fans won't stress the splitters; I'm just worried about the drives. Plus, for as wide as I need things to spread out, I'm afraid I'd have to chain 2 splitters in a row. Or, if splitters are OK, I guess I just need a PSU with 3 Molex connectors (each on its own cable) so I can use one splitter per run. This server will be on pretty much 24x7, so I need a quality power supply. For the budget, I'd like to stay around $100 (under $100 if I can). My current SuperMicro server (20 HDDs, i7-7700K, 16GB RAM) only draws around 250-350W, so I think a low-end i3 with 24GB RAM and only 10 HDDs should be perfectly happy with a 500W PSU. Thanks!
  19. Will let you know! I have some Mellanox cards and a DAC cable arriving in a few days. I decided to skip the switch and just connect the two servers via DAC, leaving their normal 1Gbps NICs for internet/LAN traffic. I think I mentioned somewhere else that my RAID 6 (mdadm) array maxes out around 300-500 MB/s, and a 10Gbps connection is theoretically capable of 1,250 MB/s. So if I can get at least half the max of a 10Gbps connection (which I'm sure won't be a problem), I'll be a happy camper! Heck, at that point my I/O subsystem will be the bottleneck, and I'm fine with that. For now.
  20. The Dell 5524 (if I go the route of needing a switch) is AROUND $80-90 on eBay. The Mellanox cards are about $40 each. Your cable price is cheap but unless I'm missing something, the switch and NICs you reference make that route more expensive.
  21. I think I just need a few more things answered before I buy everything. I'm probably going to go with Mellanox MNPA19-XTR ConnectX-2 cards. As long as I can hit about 300-500 MB/s (the max speed of my RAID 6 array), I'm good. Now, I know I can install the 10Gbps cards in my main server and backup server and join them with a DAC cable (so they can talk to each other), while using their 1Gbps NICs to attach to the switch for internet and the rest of the network, thus avoiding the need for a 10Gbps switch. But let's say I can't figure out the routing issues and just want to hook the 10Gbps NICs into the Dell 5524 along with all my other devices: what kind of cable goes from the 10Gbps NICs to the switch? At that point, do I buy four SFP+ fiber transceivers (for the switch and the two NICs) plus a few feet of fiber cable, or do I just buy TWO DAC cables and hook each 10Gbps NIC into the switch that way?
  22. I thought about the headless route, but since I'm still a bit of a newbie with Linux, I figured it would be better to have a nice, fancy GUI for things. Thanks for the tutorial! Once I get the 10Gbps NICs, I'll give it a shot. If all else fails, I'll give in and just buy a 10Gbps switch, probably a Dell PowerConnect 5524 off eBay.
  23. Ok, thanks! Yeah, can't really see anything horrible about them review wise (other than lack of cache or BBU) but those are 2 things I'm not worried about so I guess I'll go ahead and grab one.