Status Updates posted by Razor Blade
Have I mentioned lately how much I hate Windows 10? This time I've had enough.
I now hate it enough that I've ripped the poor little SSD with Ubuntu installed out of my laptop and put it in my main desktop... I've been pushed far enough that I'm finally switching all my productivity and internet usage to Linux. The way things are going, maybe gaming too.
Probably wouldn't be right to say I won't miss the good times with Windows...and if I said I wouldn't ever use Windows again for anything I would probably be lying... but for my use at home at least...it is time to say farewell...
- Bye Cortana... I'm going to miss finding creative ways to block you at the router level and disable your program without borking the start menu.
- Bye Partition manager... I'm going to miss how you can seemingly never delete a system partition from a freaking flash drive...I have to use diskpart instead.
- Bye Microsoft Edge... you only sat in the task bar really... but I guess it would be rude if I didn't say goodbye.
- Bye games and apps that Microsoft would randomly install taking up precious space on my tiny SSD because I mostly use network drives and shares for important crap...
- Bye iSCSI initiator... I'll still never figure out why you would delete my server's connection entry randomly on reboot...
- Bye Recycle Bin... I'm not going to miss all the times I had to go into the registry and delete the SMB address entries every time I would accidentally move files from a locked share into you instead of deleting them.
- Bye restrictive Microsoft network drivers that decided to disable all network teaming functionality in pro licenses (a feature they used to allow mind you) thereby only making it available to enterprise licenses...jerks!
- Bye Workstation service... I lost several days of sanity over you when you would randomly deny connections to SMB shares...
- Bye Microsoft Content Delivery Manager/service... I'm going to...well I'm not going to miss you at all.
@TopHatProductions115 I've been dealing with Microsoft crap for a long time. I grew up with Windows and DOS as a kid. Linux 15 or so years ago was difficult enough just to download and install; today it is unbelievably easy and the GUI has come a very long way too. I've been integrating Linux more and more into my life through other things, so it makes sense to take the leap.
@Bananasplit_00 No kidding...especially the last statement... though I like my..err Nvidia GPU, for several reasons I've been investing more and more in AMD.
Dell 0U691D module (10Gb SFP+ uplink module that goes in the back of the switch) = $109 + shipping on ebay...
Dell 6224P (24 port PoE managed enterprise switch) INCLUDING the 0U691D module = $105 shipped on ebay...
Well this is a first... old Asus laptop that doesn't POST unless it's lying in pieces on my bench. "Customer" also says this 8 year old Asus laptop is 3 years old. Either they got screwed or their memory is going. Glad I don't do this crap professionally.
2/1/2016 - 2/26/2019
Rest in peace Samsung Note 4... Not even Odin was able to save you
@Imbellis Not able to really tell... may have been hardware failure or could have been an app that hosed something. I bought the phone used so no idea what the history was before I got it. Not to worry though, I transferred to my trusty S4 last night until I either get it repaired or replaced.
When people ask if PC is still relevant for gaming...
Won auction for rack mount layer 3 switch on eBay for $33
10Gb module for said switch... $399.95 + shipping
Not bad for 10+ year old SAS drives in a RAID 10?
After fumbling around with ESXI 6.5 trying to pass through my old GTX950, I finally got it to work. If you're having a similar issue, hopefully this can help you.
First off, a disclaimer... this isn't a typical "fix" like I normally post, and I don't know whether all of this is necessary, so check on your specific GPU first to see if there is a proper way to enable passthrough. The following is more of a quick and dirty bypass for two things: ESXI disabling the checkbox in the hardware menu, and NVIDIA's driver disabling the card if it detects it's running in a VM. There may be a good reason your GPU is prohibited from being passed through other than NVIDIA wanting you to purchase their enterprise cards instead... so... yeah... don't mess around with a computer that has all your data and stuff on it until you know this won't adversely affect function or reliability.
First issue with PCI passthrough you may run into: if your NVIDIA GPU is grayed out and you are unable to toggle passthrough... click this spoiler...
First issue you may encounter is the GPU being grayed out under "Host", "Manage", "Hardware". There is a cheat that lets you enable the checkbox anyway, using the page "inspect" feature in Chrome or other browsers.
This enables the checkbox. Now you can check both the GPU and the audio device checkboxes and click the "Toggle passthrough" button at the top. Now restart your host. You should now see both devices you checked say "Enabled/Active" with the GPU now grayed out and checkbox disabled again.
Either way, once passthrough is enabled you should be able to assign the device to your VM by adding a "PCI device" and selecting the GPU.
Now if you are running Windows with NVIDIA drivers and receive Code 43... click this spoiler...
To fix Code 43 in Windows you will need to add the following parameter to the VM in ESXI...
hypervisor.cpuid.v0 = "FALSE"
Shut down your VM and add the parameter in the window under "Actions", "Edit Settings", "VM options", "Advanced", "Configuration Parameters", "Edit Configuration..."
Then click "Add Parameter" and type it in so it looks like this....
After you add the parameter and click "OK" then "Save" you should be able to boot your VM and your GPU should work as normal.
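For reference, that setting ultimately lands in the VM's .vmx file as a plain key/value line, so you can also verify it took effect there. A minimal sketch; the datastore path and VM name here are made-up examples, so substitute your own (e.g. /vmfs/volumes/datastore1/Win10/Win10.vmx):

```
hypervisor.cpuid.v0 = "FALSE"
```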
I put it off and put it off... Finally I watched The Verge's PC build (a re-upload since the original is gone).....
Finally broke down and bought an Apple today...
Actually I bought a bunch... Honeycrisp apples are freaking delicious...
and no this joke never gets old.
Over the past few weeks I've been planning a mod for my R710 which includes adding another 2.5" drive, but I wanted to make a cable instead of splicing into the OEM cable. There were a few reasons for this.
- The OEM optical drive cable has relatively thin gauge wire
- The OEM cable has a SATA cable molded into it
- The OEM cable only supports slim SATA connections (which would require an adapter)
So I set out to source some parts. Turns out TE Connectivity Micro Mate-N-Lock part #794617-4 connector and part #794610-1 socket pins are just the ticket to connect to the DVD/TBU_PWR port on the R710 motherboard.
I already had some 8 pin Micro Mate-N-Lock connectors laying around so I just snipped one of those connectors down... The above part number is for the appropriate 4 pin.
I also had a crimp style SATA cable from a modular PSU laying around. It was never used so the wires and connections are still in perfect condition.
I removed the 3.3V wire and crimped on the socket pins. I referenced the GP700 cable images found online to get the appropriate pinout. Going by the photos and how the OEM DVD cable is laid out, 5V positive and negative are at the top (closest to the tab) and 12V positive and negative are on the bottom.
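Here's my reading of that pinout, sketched out. Treat this as an assumption, not a verified reference; check it against the GP700 cable photos and a multimeter on the header before powering anything:

```
DVD/TBU_PWR header (4-pin Micro Mate-N-Lock, tab side = top):
  pin 1 (top):    +5V
  pin 2:          GND (5V return)
  pin 3:          GND (12V return)
  pin 4 (bottom): +12V
```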
Here is the completed cable
I'm not able to find much of a source on what the power draw of the Dell TBU is, and I'm not sure what the capability of the DVD/TBU_PWR port is. For testing, I went with the "screw it" route... I used two 3.5" mechanical drives. The server couldn't have cared less... no errors and nothing was even warm. The drives showed up in the BIOS and hummed along happily for the few hours I left them during testing.
I'm going to be running a pair of SSDs, so the power draw should be less than 3W per SSD. Even if the SSDs only load the 5V rail (which they probably will), that is still a bit less than the combined 5V rating printed on the mechanical drives' decals.
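A rough back-of-the-napkin check of the reasoning above. The wattage figures here are assumptions for illustration (actual draw varies by drive; check the decal on yours):

```python
# Rough power-budget check for the DVD/TBU_PWR cable.
# All figures are illustrative assumptions, not measured values.

SSD_COUNT = 2
SSD_MAX_W = 3.0            # assumed worst-case draw per SSD (5V rail)

# A typical 3.5" mechanical drive decal reads something like
# 0.75A @ 5V plus 1.2A @ 12V:
HDD_5V_W = 0.75 * 5        # 3.75W on the 5V rail alone
HDD_12V_W = 1.2 * 12       # 14.4W on the 12V rail

ssd_budget = SSD_COUNT * SSD_MAX_W               # the planned SSD pair
hdd_budget = SSD_COUNT * (HDD_5V_W + HDD_12V_W)  # what the test HDDs could pull

# The two-SSD load is well under what the two test HDDs survived on.
assert ssd_budget < hdd_budget
print(f"SSD pair: {ssd_budget:.1f}W vs HDD test pair: {hdd_budget:.1f}W")
```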
So hopefully this was informative to anyone that may want to make a cable for themselves.
I was wondering why a smoke detector ended up on my front porch with a note.... I'm not going to ask HOW a spider crawled into my smoke detector thereby setting it off randomly... all I know is that despite my best efforts to remove the spider threat...due to circumstances beyond my control, said smoke detector is no longer allowed in the house...ever
What did we learn today?
Sometimes being vague about the root cause of a problem might be a much better solution...on the bright side, I have a smoke detector to put in the shed.
Because just making fun of the Verge's build is too easy...
Lately I've been planning on upgrading the memory in my 11th gen Dell. Turns out it isn't really that simple. Seems like there are more considerations to take into account if you're going to be shelling out the money...otherwise sadness can follow. If you're looking into memory upgrades for your 11th gen Dell server maybe this information can help you...
First off for the 610 and 710 servers, here is a PDF to help you pick out a memory configuration.
(Second link is archived version on forum just in case the PDF disappears from Dell's site)
Archived link: server-pedge-installing-upgrading-memory-11g.pdf
The dual vs quad rank memory TL;DR:
Understand that if you go with quad rank memory you can only populate 2 slots per channel, because the rank limit per channel on these servers is 8.
16GB PC3L 4Rx4 10600R
- only 2 sticks per channel for a total of 8 ranks (populating the 3rd slot in a channel will likely result in a memory configuration error on boot or even a no-boot situation)
16GB PC3L 2Rx4 10600R
- all 3 slots in a channel can be populated for a total of 6 ranks
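The rank limit above boils down to a one-line check. This is just my own illustration of the 8-ranks-per-channel rule, not a Dell tool; `ranks_per_dimm` is 4 for 4Rx4 modules and 2 for 2Rx4:

```python
# Sanity check of the 8-ranks-per-channel limit on 11th gen Dell servers.
RANK_LIMIT_PER_CHANNEL = 8  # per the memory config PDF

def channel_ok(dimms_in_channel: int, ranks_per_dimm: int) -> bool:
    """True if a channel's total ranks stay within the limit."""
    return dimms_in_channel * ranks_per_dimm <= RANK_LIMIT_PER_CHANNEL

print(channel_ok(2, 4))  # two 4Rx4 DIMMs -> 8 ranks: True
print(channel_ok(3, 4))  # three 4Rx4 DIMMs -> 12 ranks: False (config error / no boot)
print(channel_ok(3, 2))  # three 2Rx4 DIMMs -> 6 ranks: True
```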
Memory speed: (source below: section 7 "memory" subsection 3 "speed")
For the 11th gen Dell servers, memory speed depends on how many DIMMs you populate per channel. For example, a single 10600 DIMM per channel will yield the full 1333MHz, two DIMMs per channel will yield 1066MHz, and populating all 3 slots in a channel will drop the speed to 800MHz.
You may think quad rank memory would run at a higher clock speed, right? Nope... turns out it runs at 1066MHz even with only one DIMM per channel populated (see source below).
Now, is it faster though? I'm not sure. Consensus seems to be that more ranks are mostly used to reach higher capacity modules. If that's true it wouldn't matter here, since although the 610 and 710 servers can run quad rank memory, they can't take advantage of DIMMs larger than 16GB anyway. It seems quad rank modules *are* cheaper than dual rank modules (going by listings I've found on eBay), so that could be a consideration... but even if they are cheaper, for the price you could get more density at the same speed using 8GB modules in a memory optimized configuration and take advantage of the triple DIMM configuration as well as the higher overall RAM capacity (96GB vs 64GB). The only upside I can think of is saving a bit of power if you're going for a memory optimized configuration and don't need 96GB... or maybe if someone already has quad rank DIMMs on hand.
A note on UDIMMs: (source below: section 7 "memory" subsection 1 "overview")
The 11th gen servers only support UDIMMs in 2GB modules, up to 12 modules, so you can have at most 24GB of UDIMM memory.
Power savings: (source below: section 7 "memory" subsection 5 "Low Voltage DIMMs")
If you have a 5600 CPU it appears you can use the "low" voltage RAM that operates at 1.35V instead of 1.5V which could yield a power savings...though in a home lab where you may only be using one or two servers the power savings might be negligible (source below:"Low voltage DIMM power savings")...but I guess it is something if you're upgrading anyway. Either way it is nice to know that the support for low voltage DIMMs is there.
If you're looking to upgrade your RAM capacity, it appears the best approach is to figure out how much you want and reach it with the highest density modules and the fewest DIMMs per channel per CPU for best performance.
For example, if you want 48GB of RAM, you could populate two slots on all 3 channels across the dual CPUs with 4GB sticks of 2Rx4 10600R, but your server will run at 1066MHz memory speed, whereas if you populated one slot per channel with 8GB sticks you'd be able to take advantage of the full 1333MHz speed supported by 5600-series Xeons.
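That trade-off can be sketched as a simple lookup. The mapping below is my reading of the Dell guide (1333/1066/800MHz for 1/2/3 DIMMs per channel with 10600 registered memory):

```python
# Memory clock vs DIMMs-per-channel on 11th gen Dell servers (PC3-10600R).
SPEED_MHZ = {1: 1333, 2: 1066, 3: 800}

def config_speed(dimms_per_channel: int) -> int:
    """Resulting memory clock for a given number of DIMMs per channel."""
    return SPEED_MHZ[dimms_per_channel]

# The 48GB example: 12 x 4GB (2 per channel, dual CPU) vs 6 x 8GB (1 per channel)
print(config_speed(2))  # 4GB sticks, 2 DIMMs per channel -> 1066
print(config_speed(1))  # 8GB sticks, 1 DIMM per channel -> 1333
```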
Is it important? Does it matter? Probably not... but since these poor dinosaurs of a bygone age of Westmere goodness are already at a disadvantage, why not squeeze the max performance out of them while they're still worth plugging into the wall?
If you already have the DIMMs you need to reach your RAM capacity, it might not be worth spending a bunch of cash just to get the higher clock speed... or it could be, depending on your use case... but if you're ready to upgrade your RAM, it is probably worth considering upgrading density instead of quantity.
(Second link is archived version on forum just in case the PDF disappears from Dell's site)
Archived link: server-poweredge-r710-tech-guidebook.pdf
Low voltage DIMM power savings:
Dataram's article on 1.35V vs 1.5V DIMMs: http://www.dataram.com/blog/?p=102
Tom's Hardware's testing of HyperX LoVo: https://www.tomshardware.com/reviews/lovo-ddr3-power,2650.html
Might edit this in the future if I have more info to add or info to correct....
Why is RAM still so expensive? Just trying to find DDR3 10600R ECC RAM... people want around USD $4/GB plus shipping for USED sticks on eBay! You can still get brand new Samsung 8GB sticks on Newegg for $28 ($3.50/GB): https://www.newegg.com/Product/Product.aspx?Item=9SIAB0Z5UE1934
I finally found a few 16GB sticks for just over USD $2/GB on eBay, but the seller flaked out, saying there was an inventory error... I figured it was too good to be true... Seems like USD $2.26/GB with free shipping is about the lowest price I've been able to find at a reputable seller, and that is after a 5% discount if you spend $250 with them... ouch.
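The per-GB math above, spelled out (prices are the ones quoted at the time and will obviously drift):

```python
# Price-per-GB comparison using the figures quoted above.
def per_gb(price_usd: float, capacity_gb: int) -> float:
    """Price per gigabyte for a memory listing."""
    return price_usd / capacity_gb

new_8gb = per_gb(28.00, 8)   # brand new Samsung 8GB stick on Newegg
print(round(new_8gb, 2))     # 3.5

# At the best used rate found ($2.26/GB), a 16GB stick lands around:
print(round(2.26 * 16, 2))   # 36.16
```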
If you're using ECC DDR3 DIMMs, it's much cheaper.
Switching ISPs is a pain in the balls... at least I have a choice now. DSL vs cable internet... I've never had DSL nor have any experience with it. Anyone have any input? I'm hoping the junk equipment they give me can have the routing function disabled or has some sort of bridge mode, because I'm not giving up my PFsense router...
I believe they use ZyXEL modem/router combos. I have no idea what model they'll send on the 500Mb tier.
I've always paid cash for used phones. I can't justify spending $500-$1,000 on a stupid phone... I'd rather spend $1,000 on racing parts... or computer parts... or a years worth of beer
A little project I've been messing with the past few weeks (consequently the one I cut myself on)... a dual 2.5" hot swap bay that will take the regular Dell caddies.
I kept everything as simple and serviceable as possible. I'm just using some pass-through SATA connectors which can be replaced if broken or damaged.
Then I made a custom power cable using some CableCreation SATA power connectors and a Micro Mate-N-Lock connector. The wire is some Turnigy silicone wire I had left over from building a multi-rotor, which should be way overkill for the task... two SSDs shouldn't pull more than about 13 watts at any one time. My other concern was whether the motherboard would be okay supplying that. The drives do work, but I'll have to do some more testing to be sure.
I have a couple older drives I can sacrifice to stress testing. I figure if I burn up a Gen I motherboard and a couple very old laptop drives during a stress test then oh well.
If you're wanting to make your own power cable for the DVD/TBU_PWR connector on an R710 motherboard, you'll need a 3mm Micro Mate-N-Lock 4-position female connector (Part #794617-4) and the Micro Mate-N-Lock socket contacts for each wire (Part #794606-1). Looking at the GP700 cable offered by Dell, you'll be able to figure out where to place the red 5V wire, yellow 12V wire, and your black ground wires.
Another FreeNAS related problem and solution...
If you're running FreeNAS in an ESXI VM and your jails aren't pulling a DHCP address, make sure to enable "Promiscuous mode" inside your virtual switch setup.
If anyone is using iSCSI with FreeNAS and is having issues with drives randomly disconnecting or getting "no ping reply (NOP-Out) after 5 seconds" messages, try adding the following under System, then Tunables...
Variable = kern.cam.ctl.iscsi.ping_timeout
Value = 0
Type = sysctl
Haven't been able to track down a root cause, but it's been 8 days and still no issues.
Handling cut sheet metal is like dancing on the head of a snake. No matter how much care you take, you're going to get bit; you just don't know when. I guess the real modding doesn't begin until the bleeding begins...
EDIT: Before you say anything... yes I had gloves on