Status Updates posted by Razor Blade
2/1/2016 - 2/26/2019
Rest in peace Samsung Note 4... Not even Odin was able to save you
@Imbellis Not able to really tell... may have been hardware failure or could have been an app that hosed something. I bought the phone used so no idea what the history was before I got it. Not to worry though, I transferred to my trusty S4 last night until I either get it repaired or replaced.
When people ask if PC is still relevant for gaming...Spoiler
Won auction for rack mount layer 3 switch on eBay for $33
10Gb module for said switch... $399.95 + shipping
Not bad for 10+ year old SAS drives in a RAID 10?
After fumbling around with ESXi 6.5 trying to pass through my old GTX 950, I finally got it to work. If you're having a similar issue, hopefully this can help you.
First off, a disclaimer... This isn't a typical "fix" I'd normally post, and I don't know whether all of it is necessary, so make sure you check on your specific GPU to see if there's a proper way to enable passthrough first. The following is more of a quick and dirty bypass for two things: ESXi disabling the checkbox in the hardware menu, and NVIDIA's driver disabling the card when it detects it's running in a VM. There may be a good reason your GPU is prohibited from being passed through other than NVIDIA wanting you to buy their enterprise cards instead... so... yeah... don't mess around with a computer that has all your data and stuff on it until you know this won't adversely affect function or reliability.
First issue you may run into with PCI passthrough: if your NVIDIA GPU is grayed out and you are unable to toggle passthrough... click this spoiler...Spoiler
If the GPU is grayed out under Host, Manage, Hardware, there is a cheat that lets you enable the checkbox using the page "Inspect" feature in Chrome or other browsers.
This enables the checkbox. Now check both the GPU and the audio device checkboxes, click the "Toggle passthrough" button at the top, then restart your host. Both devices you checked should now say "Enabled/Active", with the GPU grayed out and its checkbox disabled again.
Either way, passthrough is now enabled (or able to be enabled). You should be able to assign the device to your VM by adding a "PCI device" and selecting the GPU.
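For what it's worth, a commonly reported alternative to the browser trick is overriding ESXi's passthrough rules in /etc/vmware/passthru.map from the host shell. I haven't verified every ID here myself: 10de is NVIDIA's vendor ID, 1402 should be the GTX 950, but check yours with `esxcli hardware pci list` before trusting it. This sketch writes to a temp stand-in file so nothing real gets touched:

```shell
# Sketch: append an override entry so ESXi treats the GPU as passthrough-capable.
# Entry format: vendor-id  device-id  reset-method  fptShareable
# On a real host you'd edit /etc/vmware/passthru.map and reboot; here we use a
# temp file purely for illustration.
MAP="$(mktemp)"   # stand-in for /etc/vmware/passthru.map
echo "10de  1402  d3d0  false" >> "$MAP"
cat "$MAP"
```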
Now if you are running Windows with NVIDIA drivers and receive Code 43...click this spoiler...Spoiler
To fix Code 43 in Windows, try adding the following parameter to the VM in ESXi...
hypervisor.cpuid.v0 = "FALSE"
Shut down your VM, then add the parameter under "Actions", "Edit Settings", "VM Options", "Advanced", "Configuration Parameters", "Edit Configuration..."
Then click "Add Parameter" and type it in so it looks like this....
After you add the parameter and click "OK" then "Save" you should be able to boot your VM and your GPU should work as normal.
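If you'd rather skip the UI, the same parameter can be appended to the VM's .vmx file from the ESXi shell (VM powered off first). The datastore path and VM name below are made up; this sketch writes to a temp file so it's safe to try anywhere:

```shell
# Append the Code 43 workaround to a .vmx file. Using a temp file here for
# illustration -- on a real host this would be something like
# /vmfs/volumes/datastore1/WinVM/WinVM.vmx (path is an example, not mine).
VMX="$(mktemp)"
echo 'hypervisor.cpuid.v0 = "FALSE"' >> "$VMX"
grep 'hypervisor.cpuid.v0' "$VMX"
```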
I put it off and put it off... Finally I watched The Verge's PC build (a re-upload since the original is gone).....
Finally broke down and bought an Apple today...Spoiler
Actually I bought a bunch... Honeycrisp apples are freaking delicious...
and no this joke never gets old.
Over the past few weeks I've been planning a mod for my R710 that includes adding another 2.5" drive, but I wanted to make a cable instead of splicing into the OEM cable. There were a few reasons for this.
- The OEM optical drive cable has relatively thin gauge wire
- The OEM cable has a SATA cable molded into it
- The OEM cable only supports slim SATA connections (which would require an adapter)
So I set out to source some parts. Turns out TE Connectivity Micro Mate-N-Lock part #794617-4 connector and part #794610-1 socket pins are just the ticket to connect to the DVD/TBU_PWR port on the R710 motherboard.
I already had some 8 pin Micro Mate-N-Lock connectors laying around so I just snipped one of those connectors down... The above part number is for the appropriate 4 pin.
I also had a crimp style SATA cable from a modular PSU laying around. It was never used so the wires and connections are still in perfect condition.
I removed the 3.3V wire and crimped on the socket pins. I referenced the GP700 cable images found online to get the appropriate pinout. Going by the photos and how the OEM DVD cable is laid out, 5V positive and negative are at the top (closest to the tab) 12V positive and negative are on bottom.
Here is the completed cable
I haven't been able to find much of a source on the power draw of the Dell TBU, and I'm not sure what the DVD/TBU_PWR port is rated for. For testing, I went the "screw it" route and used two 3.5" mechanical drives. The server couldn't have cared less... no errors, and nothing even got warm. The drives showed up in the BIOS and hummed along happily for the few hours I left them running.
I'm going to be running a pair of SSDs, so the power draw should be less than 3W per SSD. Even if the SSDs only use the 5V rail (which they probably will), that's still a bit less than the combined 5V rating on the mechanical drives' decals.
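The back-of-napkin math, using my assumed worst-case ~3W per SSD figure:

```shell
# Two SSDs at ~3W worst case, assumed to land entirely on the 5V rail
SSD_W=3
COUNT=2
TOTAL_W=$((SSD_W * COUNT))
TOTAL_MA=$((TOTAL_W * 1000 / 5))   # I = P / V, in milliamps
echo "${TOTAL_W}W total, ${TOTAL_MA}mA on the 5V rail"
```

1.2A on the 5V rail is comfortably under what a single mechanical drive's decal claims, which is why I'm not worried.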
So hopefully this was informative to anyone that may want to make a cable for themselves.
I was wondering why a smoke detector ended up on my front porch with a note.... I'm not going to ask HOW a spider crawled into my smoke detector thereby setting it off randomly... all I know is that despite my best efforts to remove the spider threat...due to circumstances beyond my control, said smoke detector is no longer allowed in the house...ever
What did we learn today?
Sometimes being vague about the root cause of a problem might be a much better solution...on the bright side, I have a smoke detector to put in the shed.
Because just making fun of the Verge's build is too easy...
Lately I've been planning on upgrading the memory in my 11th gen Dell. Turns out it isn't really that simple. Seems like there are more considerations to take into account if you're going to be shelling out the money...otherwise sadness can follow. If you're looking into memory upgrades for your 11th gen Dell server maybe this information can help you...
First off for the 610 and 710 servers, here is a PDF to help you pick out a memory configuration.
(Second link is archived version on forum just in case the PDF disappears from Dell's site)
Archived link: server-pedge-installing-upgrading-memory-11g.pdf
The dual vs quad rank memory TL;DR:Spoiler
Understand that if you go with quad-rank memory you can only populate 2 slots per channel, since the limit on these servers is 8 ranks per channel.
16GB PC3L 4Rx4 10600R
- only 2 sticks per channel for a total of 8 ranks (populating the 3rd slot in a channel will likely result in a memory configuration error on boot, or even a no-boot situation)
16GB PC3L 2Rx4 10600R
- all 3 slots per channel can be populated, for a total of 6 ranks
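The rank math above can be sanity-checked with a quick sketch. The 8-ranks-per-channel limit comes from the Dell guide; the helper function itself is just mine:

```shell
# args: ranks per DIMM, DIMMs populated per channel
# Checks the total against the 8-ranks-per-channel limit on 11th gen Dells.
ranks_ok() {
  total=$(( $1 * $2 ))
  if [ "$total" -le 8 ]; then
    echo "OK ($total ranks)"
  else
    echo "exceeds limit ($total ranks)"
  fi
}
ranks_ok 4 2   # quad-rank, 2 DIMMs -> OK (8 ranks)
ranks_ok 4 3   # quad-rank, 3 DIMMs -> exceeds limit (12 ranks)
ranks_ok 2 3   # dual-rank, 3 DIMMs -> OK (6 ranks)
```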
Memory speed: (source below: section 7 "memory" subsection 3 "speed")Spoiler
For the 11th gen Dell servers that support triple-channel memory, each additional DIMM you populate per channel drops the memory speed. For example, one 10600 DIMM per channel yields the full 1333MHz, two per channel yields 1066MHz, and populating all 3 slots per channel drops the speed to 800MHz.
You may think quad-rank memory would run at a higher clock speed, right? Nope... turns out it runs at 1066MHz even with only one DIMM per channel (see source below).
Now, is it faster though? I'm not sure. The consensus seems to be that more ranks are used for higher-capacity modules. If that's true it wouldn't matter here, since even though the 610 and 710 servers can run quad-rank memory, they can't take advantage of DIMMs larger than 16GB anyway. Quad-rank modules *are* cheaper than dual-rank modules (going by listings I've found on eBay), so that could be a consideration... but even if they're cheaper, for the price you could get more density at the same speed using 8GB modules in a memory-optimized configuration, and take advantage of the triple-DIMM configuration as well as the higher overall RAM capacity (96GB vs 64GB). The only upsides I can think of: you could save a bit of power if you're going memory-optimized and don't need 96GB, or maybe you already have quad-rank DIMMs on hand.
A note on UDIMMs: (source below: section 7 "memory" subsection 1 "overview")Spoiler
The 11th gen servers only support UDIMMs up to 2GB per module and 12 modules total, so the max is 24GB of UDIMM memory.
Power savings: (source below: section 7 "memory" subsection 5 "Low Voltage DIMMs")Spoiler
If you have a 5600-series CPU, it appears you can use "low voltage" RAM that operates at 1.35V instead of 1.5V, which could yield a power savings... though in a home lab where you may only be running one or two servers, the savings might be negligible (source below: "Low voltage DIMM power savings"). But I guess it's something if you're upgrading anyway. Either way, it's nice to know the support for low-voltage DIMMs is there.
The takeaway: if you want to upgrade your RAM capacity, figure out what you need and hit it with the highest-density modules in the fewest DIMMs per channel per CPU for best performance.
For example, if you want 48GB of RAM, you could populate 2 slots on all 3 channels across both CPUs with 4GB sticks of 2Rx4 10600R, but your server will run the memory at 1066MHz. Populate one 8GB stick per channel instead and you'll be able to take advantage of the full 1333MHz speed supported by the 5600-series Xeons.
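One way the 48GB example can pencil out (the helper is purely illustrative):

```shell
# args: GB per DIMM, DIMMs per channel, channels per CPU, CPUs
total_gb() { echo $(( $1 * $2 * $3 * $4 )); }
total_gb 4 2 3 2   # 48 -- 4GB sticks, 2 per channel: memory runs at 1066MHz
total_gb 8 1 3 2   # 48 -- 8GB sticks, 1 per channel: memory runs at 1333MHz
```

Same capacity either way; the denser sticks just leave the channels less loaded.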
Is it important? Does it matter? Probably not... but since these poor dinosaurs of a bygone age of Westmere goodness are already at a disadvantage, why not squeeze the max performance out while they're still worth plugging into the wall?
If you already have the DIMMs to reach your target capacity, it might not be worth spending a bunch of cash just to get the higher clock speed... or it could be, I guess it depends on your use case... but if you're ready to upgrade your RAM anyway, it's probably worth considering upgrading density instead of quantity.
(Second link is archived version on forum just in case the PDF disappears from Dell's site)
Archived link: server-poweredge-r710-tech-guidebook.pdf
Low voltage DIMM power savings:
Dataram's article on 1.35V vs 1.5V DIMMs: http://www.dataram.com/blog/?p=102
Tom's Hardware's testing of HyperX LoVo: https://www.tomshardware.com/reviews/lovo-ddr3-power,2650.html
Might edit this in the future if I have more info to add or info to correct....
Why is RAM still so expensive? Just trying to find DDR3 10600R ECC RAM people want around USD $4/GB+shipping for USED on eBay! You can still get brand new Samsung 8GB sticks on Newegg for $28! ($3.5/GB) https://www.newegg.com/Product/Product.aspx?Item=9SIAB0Z5UE1934
I finally found a few 16GB sticks for just over USD $2/GB on eBay but the seller flaked out saying there was an inventory error... I figured it was too good to be true... Seems like USD $2.26/GB with free shipping is about the lowest price I've been able to find at a reputable seller. That is after a 5% discount if you spend $250 with them...ouch.
If you're using ECC DDR3 DIMMs, it's much cheaper.
Switching ISPs is a pain in the balls... at least I have a choice now. DSL vs Cable internet... Never had DSL nor have any experience with it. Anyone have any input? I'm hoping the junk equipment they give me can have the routing function disabled or some sort of bridge mode because I'm not giving up my PFsense router...
I believe they use ZyXEL modem/router combos. I have no idea what model they'll send on the 500Mb tier.
I've always paid cash for used phones. I can't justify spending $500-$1,000 on a stupid phone... I'd rather spend $1,000 on racing parts... or computer parts... or a year's worth of beer
A little project I've been messing with the past few weeks (consequently the one I cut myself on)... a dual 2.5" hot-swap bay that takes the regular Dell caddies.
I kept everything as simple and serviceable as possible. I'm just using some pass-through SATA connectors, which can be replaced if broken or damaged.
Then I made a custom power cable using some CableCreation SATA power connectors and a micro Mate-N-Lock connector. The wire is some Turnigy silicone wire left over from building a multi-rotor, which should be way overkill for the task... two SSDs shouldn't pull more than about 13 watts at any one time. My other concern was whether the motherboard would be okay supplying that. The drives do work, but I'll have to do some more testing to be sure.
I have a couple older drives I can sacrifice to stress testing. I figure if I burn up a Gen I motherboard and a couple very old laptop drives during a stress test then oh well.
If you want to make your own power cable for the DVD/TBU_PWR connector on an R710 motherboard, you'll need a 3mm micro Mate-N-Lock 4-position female connector (part #794617-4) and the micro Mate-N-Lock socket contacts for each wire (part #794606-1). Looking at the GP700 cable offered by Dell, you'll be able to figure out where to place the red 5V wire, yellow 12V wire, and your black ground wires.Spoiler
Another FreeNAS related problem and solution...
If you're running FreeNAS in an ESXi VM and your jails aren't pulling a DHCP address, make sure to enable "Promiscuous mode" in your virtual switch settings.
If anyone is using iSCSI with FreeNAS and is having issues with drives randomly disconnecting, or is getting "no ping reply (NOP-Out) after 5 seconds" messages, try adding the following under System, then Tunables...
Variable = kern.cam.ctl.iscsi.ping_timeout
Value = 0
Type = sysctl
Haven't been able to track down a root cause but it's been 8 days and still no issues.
Handling cut sheet metal is like dancing on the head of a snake. No matter how much care you take, you're going to get bit; you just don't know when. I guess the real modding doesn't begin until the bleeding begins...
EDIT: Before you say anything... yes I had gloves on
Me: Nah I'm good
P.S. it is enabled by default...
Not sure it will. As far as I know Spybot antibeacon is more aimed at killing the telemetry and this setting is part of Windows Defender.
Thing is, it doesn't really say what files it picks or when. It just says "send sample files". Does that mean it picks random files from your computer and sends them? Or does it send files it finds infected? It doesn't say. There is absolutely nothing in their privacy statement about this "feature" either, which means it just falls under the general privacy statement that includes these...Quote
How we use personal data
- Personalize our products and make recommendations.
- Advertise and market to you, which includes sending promotional communications, targeting advertising, and presenting you with relevant offers.
I know why they call it "Windows". Because they can see right into everything you're doing. Meanwhile, I'm trying my hardest to put up some damn blinds.
Have spare drives, messing around. It's a race to see which VM will get all the updates installed first...
Seems the winner is clear... though to be fair, the RAID array does have 4TB and redundancy whereas the SSD only has 120GB. I'm actually pretty surprised... I didn't expect the low-end SSD that came in an old eBay laptop to do so well.
I understand the need for companies to collect relevant data to market products to consumers. But it seems like you can't even take a dump without companies wanting to know about it.
I don't normally post crap like this...but seems like a heck of a deal for people looking for a 1U server that can take 3.5" drives... I already have a Dell R410 but found this while browsing eBay. This is not my listing BTW.
If you want to add another CPU you'll need another heatsink; I've seen those on eBay for around $10...
@Psittac Looks like the seller will ship or allow local pickup; not sure if they'll ship outside the US though...
To add to the previous post @dizmo: the thing is, it's a great deal, but the R410 is pretty limited due to its thin form factor. There is only one PCIe x16 slot, plus one slot for the modular HBA or RAID cards that came with some of them. It does have 4 or 5 onboard SATA ports too, so I was toying with the idea of picking one up and stuffing it in an ATX full tower to make a low-budget NAS.