Kadah

Member
  • Posts

    33
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Location
    California
  • Occupation
    IT Manager

System

  • CPU
    Dual Xeon X5679
  • Motherboard
    HP Z800
  • RAM
    96GB
  • GPU
    GeForce GTX 970
  • Case
    HP Z800
  • Storage
    Too much to count
  • PSU
    1100W
  • Display(s)
    Many
  • Cooling
    Air, much air
  • Keyboard
    Corsair K70 with Cherry MX Brown switches
  • Mouse
    Original MS Wheel Mouse
  • Sound
    Asus Xonar DGX with modified drivers
  • Operating System
    Win7 + others

Recent Profile Visitors

724 profile views
  1. Mistakes were made:
     • Trying to do too many things in the same window, with no planning or strategy. It seems like there was mostly a general plan of action and they figured out the details as they went. That's fine if Monday isn't a thing.
     • Replacing core network hardware without configuring it beforehand, when there were no time constraints.
     • Not labeling everything, so lots of time gets spent looking for a cable or port.
     • Not just removing the door of the server rack. Usually those lift off easily for this exact reason.
     • Prop the server room door open and get that keyboard out of the way. That room is painfully small and it's like they tried to make it harder.
     • If servers are getting dusty, fix the air filtration.
     • Unless you're reorganizing a rack, remove servers from service and from the rack for maintenance and inventory one at a time. Don't complicate everything further by doing them all at once, plus lots of other stuff at the same time.
     • This kinda goes with the first point: do the different things in different maintenance windows. If you have tasks A through Z that could be independent of each other to varying extents, and you start A to Z all at once and get stuck on G and M for a very long time, you still have to finish the other 24 and put them back into service before Monday.
  2. OEM Microsoft licenses do not have transfer rights; the license is locked to the PC, and MS defines that as the motherboard for Windows 10. This is why OEM licenses are cheaper: they have this restriction. (If you upgrade from an OEM license of 7 or 8 to 10, that 10 license will also not have transfer rights.) Retail Windows licenses have transfer rights, though if you reactivate too many times by MS's determination and hyper-complex rules, they will stop you and require you to buy another license.
     Microsoft licensing is overly complex for business use, and this may be intentional. Linus's use is an edge case. My guess is that what he needs is a per-user volume license, BUT the reactivations on that get even more complex. The only correct answer here is "talk to an MS licensing partner, then blame them if you ever end up out of compliance during an audit after licensing the way you were instructed." At a previous job, they got audited by MS, were found out of compliance by around a dozen devices/users on their volume license, and were fined over $2 million. I don't know if MS still does those audits now, but making sure things remained in compliance after that was a massive headache. One gotcha about MS volume licensing is that it's more expensive than buying individual licenses, and it's more complicated. Unless you have thousands of workstations, it will not be worth the hassle, so just buy your machines from your OEM with Windows pre-licensed.
     OSX is licensed to the device, not the user. If you want to virtualize OSX, it's allowed by the OSX terms, but you have to do it on Apple hardware, which is a huge pain in the ass for software development and testing: Apple killed the Xserve forever ago, the Mac Pro hasn't been viable for years, the Mac Mini might be dead, and new MacBook Pros are a terrible choice for actual computational workloads like a build farm.
  3. "Lessons relearned from this that are things people generally say do to but don't till something like this happens" Always have a scheduled or continuously running backup. At least a weekly backup if a backup cycle takes longer off business time. Have regular offline/cold backups, preferably somewhere not in the same building. (A fire will quickly teach you that one) Raid across raid cards, let alone as a stripe.... not a good idea. Trying to do enterprise level storage on way less than enterprise storage prices will often lead to this level of headache or complete disasters. Scaling in ways that seem like cheap shortcuts to higher performance and/or greater capacity using cheaper consumer drivers on low end enterprise hardware only seems like a good idea till 11am when it completely implodes due to a single failure somewhere. Enterprise storage: Fast, large, not over $100k; you can only pick two. My super simple guide on how to set your budget for a raid and backup solutions. Raid/redundancy budget is equal to how much downtime is worth. Estimate should up to how much money you'd loose in the downtime it would be to get replacement hardware and restore from backup plus the cost of lost work since a previous backup (eg. up to a week if doing weekly backups) Backup budget is equal to how much your data would cost to recreate what is needed. This is usually a very large number and can vary wildly depending on the sort of business. Chances are that unless you are constantly creating new data at 100% throughput from 9-5, incremental daily backups that finish before 9am the next day, and full weekly backups over the weekend, should be entirely possible nowadays for not insane costs. Personally, I do like the way LSI handles raid configs. I've had far too many issues with "import foreign config" in the past. One LSI card = 1 to 2 simple arrays with no additional layers. For anything large scale that must have hardware riad, I'm gonna use an HP SmartArray, till a single adapter but use expanders and RAID 60. Though in recent years hardware raid is becoming supplanted by HBAs and software defined raid (usually ZFS) for large data stores spanning more than a few drives, and depending on drive mirroring and parity for uptime resilience both getting supplemented by mirror nodes.
  4. One issue with the Pi Zeros is trying to buy more than one of them from an official reseller. If you get them through Adafruit, Sparkfun, or a local Microcenter, they are only going to allow you to purchase one at the $10 price point. The random Amazon sellers tend to list the Zeros for stupidly high prices. Adafruit and Sparkfun will only allow one ever, i.e. you can't just order another later. I've actually had Sparkfun apologize for having to modify my order to remove a Zero W I had in with some other stuff so they could comply with the RPi reseller agreement (though they still gave the order free shipping even though it was now under that dollar amount). Microcenter will sell you more than one, but you'll be paying $15ea for 2-5 and $20ea for 6+. This is a good way to get more than one, but Microcenter locations are very hit and miss. Not sure if they enforce the pricing if you buy more later on; I'm testing that now. Though if you have a local Microcenter, check their online store: they seem to have the Zero W on sale right now for $5. Microcenter has this habit of randomly selling some items near or below cost.
  5. I'm finding that 120GB SSDs are starting to be too small for Win7 office workstations where the only local user data is email. 58GB for a boot drive seems like unlikely usage for sure, but for a small database or cache it seems like an option. I've got to support a 2016-versioned app that uses a database type last updated in 1995 (Paradox). The DB is only a few GBs, and the only way to speed it up is throwing even more disk IOPS and faster-clocked CPUs at it. A 58GB Optane could be suitable for that.
  6. It will be purely marketing. Most likely it's using a stock, but higher end, Bluetooth chipset, or it could be using the wireless system used on high-end wireless mice (no idea what those use, likely also something BT-based). Getting some custom radio solution certified would be far too expensive and time consuming for a first-off product in a market segment of unknown size.
  7. Processing time on both ends adds delay; signals have to be encoded and decoded. (There's a quick illustration of the math at the end of this list.)
  8. I agree. I'd seriously consider it if it was MX Browns (or at least an equivalent). LED backlighting would be really nice too and I would trade battery life for that. Doesn't have to be anything more than bright enough to see the keys in the dark. Sadly I'm likely back in the market for a new keyboard that isn't made by Corsair. Corsair's customer service is non-existent.
  9. My '90s Logitech wireless keyboard would last at least 2.5 years on a set of regular AA batteries with lots of daily use. Does this one have n-key rollover? That was one of the reasons the few wireless mechanical keyboards made before now were market failures.
  10. Power rating varying depending on input voltage is normal on server PSUs. The same amps at 220/240V is double the power vs 110V. (Quick math sketch at the end of this list.)
  11. I've gotten this too a few times, but I don't remember at all what I did to fix it; it's some missing config, I think. Googling should turn up something helpful; that's what I did each time, since I kept forgetting between setups. Also check it with http://mxtoolbox.com/; that might help narrow down the problem.
  12. Those are nice, I've got a couple of them. Look for X5650 or X5660 Xeons; they are plentiful and not expensive. I run a lot of these, and they're better than the X5500 series.
  13. I'm going to do the same because I found this out the bad, expensive way. Despite what MS says about R2, AD cannot coexist with any other major roles, pretty much other than file services. And it is recommended to have the hypervisor host running only the Hyper-V role, which means more resource waste vs ESXi/vSphere. You can run two VM instances of Server 2012 R2 with a single license on a single host regardless of the hypervisor. I've been looking at standalone free ESXi vs Hyper-V for a single host with two 2012 R2 VMs and 1-2 Linux VMs. I could not find any upsides to going with Hyper-V for this; it was either going to be the same or worse with Hyper-V. The i3-4170 supports VT-x. 16GB would be good.
  14. I used to do it all the time, including for production business use, because there was no budget and such at the time. It was really dumb and caused numerous issues later on, but it mostly worked for 6+ years. Now I can get budgets (after 6 months of requesting...) and have an entire rack for vSphere. Whitebox ESXi either works or it doesn't; you pretty much just have to try it and see. You can try checking 3rd-party HCL lists for whiteboxing and for any driver packs that might help, but the one I used to use hasn't been updated since 4.1. No idea what OP's actual intended load is yet, so no idea what would work. You can get away with a lot on small deploys.
  15. The main issue is that ESXi likely does not support that mobo and won't be able to make a datastore. Whiteboxing ESXi is hit and miss, mostly miss.
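
A minimal sketch of the RAID/backup budget guide from post 3 above, in Python. Every figure used here (restore time, hourly downtime cost, data-recreation cost) is a hypothetical placeholder, not a recommendation:

    # Rough sketch of the budget rules of thumb from post 3.
    # All numbers below are made-up placeholders.

    def raid_budget(restore_hours, downtime_cost_per_hour,
                    lost_work_hours, work_cost_per_hour):
        """RAID/redundancy budget ~= what the downtime it prevents is worth:
        the time to get replacement hardware and restore from backup, plus
        the work lost since the previous backup (up to a week on weeklies)."""
        return (restore_hours * downtime_cost_per_hour
                + lost_work_hours * work_cost_per_hour)

    def backup_budget(data_recreation_cost):
        """Backup budget ~= what it would cost to recreate the data you need."""
        return data_recreation_cost

    # Example: 72h to replace hardware and restore, $500/h of downtime,
    # up to 40 working hours of lost work at $200/h.
    print(raid_budget(72, 500, 40, 200))   # 44000
    print(backup_budget(250_000))          # 250000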
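For post 7, a tiny illustration of why processing on both ends adds delay: the end-to-end latency is just the sum of the encode, transmit, and decode steps. The millisecond values are invented purely for illustration:

    # End-to-end delay = encode + transmit + decode (plus any buffering).
    # The figures below are made up for illustration only.
    encode_ms = 2.0    # packetizing/encoding on the sending end
    transmit_ms = 1.0  # time on the wire / over the air
    decode_ms = 2.0    # decoding on the receiving end
    print(f"total latency: {encode_ms + transmit_ms + decode_ms} ms")  # 5.0 ms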
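And for post 10, the quick math behind dual power ratings on server PSUs: watts = volts x amps, so the same current limit at 220/240V input works out to roughly double the wattage of 110V:

    # P = V * I: same amperage limit, higher input voltage, higher rated wattage.
    amps = 10.0
    for volts in (110, 220, 240):
        print(f"{volts}V x {amps:.0f}A = {volts * amps:.0f}W")
    # 110V x 10A = 1100W
    # 220V x 10A = 2200W
    # 240V x 10A = 2400W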