I think it's time that I did my second blog entry ...
Hello there! Thank you for coming to read this blog post. I understand that you were interested in seeing what the situation is. But, before you commit to this, I must warn you. What you are about to read is quite lengthy, and will be best taken in when you're seated, with some decent time to spare.
Firstly, let's discuss the use of Google Drive. I currently have three Google accounts, each with its own 15GB free plan, to manage differing sets of files. The primary one manages files for college, major projects (TXP-VOS, TXP-Cloud, etc.), small media files (YouTube channel assets), and current finances. The other two are responsible for managing OS installation images, game ROMs, larger media files, and other sensitive files that are too important to discuss publicly (and thus they shall remain a mystery to you, the reader). I also have a MEGA account, which is used to complement and reduce the load on my Google Drive accounts, should I need to make another file mirror or make large files more accessible when they are too big for the 15GB of Google Drive storage. But there is at least one issue with this setup - what happens if the Internet goes down?
Secondly, let's talk about the Internal Networking issue. I currently have two MacBooks (a MacBook4,1 and a MacBookPro4,1), a Windows laptop (HP ProBook 6475b), a Windows workstation (Precision T7500), and a decommissioned 2-in-1 laptop (tx1305us). The MacBookPro4,1 has thermal issues, needs a re-paste, and is running High Sierra on a Core 2 Duo. It acts as the only local mirror for the online services listed above, and its AirPort/Wi-Fi adapter doesn't even work in High Sierra. It was brought up to date using third-party means, and all the care in the world won't save it if Apple decides to eliminate support for machines with either Core 2 or DDR2 (which wouldn't surprise me).
It was originally meant to act as a more powerful alternative to my smaller MacBook, which runs El Capitan. Anything that couldn't run on the polycarbonate one could instead be loaded onto the more powerful, more spacious MacBook Pro - or so I thought. The better on-paper specs did not translate into noticeable, real-world performance deltas that I could reasonably take advantage of. And High Sierra is a RAM hog, unlike its predecessor El Capitan (which gets so unkindly called El Crapitan, even though it runs better than High Sierra ever will on low-end machines). But it's there as a necessary evil. Given the task the MacBook Pro is assigned, it's a liability for it to ever leave the house in most situations - which leaves it acting as a hot-box of a file server.
Thirdly, let's discuss project management and productivity. The black MacBook is used for programming/code management and video editing. Not because of performance (a Core 2 is not high performance by any means), but because of reliability and monetary concerns. The tools I've used on Windows so far have been very unreliable with the MP4 video that I export and upload: audio drifting out of sync with video, resolution dropping unexpectedly, and more. They also happened to be janky to use, and finding the functions/features I needed was a pain at times - which only serves to decrease my productivity. On top of that, anything that avoids those problems usually costs enough to justify buying genuinely better software instead, like Final Cut Pro and the like. As for code editing, the one time I did have a horrendous issue (a corrupted project file - forced to rebuild everything from scratch) was on Windows, on what should have been a more-than-capable Core i5. So, performance is not my primary concern in this arena; consistency, reliability, and cost are. As such, the heavy video editing and programming tasks I do tend to be saved for the MacBook(s). It also has the benefit of being portable, for when I have to go out in the middle of a work session.

The HP ProBook is a decent remote access terminal/console. It's used for general tasks when I'm outside the house that can't be accomplished (or done well) in either macOS or Android. It can also take an eGPU when I want to game away from home (and don't wish to remote into the T7500). It's a versatile remote console for many of my purposes, and has been the one device to survive through my high school years.
Next, let's go over Task/Service Balancing across devices. The Precision T7500 is the workstation/makeshift server that handles the worst tasks I do. Anything that is simply too much for the laptops gets tossed onto this machine, and it handles many tasks. In addition to acting as an optional DNS server, it handles Plex Media Server, backups for multiple devices, syncing content from Steam and other game stores, occasional F@H, SVP4 Pro, and HD livestreaming. I was supposed to get into hosting multiplayer game servers and movie nights (via my Plex server) later this year, but I'm delaying that until I have the resources to do it properly. As of now, the T7500 is the most important computer in my fleet, with near 24/7 uptime expected of it (same for the MacBookPro). A single minute of downtime spells trouble for most of the tasks I do, which is where part of the issue lies - it's a workstation trying to be a server. On top of that, only the Windows PCs are actually equipped with 1Gbps Ethernet. The MacBooks are stuck with 100Mbps, which is a pain for downloads and transfers of large files. I would happily go for a 10Gbps router/AP to replace my current router/extender, because that thing is 100Mbps - which is garbage. In fact, it's the only reason I've settled for a "Fast" Ethernet switch for the moment - the majority of my devices aren't even Gigabit-ready. So, I already have a network limitation on my hands.
And all of this leads to the discussion of Overall Ease of Management, Maintenance, and Servicing. Anyone who has seen my numerous status updates in the recent past has glimpsed what it's like to use and service (and possibly even set up) these machines while they're at normal operational capacity. The few hiccups and glitches I do reveal are primarily associated with machines that aren't the workstation (which is a good thing). And while the status updates make it look like it only takes a minute, it's actually quite the opposite. I manage inventory for spare parts and thermal paste. I watch regular performance counters for signs of dying components. I install updates and patches, along with BIOS updates. I even update individual applications if necessary (and each computer is rockin' more than 40 apps). Then I test for basic functionality right afterward, to make sure nothing broke as a result of any recent changes. This kinda leads to some disappearing weekends and holidays, which I grew to accept as time went on. Seeing the issue yet?
My entire fleet is a scattered mess. "Heterogeneous" is putting it lightly, since the repair and management effort required to keep things going is quite large, and the returns on it are small. At this moment, I could have up to six dissimilar devices that need to be plugged in at any given time, fulfilling a number of tasks that should probably be consolidated onto one more powerful, more efficient machine. When I go to take care of one, I get up and walk to wherever it is in the house, plug it in, and get to work. Then, when I get to a good stopping point (not necessarily done), I get up and go to the next machine on the list. While remote desktop access reduces this a little, in some cases it's still easier to work on a machine when it's right in front of you - especially when troubleshooting weirder issues that may involve faulty hardware. The management costs increase as these computers age, and typical software support cycles for Apple laptops mean that almost half of my current fleet could be obsolete at the next major version change. Triple-booting the workstation isn't an option, because that causes unnecessary downtime and can spell trouble if something goes wrong during an update to one of the OSes. The only reason I could even attempt to get away with it is that the workstation has multiple drives, but that introduces the issue of unavailability during power cycles, which run almost 5 minutes and up on this T7500 (yes - it's that slow to boot). And to top it all off, just try calculating the power requirements for all of them running at full power on a bad day...
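Actually, let's try that calculation. The wattages below are illustrative guesses for each machine's full-load draw, not measured figures, and the $0.15/kWh electricity rate is likewise an assumption:

```python
# Hypothetical full-load power draws in watts -- rough guesses for
# illustration, not measurements of the actual hardware.
fleet_watts = {
    "Precision T7500": 650,
    "MacBookPro4,1": 85,
    "MacBook4,1": 60,
    "HP ProBook 6475b": 65,
    "X5472 test rig": 400,
    "tx1305us": 65,
}

total_watts = sum(fleet_watts.values())

# Energy and cost if everything ran flat out for a full day,
# at an assumed $0.15 per kWh:
kwh_per_day = total_watts * 24 / 1000
cost_per_day = kwh_per_day * 0.15
print(f"{total_watts} W total, {kwh_per_day:.1f} kWh/day, ~${cost_per_day:.2f}/day")
```

Even with made-up numbers, a worst-case day lands north of a kilowatt of continuous draw - which is part of why consolidation looks so appealing.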
Back in 2018, I was experimenting with ESXi v6.5 on the T7500, which gave me a glimpse of a solution to everything. Using a single virtualisation server to consolidate all of these tasks would decrease maintenance time, cost, and effort - to the point where I could actually go back to having completely free weekends and holidays for the better part of the year. A decent Type 1 hypervisor could replace the MacBookPro4,1, the Xeon X5472 (pre-deployment testing) rig, and even my current workstation (T7500) with a group of convenient VMs sitting behind a KVM switch for ease of use - possibly with less power draw than all of my currently-running 24/7 machines combined. Many modern options that can run ESXi 6.7 also happen to come with 10Gbps NICs built in.
With that realisation a few months ago, I decided to start a new (and currently shelved) initiative - Project Personal Datacentre. And the worst part is, the internship that my parents and I were pushing so hard for stuck me on a wait list. So, wasted time - gone. I'm now left holding the grenade. Gotta make the most of it...