
jde3

Member
  • Posts

    944
  • Joined

  • Last visited

Everything posted by jde3

  1. You need to also keep in mind here that desktop Linux does not matter. At all. Nobody cares.. MacOS is Unix on the desktop. Get used to it. I know we all have our fav distros and stuff, but in the real world desktop Linux doesn't really matter. The only thing that matters is the server, and that will be *and is now* RHEL.. a commercial distro from IBM for all intents and purposes. RHEL and RHEL clones prob have 70% dominance in the market. - It's dangerous and a bad thing. I don't think we will ever go back to a major server distro being maintained by a volunteer community.. but Debian has the best chance of that, as it has some business acceptance. FreeBSD is also volunteer-run and it likewise has a lot of random deployments in enterprise. You have to understand the people that make these decisions are upper management, and they don't know a bash shell from a ham sandwich. They just run what the Dell rep suggests to them, or they saw some ad in a magazine when they were on a flight. Technical people are NOT in control of this question.. and if they were, we all prob wouldn't even be using Linux.. we'd be using Illumos.
  2. As a sysadmin who has worked on Linux/Unix systems for decades I use.. FreeBSD on the server: It's remarkably simple, it's fast, it's highly flexible and it's got a lot of features Linux doesn't have or doesn't have well implemented. (ZFS, Jails, Ports, PF, Boot Environments and DTrace) FreeBSD makes a very nice production environment. Gentoo is the closest thing to FreeBSD in the Linux world, but FreeBSD ships stable binaries as well as ports, and its build system is the same as its package system.. They also do backports and it's backwards API compatible. So an install has a very long shelf life, as it's easily upgradable, and even FreeBSD 4 can be jailed on FreeBSD 13. Yes, it is Unix and not Linux, and that means sometimes technicians need more training time, but it is a far easier production world to live in and takes less system engineering time to maintain. Simplicity means solving problems faster and getting more work done. MacOS on the desktop: It *is* a Unix distro so this counts. Since I work on systems all day, I get very tired of working on them at home too. I want the least amount of headache on the desktop I can get, and I've found that is MacOS. Being commercial Unix, there is a cost to it, but I feel it's worth it. If I had to use Linux: I'd use Alpine and Gentoo due to simplicity and flexibility. (or Ubuntu if I just don't care and want something to work now) My least favorite distro to work on is RHEL, it's just needlessly complicated and stupid. I don't have to tell anyone here.. major version RHEL upgrades are absolute hell because they change so much of the system version to version and backport nothing. It has a few nice high-level tools, but that does not outweigh its complexity.
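A concrete example of why boot environments make upgrades low-stress: you checkpoint the running system with bectl before upgrading, and if anything breaks you activate the old environment and reboot into it. A minimal sketch (the release number and BE name here are just examples):

```shell
# Checkpoint the running system into a new boot environment
bectl create pre-13.2-upgrade
bectl list

# Upgrade in place
freebsd-update -r 13.2-RELEASE upgrade
freebsd-update install

# If the upgrade goes sideways, roll back and reboot
bectl activate pre-13.2-upgrade
shutdown -r now
```

Because boot environments are ZFS clones, the checkpoint is instant and costs almost no space.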
  3. I think I read somewhere that only 8% of Linux code is actually made by individuals anymore and most of it is made by IBM.. through their shell company RedHat. Let me be very clear here... You _do not_ want Big Blue in charge of your OS. I've seen that world before and it wasn't good. Long live Ubuntu.
  4. Traitor. lol No, you don't need to use MS because it's "easy". This is how you end up with Azure AD and everything else MS sells, growing your IT budget to stupid levels. Zimbra can do email for 50 people off a VPS for less than $5 a month.
  5. No, that was AT&T Bell Labs. - Yes, Linux was a response to that and.. It's a long story but you can get a first-hand account here. Oracle though is prob the most sue-happy company on earth. They are also the only company that I know of that "un-open-sourced" an operating system (Solaris).
  6. Same but replace PLEX with Jellyfin or Emby. Plex has privacy issues.
  7. Well, you'll need a static IP. Oracle has a terrible reputation in the industry. (for good reason) - I'd still recommend shopping around.. but, if the price is right from Oracle.. : shrug : Just watch for loopholes and vendor lock-in.
  8. One is an emulated GPU driver that passes commands to the real hardware. The other, passthrough, is isolating the hardware from the host so the guest's hardware drivers can use it directly. I don't think so. If you really want MacOS, use a Mac.. at the very least you'll want AMD graphics. MacOS VMs will be dead in a few years when Apple drops support for x86.
  9. Like I mentioned before, any steam game is going to use the Ubuntu chroot shipped in steam. So 90% of all distros would be pretty similar unless you run steam with native libs.. at that point there is a lot more to tweak. You could try the "less is more" option (gain performance by having your computer do less stuff) like Alpine. Or you could take the compile time optimized "tight code" path of Gentoo or Clear Linux. If you just don't want to worry about it and want as little hassle as possible.. use Ubuntu as that is what Steam uses.
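For the curious, switching Steam between its bundled runtime and native libraries is a single environment variable. This is just a sketch of the commonly used STEAM_RUNTIME toggle; it's unsupported, and games may fail to launch without the bundled libs:

```shell
# Default launch: Steam uses its bundled Ubuntu-based runtime
steam

# Experiment: prefer the native system libraries instead (expect breakage)
STEAM_RUNTIME=0 steam
```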
  10. Hey Linux Northwest, I used to go to that. Fedora is a horrible distro. It's bleeding-edge RHEL code, and all of IBM/RedHat's garbage ideas and expanding complexity (that is intentional, to drive the RHEL support market) land there first.. I recommend avoiding it at all costs. If you really want to go fast.. Gentoo can be compiled for link-time optimizations. (similar to Intel's Clear Linux) https://github.com/InBetweenNames/gentooLTO It works and it's fast, but not for the faint of heart. You will have problems, this will break programs.. You also have to build all the Steam libraries to use statically built system libraries and not the Ubuntu chroot Steam ships with. Phoronix (I know..) has some results on this. https://www.phoronix.com/scan.php?page=news_item&px=MTI5ODE It might also be possible to ditch the GNU bloat and use the Musl libc (like Alpine Linux does), but I haven't tried it yet. Both of those vectors tho would be as balls-to-the-wall fast as you can go.
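For reference, the core of the LTO approach is just a handful of compiler flags in make.conf; the gentooLTO overlay layers per-package workarounds on top of this. An illustrative fragment (the exact flags and -j value are examples, not a tested config):

```shell
# /etc/portage/make.conf (fragment) -- illustrative only
COMMON_FLAGS="-O2 -march=native -pipe -flto"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
LDFLAGS="${LDFLAGS} -flto"
MAKEOPTS="-j8"
```

The overlay exists precisely because a bare `-flto` like this breaks a long tail of packages that need per-package exceptions.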
  11. Linux at one time was a clone of Unix, however in the past 10 years it's been drifting further away from that. Hopefully they will get back on track at some point and remember.. like Apple has, that being true to Unix's methods is a good thing and it offers easier interoperability across Unix systems. I don't hold a lot of hope for them though. Example: ifconfig has existed since 1983 and was released in 4.2 BSD -- Yet Linux (actually it's the GNU) has abandoned this for ip (a command name that makes no sense, because not all networks are IP-based) -- Every other flavor of Unix that wanted to add changes or extensions to ifconfig has fixed ifconfig (such as FreeBSD and MacOS, which have wifi in ifconfig), whereas Linux replaced it. It's not a good sign.. There are dozens of examples like this. It's a sad day when Apple, the maker of cell phones, has a more Unix-like OS than Linux.
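To make the contrast concrete, here's the same basic task on both families. Interface names are examples; on FreeBSD, wifi scanning really is folded into ifconfig:

```shell
# BSD / MacOS: ifconfig was extended, not replaced
ifconfig em0 inet 192.0.2.10 netmask 255.255.255.0
ifconfig wlan0 scan                 # wifi scan via ifconfig (FreeBSD)

# Linux: iproute2's ip command replaces it
ip addr add 192.0.2.10/24 dev eth0
ip link set eth0 up
```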
  12. This is OT some, but how do you get a degree in systems engineering without learning Linux? - That feels strange to me, and understand it's not a critique of you but of your school.. I'd set this as a #1 priority for you in self-learning.
  13. In the old days a ZFS pool might not be importable read-write on another OS because of differing feature flags on the filesystem. Since the source code reorganization in OpenZFS 2.0, all OSes that support ZFS use the same source. So.. FreeBSD, Linux, Illumos (formerly Solaris), MacOS and Windows with modern ZFS installed should be able to import any pool. This is super nice, and I think the only other filesystem supported by so many OSes is FAT32. It makes it actually plausible to use ZFS on USB disks.
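Moving a pool between OSes is just an export on one side and an import on the other. A sketch, assuming a pool named usbpool on a USB disk:

```shell
# On the machine that last used the disk
zpool export usbpool

# On any other OS running OpenZFS 2.x
zpool import            # scans attached disks and lists importable pools
zpool import usbpool
zpool status usbpool
```

If the pool wasn't cleanly exported, `zpool import -f` can force it, at the cost of the usual caveats.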
  14. Do they call it QEMU? hmm.. (QEMU is an emulator and alone would take about 30 minutes to boot Windows 10.) About the world of hypervisors: Almost all hypervisors are built around Intel / AMD's virtualization engines. These are the components that are doing a lot of the work. There are the commercial hypervisors, VMware, VirtualBox, Hyper-V, and there is also Xen still out there. They are very full-featured, but I'm not going to talk a lot about these as you can find information out there. Linux's is ordinarily called KVM and it is a combination of things: QEMU, libvirt and the kernel bits, KVM. QEMU runs 16-bit boot code and other emulated machine features; it is a full emulator and can emulate x86 on ARM and the like, but for KVM it serves a minimal role, as emulation is slow.. And libvirt is the management tooling for configuring the VM, its virtual hardware, etc.. Everything Linux uses KVM. It uses several virtual disk container formats that are mostly loopback files on top of other filesystems. You can think of it like a bunch of different things that build up to a full-featured hypervisor.. though it has limited frontends by itself. Maybe Apache Tomcat would be a good analogue. Bhyve is FreeBSD's native hypervisor, and it can use libvirt, at least there is some support for bhyve in libvirt. It has no emulation layer, making it a minimal or lightweight hypervisor, hence its requirement for only booting EFI OSes and its limited support for emulated (slow) hardware.. It's why you sometimes have a more complicated guest install.. because it only supports doing things the fastest way, damn the torpedoes.. The reason for this is it serves a different role, where a simplified hypervisor is desired. "Just the hypervisor, ma'am." It typically uses ZFS as a virtual disk container, giving it fewer filesystem layers and fast direct I/O to the disk.
It's also been ported to MacOS and Illumos (Solaris) due to its simpler implementation (perhaps there is a Linux port as well?). You could think of it like lighttpd or nginx. Neither is really better or worse than the other; it just depends on your requirements and what you are trying to accomplish. It also really highlights the different design philosophies of the two OSes: Linux takes the layered approach, building on dozens of existing technologies with lots of support, while FreeBSD builds a purpose-built, performant custom solution.
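To show the contrast in practice, here's roughly the same guest created on each side. virt-install is libvirt's standard installer tool; on FreeBSD I'm using the third-party vm-bhyve frontend rather than driving bhyve by hand. Names, paths and sizes are examples:

```shell
# Linux: libvirt drives QEMU + KVM; disk is a qcow2 loopback file
virt-install --name demo --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/demo.qcow2,size=20 \
  --cdrom /tmp/install.iso --os-variant generic

# FreeBSD: bhyve via the vm-bhyve frontend, ZFS-backed by default
vm create -s 20G demo
vm install demo /tmp/install.iso
vm console demo
```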
  15. Top industry servers by popularity are Red Hat Enterprise Linux (or a clone like Oracle or Rocky), Ubuntu Server and SuSE Linux Enterprise Server. My personal favorite is FreeBSD Unix. They would be managed remotely via automation or the shell and tend not to have GUIs of their own. - Ubuntu and FreeBSD have no subscription cost. Proxmox, TrueNAS and Unraid etc. are not really used much. (in serious industry) - Your choice in what you want to use depends on your intentions.. if you want to work in the industry some day, you'll want to be familiar with the same stuff they use.
  16. The displays might not be as bad as you think. I've had 4k Dell displays and they worked just fine. If it's a standard monitor it's prob ok, but if it's an "odd gamer display" things might not be perfect. Yeah, Time Machine can mount a Time Machine disk volume inside an NFS mount. For a business you'll want each user with their own mount point and automount.. perhaps also a way to deal with stale file handles if they are mobile. - These are all NFS issues, as it expects a persistent connection.. it does work well tho when you have one. CIFS might also be possible, I never tried.
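Roughly what that looks like per user. The export line is FreeBSD /etc/exports syntax; hostnames and paths are examples, and older macOS versions may additionally need the "unsupported network volumes" preference toggled before Time Machine will accept the destination:

```shell
# Server: /etc/exports (FreeBSD syntax), one export per user
/backup/alice -alldirs -maproot=alice laptop.example.com

# MacOS client: mount the share and point Time Machine at it
sudo mount -t nfs server:/backup/alice /Volumes/tm-alice
sudo tmutil setdestination /Volumes/tm-alice
```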
  17. For another perspective. - Finder does suck. It sucks less than MS's Explorer, but it still isn't as good as Thunar and Nautilus. (which do run, but need X11) - Rando displays do sometimes suck. That being said, the Apple ones (or Apple-approved 5k ones) are superb. They have some of the best displays I've ever seen. - Never had a problem with the keyboard. EN-US works great. It uses more hot-keys than Windows does, but you'll get used to that and like them. - You'll want to check your hardware is supported. USB stuff, like on any OS, is particular to the vendor and whether they support it. - Repairability is terrible, however.. Apple support is pretty good. I've had a system replaced 3 times by Apple for free. No chance of that in PC land. - Apple has vastly superior backups with Time Machine. - There is more UI consistency in MacOS than in Windows. Even MS's own apps look alien to each other sometimes. You aren't always limited to doing things "the Apple way"; being Unix, as I mentioned before, it can run pretty much anything Linux can natively. Just install MacPorts or Homebrew and off you go. You can install all of KDE or XFCE if you wanted to. There is some benefit to having one company make the hardware and the OS. With a PC you could have half a dozen different companies making the hardware and associated drivers, and they don't always play well with each other.. On Mac, updates are seamless, there are no driver conflict problems with the system, and updates never break hardware support. Apple even now would be more private than Windows. (Windows 10/11 is like a trojan.. the amount of data they collect is criminal; worse, the OS constantly nags you so it can collect more.) - Apple collects data too, but it never nags you, and nobody, not even Google, collects more than Microsoft. Apple does not share Microsoft's ad-revenue business model, so it does not make you the product they sell to their real customers. It's not perfect... but it sucks less than the other alternatives.
All computers have problems, but as someone who's worked in the tech industry for 30 years and used dozens of OSes (most you've never heard of), all I can say is.. it has the least amount of problems of any current system out there. If it fits your use case, you will spend less time in frustration.
  18. Apple used to promote its Unix-ness but does not anymore. Still all true tho, it is the best desktop Unix you can get.. it's just not free. Things like NFS mounts, cron, rsync, PF, X11 apps, ssh forwarding, Bash or Bourne shells all work as expected.. It's pretty much got the same userland as FreeBSD (tho it's a bit out of date from FreeBSD) - you can also install the GNU userland if you.. like that... sort.. of thing.. with random bizarre flags and switches on every command, idk, maybe that is your thing. (MacOS also has DTrace, so you can actually know what the f'in thing is doing when it goes sideways, unlike Linux)
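Two classic DTrace one-liners to show what that looks like (root required; on recent macOS versions some probes also need SIP relaxed):

```shell
# Count syscalls by process, system-wide -- Ctrl-C to print the summary
sudo dtrace -n 'syscall:::entry { @[execname] = count(); }'

# Watch which files a particular process is opening ("Finder" is an example)
sudo dtrace -n 'syscall::open*:entry /execname == "Finder"/ { printf("%s", copyinstr(arg0)); }'
```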
  19. Security patches: You want to make sure it's current and updated. Ubuntu is pretty good with auto-updates, you can enable them, but you will have to reboot for kernel updates (and you do need to do this.. prob monthly at least). Limit access: SSH should be set to not allow root login and use key-based authentication only. Apache's (or another httpd's) configuration should be locked down as well. Make sure file browsing is off. You can set up Let's Encrypt fairly easily for TLS. And that should be all you need to open on the firewall. You do not want to expose Samba or syslogd to the internet (Jarsky, you maybe want to check those rules? I trust you have reasons..) You might look into Nextcloud for photo sync.. It has a security check feature. Idk how much I trust it, but it's better than nothing.. Docker on Ubuntu has some out-of-the-box security problems. (root inside a container can map to root on the host) You'll want to take care of that.. It's pretty dumb they ship like that, but that is Linux for you.. (Ahh... I'm already missing the simplicity of FreeBSD's PF and Jails.. oh well, onward with Ubuntu )
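The SSH part boils down to a few sshd_config lines plus enabling Ubuntu's unattended upgrades. A sketch (fragment only, not a complete config):

```shell
# /etc/ssh/sshd_config (fragment)
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

# Apply it, then turn on automatic security updates
sudo systemctl reload ssh
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
```

Make sure your key actually works in a second session before you reload with password auth off.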
  20. Server fans are compact, spin at a high RPM and push a ton of air. Rack space and cooling are concerns in a server environment, but noise is not. Some ppl wear hearing protection in server rooms, as they can be so loud. Aircraft sometimes make less noise than a server lab, to put that in a frame of reference. It's part of the nature of the product.. I would tell anyone, for the home: never get a server, there is no benefit to owning one you can not obtain from a PC, other than educational benefit for someone who wanted to be a lab technician.
  21. Yes.. iSCSI has some overhead (due to TCP); if you really need performance you might be able to do FC in the home.. depending on your budget. FC targets can be backed by zvols. Don't get me wrong, iSCSI is a good thing and solves a lot of problems on its own.
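On FreeBSD, exporting a zvol over iSCSI with ctld looks roughly like this (the IQN, names and size are examples; FC would use a different frontend but the same zvol backing):

```shell
# Create a zvol to serve as the block device
zfs create -V 100G tank/vols/vm0

# /etc/ctl.conf (fragment):
#   portal-group pg0 {
#       discovery-auth-group no-authentication
#       listen 0.0.0.0
#   }
#   target iqn.2012-06.com.example:target0 {
#       auth-group no-authentication
#       portal-group pg0
#       lun 0 { path /dev/zvol/tank/vols/vm0 }
#   }

sysrc ctld_enable=YES
service ctld start
```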
  22. Yeah, I've worked with ZFS since it came out in Solaris 10. Sometimes my view of it does not reflect typical home-world use. It is good technology though, and can be used in a lot of novel ways. I don't really believe you need a high level of knowledge to use it, but you do if you want to do advanced tricks with it. So, the SLOG. Due to its transactional design ZFS can't "buffer up" writes.. it wouldn't be safe to do so. (how could it ensure parity of a cached write?) They either must be on the disk or not.. but what it can do is write them to persistent storage, then reorder them into logical transaction groups and flush them to disk in batches.. that is what the SLOG really does. It helps ZFS do fewer random writes, and there is a performance boost there. Also, the SLOG does not need to be very large. Save most of the fast storage for L2ARC. +1 cmndr. One thing to add.. keep in mind a raidz configuration only has access to a single device's bandwidth at a time.. whereas mirrored pairs (like raid 10) can access many disks at once. So the choice of mirrors or raidz will depend on your capacity or performance requirements. A raid 10-like layout is a pretty good middle ground. ZFS can be very performant, but it takes some pre-planning.
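Put together, a mirrored-pairs pool with a small SLOG and a larger L2ARC looks like this (device names and the partition split are examples):

```shell
# Two mirrored pairs (raid 10 style), SLOG on a small NVMe partition,
# the rest of the NVMe as L2ARC
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  log nvd0p1 \
  cache nvd0p2

zpool iostat -v tank     # per-vdev view of where the I/O is going
```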
  23. I still feel they are pseudo-features. The same thing can be done with ZFS. You have a stripe on mirrors and hot-swap standby disks (called spares in ZFS) that are powered down via the camcontrol timeouts. You can add more mirrors dynamically or convert the hot spares to mirrors to extend. You can automate any of these features using zed, the ZFS event daemon. Also, is this *really* less work for the drives? Is it better to hammer one drive 100% of the time, or hammer a group of mirrors 20% of the time? Only the mirror that has the data needs to actually do work. It's less power, sure, but I wouldn't be so sure it extends the longevity of drives.. In a parity raid like raidz, only one drive is actually doing work at any one time.. so it's balanced across all the hardware, and ZFS also auto-balances the data allocations. (it's not necessarily a direct stripe across, and all of the disks are used for parity, not just one) - I don't know the answer to that question, but I'm skeptical of the claims it improves drive life. Heat (and vibration) is what actually kills drives, not use, and using a single drive all the time sounds like a lot of heat in one spot.. Fun fact, did you know ZFS automatically balances an array to avoid parity on disks close to each other? This is done to prevent vibration hot spots in a small enclosure. I think it's pretty neat they thought about that in the design of it. The L2ARC works with the ARC (L1ARC); it's the same thing, it knows about a cache hit or miss in either the L1 or the L2 and can adjust itself accordingly, and it does not use the god-awful FUSE implementation. (you know what the U stands for.. userland is SLOW.) - It feels like Unraid is using duct tape to try to match ZFS's native kernel-based features. I've never used Unraid.. so maybe there are clever things in there, but after speaking with you about it my impression is it's worse than I thought..
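The spare handling above is a couple of commands; which daemon reacts to the fault event differs by platform (zfsd on FreeBSD, zed on Linux). Device names are examples:

```shell
# Attach a hot spare and let ZFS swap it in on failure
zpool add tank spare da4
zpool set autoreplace=on tank

# FreeBSD: zfsd watches for faults and kicks in the spare
sysrc zfsd_enable=YES
service zfsd start
```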
  24. Well, it's good to know I'm talking to an expert. You never know around here. I'm not trying to be elitist or anything like that, I'm trying to teach novices. (Of which you are not.) You understand why I tell ppl to try and understand the mechanics below their beloved WebGUI, right? Granted, I do agree with you that UIs have a time and a place. As for Unraid.. To me (someone that manages storage) it sounds like they have a lot of pseudo-features.. what they call "standby mode" is easily accomplished in ZFS as well (FreeBSD does it automatically, tunable via camcontrol timeout), and I highly doubt their cache is anywhere near as effective as ZFS's ARC in the real world. It feels like they want to say on paper that they have ZFS's features to fill a sales sheet, but they just don't. An implementation like that, based solely on LVM and MD parity, stands a high risk of corruption, and in my case it's highly impractical because I'd want the I/O bandwidth and IOPS of all the drives, not just a few. And saying you can get "part" of your data back after a failure isn't a selling point I'd be proud of. I really feel they are tailored for the home user market. Even TrueNAS can't really cut it in enterprise. I evaluated it last year and found its enterprise features poorly implemented and lacking. (its largest problems were that it was broken in LDAP and SNMP, and it forces you to use a single admin account for the UI. If those parts don't work.. why bother?)