
bloodthirster

Member
  • Posts

    114
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Interests
    music, computer things, photography, collecting skulls for the skull throne
  • Occupation
    professional something


bloodthirster's Achievements

  1. Have you checked the logs? If the logs aren't showing anything, then chances are it's a HW issue and you need to start troubleshooting there (a few starting points below). Also, that hibernation option should just be a setting that allows the OS to hibernate if it wants. If the OS isn't initiating hibernation, it doesn't matter.
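     A few places to look, assuming a systemd distro (adjust if yours isn't):

       # errors and worse from the current boot
       journalctl -b -p err
       # same, but from the previous boot - handy after a crash or a failed resume
       journalctl -b -1 -p err
       # kernel messages mentioning suspend/hibernate
       dmesg | grep -iE 'hibernat|suspend'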
  2. What? PA was a shit show when released, but it's usable now. I get nice clear audio out of my Schiit. I can't think of a time I've personally had messed up audio that was SW caused. In the over 20 years I've been using Linux, I've had legit kernel crashes countable on both my hands, excluding things that clearly aren't the kernel's fault, like bad HW, modding the kernel, etc. Most of those were in the earlier days as well. The kernel is pretty damn solid. The BSDs are built as a complete package, but Linux has had a lot more time and money put into it in almost every aspect. It's important not to downplay that. Just look at KVM vs bhyve: how many more features and processors are supported in KVM than in bhyve? Also, that's one program that works better on BSD; you can literally find tons of Linux programs that don't even build or run on BSD. That's extreme cherry picking. You fail to mention that on BSD there isn't as much documentation, or the smaller selection of SW, or the lack of a lot of drivers, or... BSD has a lot of negatives as well. I get that you like BSD, but there are much better arguments for it than cherry-picking issues, using strawmen, and other weird reasons. Simping for an OS doesn't do anyone any good.
  3. I have a Honeycomb LX2 and have had issues with:
     - Warm boots don't work (later board revisions fix this, but Solid Run won't update old boards like mine).
     - SATA has signaling issues, so you're maxed out at 3Gbps; at 6Gbps there are so many errors the drive drops out (a way to pin the link speed is sketched below).
     - Building firmware is flaky at best. I've built firmware before, but could never get anything I built to boot, even following their instructions, both in Docker and natively.
     - The included heatsink needs upgrading if you're doing anything heavy.
     It's a shame, since if they fixed these issues I could actually recommend it here. I'm still using mine as a NAS with a SuperMicro SATA/SAS controller running in JBOD with ZFS, but I wouldn't want anyone to go through the pain I went through. It was originally going to be an ARM test and dev system, but because of the warm boot issue it's useless for that.
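     If anyone else hits the SATA problem, the usual workaround is to cap the link speed with the libata.force kernel parameter. A sketch, assuming Debian-style GRUB (the update-grub step differs on other distros):

       # /etc/default/grub - cap all SATA links at 3.0Gbps so drives stop dropping out
       GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=3.0Gbps"
       # then regenerate the config and reboot
       update-grub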
  4. Not sure how to reply inline since there's no "see markup" button.
     - Yeah, it sounds like a HDD, or a HDD with cache, is what you'll need.
     - You CAN just directly connect computers. For the longest time I connected 2 computers to my NAS via an X540-T2. A switch will make your life a lot easier though.
     - The networking stuff you need to learn is the throughput of the connections. If something is on wireless and only has 144Mbps, that's hardly anything, but if you have 5 computers connected over 10GbE, that's something completely different. There's nothing that complicated; just write out a simple diagram of the network with the speeds. See attached (I'm not an artist, sorry). You'd have 5+2.5+1 -> 10; depending on the setup and use case, even 10+10+10+10 -> 10 would be fine. It's just a rough idea of what your limitations are and what future upgrades will bring. Just something to be aware of. You don't need a CCNP or anything.
     - SW is rather simple. Most softraid on Linux is either MD or ZFS (or LVM). Both have benefits and disadvantages. There's Unraid and TrueNAS (ZFS) as baked solutions. Look into those and see what fits you best. There's tons of info on both. It sounds complicated, but it isn't really; a rough sketch of both is below.
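     Just to show how little there is to the SW side, this is roughly what each looks like (a sketch; /dev/sda and /dev/sdb are example devices, use your own):

       # MD: RAID1 mirror across two disks, then put a filesystem on it
       mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
       mkfs.ext4 /dev/md0

       # ZFS: mirrored pool; the filesystem and mount are handled for you
       zpool create tank mirror /dev/sda /dev/sdb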
  5. Food for thought, in no real order:
     - ECC RAM. Not strictly needed, but always nice to have.
     - If you use ZFS, the more RAM you have for cache, the better.
     - Having a fast cache drive means the initial load time is the same, but following accesses are much faster (assuming it's still in cache).
     - That being said, if you have all SSD drives, you don't need a cache drive... but you'll also be spending a lot more. It depends on your use case, how much you have to store, and how fast you need to access the data.
     - You'll need fast networking to see any benefit from a fast NAS though. A single SSD can saturate a 1GbE connection (rough numbers below). You'll need to plan out your networking along with your NAS; no point in having a fast network and a slow NAS, or vice versa. You can't focus on any one thing.
     - How involved do you WANT to get? How much are you willing to learn? Personally I use ZFS/samba/nfs on Debian, but that's not a solution for everyone.
     The NAS/file server topic is a can of worms, but it isn't that bad; just realize that you're going to have to do research and determine what you want/need. I'd suggest thinking about the speeds you want/need with respect to the networking limitations and going from there. Once you know the networking, you can see how much you should be able to serve, and then you can get into the NAS specs: SSD vs HDD vs layered, etc. You have to think about it as a system rather than only individual parts.
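     To put rough numbers on the saturation point: 1GbE tops out at 125MB/s raw, call it ~110-115MB/s after protocol overhead. A single SATA SSD does ~550MB/s sequential, and even a decent HDD manages ~150-250MB/s, so either one can fill a 1GbE link on its own. On 10GbE (~1.25GB/s raw, ~1.1GB/s usable) you'd need several HDDs striped, or SSD/NVMe, before the drives stop being the bottleneck. That's the "think of it as a system" point in numbers.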
  6. Why hardware RAID? You have to make sure the same HW RAID card is still going to be available when yours breaks, or just buy a spare. If you can't, all that data probably can't be accessed, since you generally need the same HW RAID card to read the data on the storage drives. Unless there's a really good reason to get a HW RAID card, it's another point of failure that can potentially leave you without access to the data on the drives. HW RAID can be good, but especially for one-off type projects, it can do more harm than good. I'd seriously ask whether adding another point of failure (one that can leave you unable to access your data) is worth the benefit you get from it.
  7. D780 and D750 are both good. D4s is awesome as well. A buddy of mine has a D750 and loved it before getting a D850 and loving it even more. I own a D4s and wouldn't trade it for anything. They're all great cameras.
  8. #1 Go to a camera shop and see what brand feels the comfiest to use. #2 Decide whether you want new or used; there's a TON of great used equipment out there for dirt cheap. #3 Generally go for full frame or the largest normal-size sensor they have (FF for Nikon and Canon, APS-C for Fuji); basically you want access to the best used lenses. #4 Don't get caught up in marketing BS and think you need the latest STUFF to take great photos and have fun. I have a Nikon D4s and wouldn't trade it for anything. Since they're so cheap, I might try to get another one as a backup. You don't need the latest and greatest to take awesome photos, and mirrorless is NOT the magic solution to all your problems. Mirrorless isn't bad, don't get me wrong, but a DSLR can take you a long long long long way in photography. Skill and practice are the most important things for great photos, not the gear.
  9. I can offer you my KVM/qemu startup command if you want; a generic example is below. On my Linux VM (in VB), I did:
     aptitude install virtualbox-guest-{x11,utils,modules}
     I have 32MB of video RAM in my VB settings too (1 monitor).
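     Along the lines of what I run (a sketch, not my exact command; the disk image, core count, and RAM size are placeholders):

       # minimal KVM-accelerated guest with a virtio disk and QXL video
       qemu-system-x86_64 \
         -enable-kvm -cpu host \
         -smp 4 -m 4096 \
         -drive file=disk.qcow2,if=virtio \
         -vga qxl -display sdl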
  10. A Scot? Can you tell me the secret to understanding Wattie?
  11. Ok, I reread the OP... The only thing I can think of is the VirtualBox guest drivers. I forget which exact ones I installed, but they're googlable (I'm out of the house now). I know those helped a good bit when I ran Linux on a Windows host. You could try qxl with KVM/qemu and see if that helps. It could also be storage that's slowing you down; I'd look at disk usage to make sure you're not pounding the drive. You can also test the individual subsystems (CPU, storage, graphics) of the guest vs the host to see where the difference is (a couple of quick checks below). Sorry about the scatterbrained post before.
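     A couple of quick ways to check the storage angle (iostat is in the sysstat package on Debian-likes; /dev/sda is an example device):

       # per-device stats every second; %util pinned near 100 means the drive is the bottleneck
       iostat -x 1
       # rough sequential read speed of the drive itself, for guest-vs-host comparison
       hdparm -t /dev/sda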
  12. I haven't used VirtualBox in Linux, but in Windows you need to enable SVM/VMX to get decent perf (a quick way to check from Linux is below). That being said, is there any reason you aren't using KVM or Xen?
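     If you're not sure whether it's enabled, on Linux you can at least confirm the CPU exposes the flags (the BIOS/UEFI toggle itself varies by vendor):

       # prints svm (AMD-V) or vmx (Intel VT-x) if virtualization extensions are visible
       grep -E -o 'svm|vmx' /proc/cpuinfo | sort -u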
  13. Looks lame and annoying. Of course, I hate JS with a burning passion, so.