
MG2R

Retired Staff
  • Posts: 2,793
  • Joined

Everything posted by MG2R

  1. Your server is in Canada but your audience is in Bali. That’s your problem: you’re crossing an ocean for every request. Find a web host that’s local to you. Even a simple VM in a datacenter there will already put you ahead of whatever solution you’d build at home. Your Google ranking drops quite fast if your website is unreliable. Get a professional hosting provider in your area and don’t look back.
  2. When you say things like “my server will pay for itself”, keep in mind that there are a lot of hidden costs to hosting your own solution:
     - Electricity: running a 24/7 web service on an i5-12400 machine costs anywhere between 50 and 250 USD yearly, depending on your rate (a rough estimate is sketched after this list).
     - Opportunity cost: running something 24/7 means you’ll be spending time keeping the system going, updated, and secure, time you can’t spend on whatever it is you actually make money with.
     - Downtime costs: your simple server on your simple residential connection will not be up nearly as often as any properly hosted solution. Think about the impact of your website being down. Is it problematic? Well, congrats, you’re now on call 24/7 to keep the system going.
     - Uptime costs: to mitigate downtime, you’ll want highly available servers, power, and internet. That costs a lot of money.
     Take it from someone who has worked in the industry: web hosting is expensive if you want to do it right. The margins are terrible. Now, if this project can afford to go down and you simply like playing with technology, by all means go for it. I do it too, for those reasons. But if this is something that in any way has to be up 24/7 or your world burns down, think twice and host with a professional provider.
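     A minimal sketch of that electricity estimate, assuming an average draw and a per-kWh rate (both numbers below are placeholders, not measurements):

     ```bash
     # Rough yearly electricity cost for a 24/7 machine.
     avg_watts=75        # assumed average draw for an i5-12400 build
     rate=0.25           # assumed price per kWh
     awk -v w="$avg_watts" -v r="$rate" 'BEGIN {
         kwh = w * 8760 / 1000          # 8760 hours in a year
         printf "~%.0f kWh/year, ~%.0f per year at %.2f/kWh\n", kwh, kwh * r, r
     }'
     ```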
  3. Came in here to say JB Weld. It’s already been said, so I’ll just say it again: JB Weld. My motorcycle has a hole in the engine patched up with JB Weld. Good as new.
  4. Thanks! I do remember those privacy policy changes around that time; I must’ve missed the commotion surrounding them. Their yearly transparency report is all I need. Reading that they received X requests for some data but were unable to hand it over because they literally did not have it is superb.
  5. What story did I miss? Proton is the one provider that convinced me to switch away from self-hosting my e-mail. Self-hosting was a total pain in the ass, but it was the only way I saw to keep my data private, until Proton showed up. I’ve been a paying customer for years now; so far, no regrets. The only downside is the bridge you need if you want regular IMAP and SMTP, but that shows the encryption is real.
  6. Your data, and more broadly your private life, is yours and yours alone. The argument you’re making here is a rephrased version of “I have nothing to hide, so I’m not worried about privacy”. However, the point is never what your data tells the people around you about you _right now_; it’s about what people in the future might think about your future self based on data about your current self. To exemplify why this matters, consider a European country in the 1930s where citizens were mandated to register their official religion with the local government. The reasoning was that it would allow for a proper burial in case you were to die without family members: the state would give you a burial according to your specified religion. Neat, right? A couple of years later, Nazi Germany walks through the door and finds this handy list of all the Jews in the country. Through no fault of their own and without “doing anything wrong” or “having something to hide”, their data is suddenly used against them. Citizens betrayed by data collected with the best intentions imaginable by their then-government. A tragedy. The data big tech compiles about you does not stay with big tech. It gets shared broadly with government agencies around the world. It gets leaked through cracks and attacks. It gets lost and mishandled. Privacy is a right which must be fought for continuously; once given up, you’ll never be able to claw it back. If you really don’t care about privacy, hand the password to your mail account to a colleague and let them go through it all. Sounds off, doesn’t it? Guess what Outlook and Gmail do.
  7. If it's local only, check out NoMachine as well. Performance is quite a bit better than TeamViewer's. Good enough to do game streaming, actually.
  8. The exact model of CPU is printed on the heat spreader. CPU support for the motherboard is listed on its support page: https://asrock.com/mb/intel/h97 pro4/
  9. So first off: `mdadm -E` should be run on the individual drives, not the logical array device. See the manual: -E, --examine Print contents of the metadata stored on the named device(s). Note the contrast between --examine and --detail. --examine applies to devices which are components of an array, while --detail applies to a whole array which is currently active. So, you're kinda screwed. Both sdg and sdh are reporting errors. sdh has the most recent errors, so that's probably from when you experienced the problems the first time. However, sdg reported errors from 4000 hours ago. That's roughly half a year ago if they're running 24/7! Did you have any monitoring on this whatsoever? I'm going to go ahead and say you're quite probably in big trouble. Chances are minimal (almost zero) that you'll be able to recover from this if there are actually two corrupt drives here. To make sure that's the case, please run `sudo mdadm -E <drive>` for each of your 4 individual drives. Also, please run the conveyance tests on each drive; a sketch of both is below. Your SMART data shows no tests have been run for sd{g,h,i}; only sdf has had tests run previously.
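     A sketch of those checks, assuming the drive letters from this thread (adjust to your system):

     ```bash
     # Examine the RAID metadata on each member drive (not on /dev/mdX),
     # then kick off a short SMART conveyance test on each one.
     for d in /dev/sdf /dev/sdg /dev/sdh /dev/sdi; do
         sudo mdadm -E "$d"
         sudo smartctl -t conveyance "$d"
     done
     # A few minutes later, read back the results per drive:
     sudo smartctl -a /dev/sdf
     ```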
  10. Work from the ground up (a command sketch follows this list):
     - Stop all services using the array.
     - Unmount the array.
     - Stop the array.
     - Use smartctl to run a conveyance test on each drive, verifying the drives individually are still healthy.
     - If at most one drive is unhealthy, start the array.
     - Verify the array itself is healthy with mdadm.
     - If it is healthy, verify the filesystem on top of it is healthy (which filesystem are you using?).
     “Turning it off and on again” is only useful if you don’t care about diagnosing the problem. With storage you ALWAYS care about diagnosing the problem. mdadm is a kernel subsystem; it’s possible there’s pertinent information in your kernel messages (dmesg), which get cleared on boot. I hope you have your system set up to capture and persist those on disk. Look under /var/log/kernel, and also have a look at /var/log/messages or similar (journalctl?) for clues on your error. Chances are the RAID is fine but the filesystem is borked due to a comms error with the drives during writes. You can usually recover from this with minimal data loss if you work methodically. When it comes to data management, ALWAYS think and understand before taking intrusive actions like rebuilds or reboots. Make backups. For the love of god, make backups.
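     A command sketch of that procedure, assuming an ext4 filesystem on /dev/md0 mounted at /mnt/array (all names are placeholders):

     ```bash
     sudo systemctl stop smbd nfs-server      # stop whatever uses the array
     sudo umount /mnt/array                   # unmount it
     sudo mdadm --stop /dev/md0               # stop the array

     for d in /dev/sd{f,g,h,i}; do            # test each member drive
         sudo smartctl -t conveyance "$d"     # check later with: smartctl -a "$d"
     done

     # Only if at most one drive turned out unhealthy:
     sudo mdadm --assemble --scan             # start the array again
     sudo mdadm --detail /dev/md0             # verify the array state
     sudo fsck.ext4 -n /dev/md0               # read-only filesystem check
     journalctl -k | grep -iE 'md0|ata|error' # look for clues in kernel messages
     ```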
  11. If you're already used to running Linux without the special TrueNAS or Unraid sauce... why go for the more "magic" solution? You can run your game servers directly on Linux.
  12. You can mirror each set of drives and then JBOD/stripe across those two volumes. ZFS, Btrfs, LVM, Storage Spaces, and Unraid all let you do something like that (ZFS example below).
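     For example, in ZFS that layout is just two mirror vdevs in one pool (pool and device names are placeholders):

     ```bash
     # Two 2-way mirrors; ZFS stripes writes across the two vdevs.
     sudo zpool create tank \
         mirror /dev/sda /dev/sdb \
         mirror /dev/sdc /dev/sdd
     sudo zpool status tank
     ```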
  13. It'll be heaps faster than your current solution for sure. Go with option one. If you do end up outgrowing the memory capacity: DDR4 is pretty cheap second-hand. I think if you shop around you'll be able to get option 1 plus another 8 GB stick of RAM for the same price as option 2, making it clearly the better solution.
  14. Anecdotal, but I used to have a gaming rig that had sat unused for literal years with water in it. Still ran fine. @Whiskers do you know if it still works now? For a more general statement: I think your answer here depends on a LOT of different factors. Off the top of my head: pump type, coolant type, additives. Evaporation rate probably matters too, as that will increase the relative amount of contaminants in the loop.
  15. Also, set up weekly scrubs and an automated way of alerting you of failures. I've found it advantageous in the past to add a hot spare to the zpool. That's how I'm running currently: weekly scrubs, with the hot spare immediately pulled in to resilver at the first I/O errors on a drive (sketch below).
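     A sketch of that setup, assuming a pool called "tank" and a spare drive (names are placeholders):

     ```bash
     # Attach a hot spare to the pool:
     sudo zpool add tank spare /dev/sde

     # Weekly scrub via cron (many distros ship a similar job already):
     echo '0 3 * * 0 root /usr/sbin/zpool scrub tank' | sudo tee /etc/cron.d/zfs-scrub

     # For alerting, ZED (the ZFS Event Daemon) can e-mail you on faults;
     # set ZED_EMAIL_ADDR in /etc/zfs/zed.d/zed.rc and make sure zed is running.
     ```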
  16. Then how do you not have backups already? Either rent a Fireball (or Snowball), or use the money you would spend on that to get some more drives and move the irreplaceable data over. With that kind of money you should be able to get about 40 TB worth of drives. This should allow you to set up a new ZFS pool, to which you can copy your irreplaceable data. Once that's safe, remove the single-drive vdev from the pool, figure out which drives are faulty, and get them out of your pool. Now you should be left with enough space to juggle your data and rebuild your original pool. Then use the new drives to implement proper backups of your irreplaceable data; I'd recommend using Borg Backup (a minimal example below).
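     A minimal Borg Backup sketch for the irreplaceable data, with placeholder paths:

     ```bash
     # Create an encrypted repository on the backup drives, take a dated
     # snapshot of the important data, then thin out old archives.
     borg init --encryption=repokey /mnt/newpool/borg-repo
     borg create --stats --compression zstd \
         /mnt/newpool/borg-repo::'irreplaceable-{now:%Y-%m-%d}' \
         /tank/irreplaceable
     borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /mnt/newpool/borg-repo
     ```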
  17. Also, you're going to want to use some sort of secret management for those DB passwords so they're not just sitting there in plaintext. I've never worked with Portainer, but according to their docs (https://docs.portainer.io/user/docker/secrets) it simply uses Docker Swarm under the hood, with Docker Swarm secret management. Given that that's the case, I'll attach my Docker Swarm Nextcloud setup. You'll see it puts all persistent data on a ZFS pool and uses a Docker Swarm secret for the DB password. I don't use env vars for Nextcloud's settings, as that allowed me to go through the nice initial setup page in the beginning and set up the DB schema through the web UI in Nextcloud. Further, you'll see it uses Traefik to expose my stuff to the outside world, as well as to handle TLS offloading and certificate management with Let's Encrypt. Lastly, it uses some Swarm cronjobs to perform regular maintenance tasks. A sketch of how the secret mechanism works in general is below. All that said, though, if you're just starting out right now, do yourself a favor and do not use Docker Swarm. It's pretty damn obsolete, as it's not maintained by Docker anymore. I'd try to standardize on some Kubernetes-based system instead. My stack was set up like this when Docker Swarm was still somewhat relevant years ago; I made the wrong call then and should've jumped on the k8s train. Hindsight is 20/20. nextcloud.yml
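     For reference, this is roughly how a Swarm secret for the DB password gets created and used; the names here are placeholders, not the ones in the attached file:

     ```bash
     # Create the secret from stdin:
     printf '%s' 'my-db-password' | docker secret create nextcloud_db_password -

     # The stack file then lists the secret for the DB service, and the image
     # reads it via the *_FILE convention, e.g.
     #   MYSQL_PASSWORD_FILE=/run/secrets/nextcloud_db_password
     docker stack deploy -c nextcloud.yml nextcloud
     ```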
  18. Those log lines you're getting are very generic Apache and Nextcloud log lines, not indicating problems. The log file you're looking for is a file named nextcloud.log under your data directory, wherever that lives. See also the logging docs: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/logging_configuration.html However, without further info, what this looks like to me is that you're addressing your database wrong. You specify: - MARIADB_HOST= xxx.xxx.xxx.xxx:3306 What IP address are you using? Your mariadb container will get a random IP address in the `mysql` network when you spin it up. I'm guessing you specified the IP address of the host you're running on, which will not be exposing this service on its physical TCP stack (sketch of the usual fix below). I've never worked with Portainer, though, so I'm just going off of the Docker Swarm knowledge I have, seeing as the config you showed seems to be compatible with that. I also don't see how you're spinning up that `mysql` network. Is it a simple `docker network create` with type overlay?
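     A sketch of the usual fix, assuming the DB service is called "mariadb" and both services share the `mysql` network:

     ```bash
     # With both containers on the same user-defined network, Docker's internal
     # DNS resolves the service name, so no host IP is needed:
     docker network create --driver overlay mysql   # drop --driver overlay outside Swarm
     # Then in the Nextcloud environment:
     #   MARIADB_HOST=mariadb
     ```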
  19. Instead of opening your system to the world, run WireGuard and bring your client into the network (minimal sketch below). That, or run a zero-trust solution like ligistx suggested.
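     A minimal WireGuard sketch on the server side, with placeholder keys, addresses, and port:

     ```bash
     # /etc/wireguard/wg0.conf on the server:
     #   [Interface]
     #   Address = 10.8.0.1/24
     #   ListenPort = 51820
     #   PrivateKey = <server-private-key>
     #
     #   [Peer]
     #   PublicKey = <client-public-key>
     #   AllowedIPs = 10.8.0.2/32
     sudo wg-quick up wg0          # bring the tunnel up
     # Only UDP 51820 needs to be reachable from outside; everything else stays closed.
     ```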
  20. Just as an addendum: my server is currently serving a video over Plex and moving files in my library, and power usage is 150 W. Peak wattage is 210 W for my configuration. Again: PowerEdge R510, 8 HDDs, 2x8 GB DDR3, Intel Xeon E5620, single PSU in use (I don't have redundant power), an HBA rather than a RAID controller, and 1 SATA SSD. Take maybe 110 W average, and that means about 1 MWh or 1000 kWh per year. For me, that's roughly 300-500 euro/year of electricity, depending on whether or not we're currently in a pandemic. I'm fortunate enough to have family with a surplus of solar power.
  21. While it's fine, and I would advise using an SMB share mounted on all your clients so you can "make available offline" all games you want to play on your computer, I would also urge you to take a look at https://lancache.net/. It's a local Steam (and Windows Update) cache which is designed to serve clients at LAN parties. It's probably a great solution for your use case as well.