
MG2R

Retired Staff
  • Posts: 2,793
Everything posted by MG2R

  1. It'll be fine. I'm running ZFS + Nextcloud + Plex + linux iso acquisition system + Borg backup + Docker Swarm on an Intel Xeon E5620. Idle power of the R510 server it's housed in, with 8 HDDs, is about 90 W, so there's that. The only thing I wouldn't expect from that specific CPU is Plex transcoding: I can barely transcode a single 1080p stream on mine, and it has double the cores and quadruple the threads of yours. Simple file serving, though, will not be a problem. My system basically runs at idle except for Plex. Edit to add: if you have it lying around, just set it up and test? At worst, you'll spend an evening tinkering with computers.
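     If you want to trial the Plex part specifically, a throwaway Docker container is a quick way to test transcoding on your hardware. A minimal sketch; the host paths are placeholders, point them at your own config and media directories:

     ```
     # quick throwaway Plex trial using the official Docker image;
     # /srv/plex/config and /srv/media are placeholder host paths
     docker run -d --name plex-test \
       -p 32400:32400 \
       -v /srv/plex/config:/config \
       -v /srv/media:/data \
       plexinc/pms-docker
     # then browse to http://<host>:32400/web and try forcing a transcode
     ```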
  2. Everyone here suggesting two identical systems… let me counter with: have a single NAS, backing up via Borg Backup to a drive box in your shed. The NAS can be either turnkey (e.g. Synology) or home-brew (e.g. Nextcloud on top of Linux). In your shed, place a small, low-power "box with drives" and run Borg to take nightly backups (a sketch below). This will be the most cost-effective solution and, as a bonus, will protect against accidental deletion as well as malware, given that Borg does deduplicated, snapshotting backups.
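     As a rough illustration of that nightly Borg flow; the host name, repo path, and retention numbers here are assumptions, not a prescription:

     ```
     # one-time: create an encrypted, deduplicating repo on the shed box
     borg init --encryption=repokey ssh://backup@shedbox/./nas-backup

     # nightly (e.g. from cron): take a snapshot, then thin out old archives
     borg create --stats ssh://backup@shedbox/./nas-backup::'{hostname}-{now}' /mnt/nas
     borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
       ssh://backup@shedbox/./nas-backup
     ```

     Because archives are deduplicated, each nightly run only stores changed chunks, which is what makes the snapshot history cheap enough to double as protection against deletion and ransomware.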
  3. Given that you're seemingly unable to even Google the proposed solutions and their benefits/drawbacks, I would steer absolutely clear of rolling your own solution. Honestly. Running your own server also means taking care of your own backups and security, something you clearly are not ready for given your "they're hacked so they're bad" attitude. Go with a cloud solution with local copies of all files: OneDrive, Google Drive, or a Nextcloud provider. All of these have Windows Explorer integrations which allow you to "make available offline" your entire drive. This provides resilience to outages.

     If you MUST have a local solution: listen to the others and go Synology. It's by far the best turnkey solution for a NAS. Set up RAID, as well as an offsite backup. For access, use WireGuard to run a VPN on each client that can roam (see the sketch below).

     If you absolutely are set on rolling your own solution: set up a box with redundant drives, install some kind of RAID-like solution, then install Nextcloud. Again, run WireGuard so your system is not directly exposed to the internet.

     If the previous two paragraphs don't have enough information to get going and figure it out, realise that you are not ready to host your own business solution and hire someone who is. Either that means Microsoft or Google (cloud storage), or a contractor who sets up and maintains this NAS for you. There's no shame here. You have a business to run. Why are you wasting time trying to run your own infrastructure? Unless you already know how, you should be outsourcing this and focusing on your core business.
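     For the WireGuard part, a roaming client really is just a handful of lines. A sketch, where every key, address, and hostname is a placeholder:

     ```
     # write a minimal roaming-client config (all values are placeholders)
     sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
     [Interface]
     PrivateKey = <client-private-key>
     Address = 10.8.0.2/24

     [Peer]
     PublicKey = <server-public-key>
     Endpoint = nas.example.com:51820
     AllowedIPs = 10.8.0.0/24
     PersistentKeepalive = 25
     EOF
     sudo wg-quick up wg0   # bring the tunnel up
     ```

     AllowedIPs restricted to the NAS subnet means only NAS traffic goes through the tunnel, and the keepalive keeps NAT mappings alive while the client roams.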
  4. @OP, it honestly sounds like you have a bunch of hardware you're trying to find a use for, as opposed to having a use case and trying to make your current hardware perform that duty. IMO, don't. Just sell whatever you have no use for at the moment. Plugging it in for something you don't need will just waste power and make it depreciate that much faster.
  5. Man, that is one damn sexy computer. Why would you not want all of that glorious brown-ness on glass display?
  6. The little Noctua that could. It held up nicely to over a year of abuse on my enduro bike. The zip ties holding it in place failed, causing it to get a bit too cosy with the exhaust.
  7. Sounds to me like the firmware flash on the drive corrupted some data, causing Windows to break, which in turn messed with UEFI, causing boot issues. Then, during disassembly, you either bumped a power cable loose or knocked the sticks of RAM, or both. Windows recovery fixed the data corruption, the CMOS clear fixed the UEFI weirdness, and the hardware reseat fixed the bad connection. Only a theory, of course. I've never heard of an "SSD not liking WSL". Bits are bits. WSL is nothing but a Linux VM in Hyper-V, so unless there is an issue with virtualisation on your machine, I don't see how that would be related.
  8. wow, super interesting. Thanks for the feedback, I’ll update the post
  9. @Jarsky "Data is secure if your server is breached": this is not true if the system mounts the decrypted drive as a filesystem. Once it's mounted, anything running on that server can read the data in plaintext (see below).
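     To make that concrete, here's a sketch with hypothetical device names and paths. At-rest encryption only protects an unmounted or powered-off volume; after unlock, file permissions are the only barrier left:

     ```
     # unlock and mount an encrypted volume (device/paths are hypothetical)
     cryptsetup open /dev/sdb1 securedata    # prompts for the passphrase once
     mount /dev/mapper/securedata /mnt/data

     # from this point on, any process with read access sees plaintext,
     # including an attacker who has breached the running server
     cat /mnt/data/customer-records.txt
     ```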
  10. Is this simple file hosting? Your best bang for the buck is probably going to be either kimsufi.com or soyoustart.com; these are the hobbyist and prosumer subsidiaries of OVH. If using a VPS, you will still need 500 GB of storage if you want to store 500 GB of data, and cloud storage gets expensive extremely fast. EDIT: apparently I missed the idea to use Google Drive as backing storage. Would not recommend. I guarantee it will be slow with enough clients.
  11. To be honest, it sounds like you've figured this one out... Sounds like a mobo failure. I hope you can RMA without issue. Sucks to not be able to help more than that.
  12. Hardware-wise: you can often find good deals on old servers on eBay. Watch out for proprietary RAID cards though; they're a bit of a chore to get working right. I'd suggest flashing them to simple HBA mode, or trying to find a system with an HBA in the first place. Going that route does mean giving up some power efficiency, but if the upfront costs are low enough, that might just be worth it. I've been running a Dell R510 for quite a while now. Old system, but she runs Plex, Nextcloud, a UT99 game server, and some other media-organising services just fine. For Plex I'm limited to a single transcoding stream though, and I can't handle 4K Blu-rays, mostly because the CPU in my system does not have an iGPU with Quick Sync. That system with 8 drives runs at 90 W continuous, to give you some idea. The 8 drives are a random assortment acquired throughout the years: a nice mix of WD Greens and Seagate ST1000s of varying sizes. The oldest drives have 60k+ power-on hours now. I'm running a ZFS pool of three vdevs: one vdev in raidz1 (RAID5-like) using 3x 4 TB drives, one vdev in mirror with 2x 4 TB drives, and one vdev in mirror with 2x 2 TB drives. The final drive is a 4 TB hot spare (layout sketched below). The whole thing is backed up nightly using Borg Backup to an offsite system.
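     For reference, that pool layout translates to a single zpool create. Device names below are placeholders; a real pool should use stable /dev/disk/by-id paths:

     ```
     # 3x 4TB raidz1 + 2x 4TB mirror + 2x 2TB mirror + one 4TB hot spare
     zpool create tank \
       raidz1 /dev/sda /dev/sdb /dev/sdc \
       mirror /dev/sdd /dev/sde \
       mirror /dev/sdf /dev/sdg \
       spare /dev/sdh
     ```

     Writes stripe across the three vdevs, and the spare automatically replaces a failed disk in any of them while you source a replacement.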
  13. There are many things here that are both very specific and yet very fuzzy. Pretty much anything in this list is achievable reasonably easily, except for:

     1. Power efficiency. Running your own server gets expensive fast. Even ignoring compute requirements, running 10 hard drives will draw at the very least 50 W continuously. 50 W × 0.001 kW/W × 24 h/day × 365 days/year = 438 kWh/year. At current prices where I live, that's about 300 euro per year (quick calc below). And again, that's not taking into account actual processing power, networking, etc. That's just hard drives.

     2. Using the NAS as "local storage". On the surface it might seem easy to do: a 10 Gbps network connection will net you ~1 GB/s of throughput if the storage can handle it. But that will not actually give you proper performance when used to host program files for Windows. Windows network drives use the SMB protocol, which will not be anywhere near performant enough for random IO. With SMB Multichannel, maaaaybe. But I'd doubt it.

     3. Hosting a mail server. Just don't. Speaking from experience here: I ran my own mail server for years and years before switching over to a hosted solution at Protonmail. Even ignoring the fact that residential connections typically do not provide a static IP address, nor allow outgoing traffic on typical mail server ports (to counter spammers), the process of setting up and maintaining your own mail server is pure masochism. Keeping your server off spam lists, making sure big mail providers trust you, keeping everything secure... really, don't bother. Especially because mail is ever-increasingly the centrepiece of your online security. Unless you make it your actual profession, do not run the risk of having your online life hijacked by a script kiddie.

     The other requirements: for Plex, a simple Core i3 will do fine as long as you expose the iGPU to Plex. Hosting websites is easy enough for pretty much any machine, if the traffic is low. Hosting game servers should be similarly easy, provided the games you want to host support dedicated servers. I'm a bit of a Linux nerd so I'd run my own solution, but if you want an easy time, look into something like TrueNAS (formerly FreeNAS).
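     The energy arithmetic as a one-liner, if you want to plug in your own wattage and tariff. The 0.68 EUR/kWh figure is just an assumption chosen to match the ~300 euro estimate above:

     ```
     # yearly energy and cost for a constant load; tweak w and price to taste
     awk 'BEGIN { w=50; price=0.68; kwh=w/1000*24*365;
                  printf "%.0f kWh/year, ~%.0f EUR/year\n", kwh, kwh*price }'
     # prints: 438 kWh/year, ~298 EUR/year
     ```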
  14. There are many websites in LMG's portfolio. Off the top of my head: linustechtips.com, linusmediagroup.com, floatplane.com, lttstore.com... probably missing some, too. There's work ongoing on the new Labs website. Depending on which site you're talking about, you're talking about different teams. However, all jobs are listed on https://linusmediagroup.com/jobs Remember though, working at LMG is still working. While everything seems fun-and-games on camera, be ready to work your ass off. Good luck. Edit: ltxexpo.com
  15. If you indeed ran through it, you would have verified the GPU works in another PC. Did you do that? Looking at your noted specs, the PSU should be just fine. Can you be very specific about what exactly is happening with the computer? (Do fans spin, do you see any LEDs, are there beeps, etc.?) Which steps of the linked post did you execute exactly, and what was the result of each of those steps? When asking online, being specific with your communication is very important; we have zero context other than what you provide.
  16. I know this is not what you want to hear but, please, read before posting. Literally the first sentence in this entire thread is highlighted red and reads: To get help, you should create a new thread. Burying your request for help on page 5 of a general how-to-debug thread will not get you any eyes, and at the same time makes it harder for me to figure out whether the information in the OP needs fixing (yes, I do try to keep it up to date based on helpful tips posted here). When creating your new thread, make sure you've already tried every step in the OP. Chances are it'll indicate your problem already. Good luck!
  17. Interesting. I've never heard of this being a problem. In principle, the battery should not be needed to boot the system. Its only job is to keep the BIOS settings saved and the on-board RTC ticking, so you don't lose time while the system is off. As far as I know, your system should be able to boot even with no battery installed.
  18. For Kubernetes/containers it really doesn't matter that much, though, as you're not running full VMs per use case. There will be the control-plane overhead, but using K3s, that should be minimal (see the sketch below). Since I'm planning on replacing a single machine with 16 GiB of RAM with multiple machines of 16+ GiB each, I won't be RAM-constrained, except for the system that's going to run the virtualised gaming rig.
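     For what it's worth, bootstrapping an HA K3s control plane with embedded etcd is only a couple of commands. Node names and the token below are placeholders:

     ```
     # first node: initialise the cluster with embedded etcd
     curl -sfL https://get.k3s.io | sh -s - server --cluster-init

     # remaining nodes: join as additional control-plane servers
     curl -sfL https://get.k3s.io | K3S_TOKEN=<token-from-first-node> \
       sh -s - server --server https://node1.example.com:6443
     ```

     With three such servers, losing any single node keeps the API and etcd quorum alive, which is the HA property I'm after.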
  19. I specifically want a cluster to play around with distributed storage, HA VMs, and so on. The idea is to learn on this while also using it to replace my current, aging server. I overlooked that I didn't need an APU, thanks for letting me know! Interesting that you mention those ASRock Rack barebones. Ever since getting the R510, I've made a point of not looking at pre-built servers or barebones, because the non-standard server parts annoy me to hell. The RAID card in the Dell especially has been problematic for me. However, it seems the ASRock Rack barebones use pretty standard parts throughout, which makes them very interesting. It looks like it would be a touch more expensive, but the 1U case has drive sleds accessible at the front rather than the internal hard-mounts I was going to go for. Interesting, I'll take it into consideration. Thank you!
  20. Right, so, I'm trying to piece together a new cluster to replace my old server. The system I'm replacing:

     Dell R510
     16 GB DDR3-1066
     Intel Xeon E5620 (4 cores, 8 threads, 11 years old, 2.4 GHz)
     4x 4 TB + 4x 2 TB hard drives, running ZFS (3x 4 TB raidz1, 2 mirrors of 2x 2 TB, one 4 TB hot spare)

     It runs Docker Swarm with:

     Nextcloud
     Plex (1 concurrent transcode, 1080p is the maximum) + content... management system
     Gitea
     Factorio server
     UT99 server

     As you can imagine, this system is pretty much at the very edge of its capabilities with that selection of software. I'm looking to replace it with a cluster of at least three machines so I can run HA Kubernetes with Ceph/Rook as distributed storage. Per-machine requirements:

     Rack-mounted, smaller == better, as I may end up with co-location, which is priced per U
     At least redundant 10 Gbps networking
     16+ GiB ECC RAM
     Room for at least 4 drives
     500+ GiB NVMe drive, probably 1 TiB

     One of the machines will be outfitted with a GPU to run a gaming VM. Maybe I'll put in a second GPU for Plex if needed; I'm looking to support at least 2 concurrent 4K transcodes. If possible, I'd like to minimise power draw. Electricity in Belgium is quickly becoming stupidly expensive. Currently I'm looking at this:

     ASRock Rack B550D4ID-2L2T
     AMD Ryzen 5 5600G
     2x 8 GiB ECC DDR4-2133

     Which would run me about 1300-1500 euro per machine before a GPU. What do you guys think about this? Are there better options for what I'm trying to achieve here?
  21. @jj9987 is correct. Build the app and put it live as cheaply as possible. Unless you have actual reason to believe you're going to have hordes of people flocking to this site overnight, you should be able to scale up as you go. When building, try to bundle whatever you're doing in Docker containers, using docker-compose to coordinate spinning your services up (a minimal sketch below). This'll allow you to run your dev environment on your laptop in the same manner as you're running it in prod. It also allows you to scale to bigger or multiple machines without having to do much configuration at the infrastructure level. Further, putting everything in containers allows you to run your workload on things like hosted Kubernetes clusters for easy scalability. I'd recommend taking a look at https://www.digitalocean.com. While not the cheapest or most powerful (business-feature-wise) cloud provider at big scale, for single devs and small startups they do provide a very intuitive system at very fair pricing, and their performance is great. They were one of the first providers with all-SSD servers. I'm personally running a VM there to act as a gateway to other servers.
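     A minimal sketch of what that looks like in practice; the service names and app image are illustrative, not anything specific to your project:

     ```
     # docker-compose.yml: the same file drives the dev laptop and the prod box
     cat > docker-compose.yml <<'EOF'
     services:
       web:
         image: myapp:latest          # illustrative app image, built from your repo
         ports:
           - "8080:8080"
         depends_on:
           - db
       db:
         image: postgres:15
         volumes:
           - dbdata:/var/lib/postgresql/data
     volumes:
       dbdata:
     EOF
     docker compose up -d             # identical invocation in dev and prod
     ```

     Because the compose file pins the whole stack (app, database, volumes), "works on my machine" and "works in prod" become the same statement, which is the point of the approach.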