MG2R

Retired Staff
  • Posts: 2,793

Everything posted by MG2R

  1. For people finding this through Google: I just ran into the same issue with the exact same GPU. Switching my monitor output from HDMI to DP fixed it immediately.
2. Coming from a thirty-something dude with planning issues: anytime I get to say "let's game for an hour", I spend easily twenty minutes of the hour re-logging in to the multiple accounts needed for a single game, reading new license terms because changing the deal after purchase is apparently a thing, and updating my local single-player game with forced internet connections and updates. Just let me play the damn game already. Thanks for coming to my TED talk. Sincerely, an annoyed gamer trying to finally get into MSFS2020 after a month or two. Update: somewhere between Steam's cloud saves and MS' forced online crap they lost all my settings, custom profiles, etc... I ain't playing today. Thanks, modern gaming. Update 2: my progress through flight training has been lost as well. Honestly, this is horrendous.
3. If you Google "chamberlain capxs", the certified product in your screenshot, it takes you to this: https://www.liftmaster.com/smart-video-intercom-s/p/CAPXS The building category is not for buildings themselves but for devices that are meant to be installed into buildings, as part of the building itself. In this specific case, it's an ecosystem of video doorbells.
4. Do yourself a favour and set up WireGuard. Dead simple to set up while also being very high performance. It's built into the Linux kernel and there are userland implementations on basically any platform. A minimal sketch is below.
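     A minimal sketch of a point-to-point WireGuard setup, assuming Linux with wireguard-tools installed; the addresses, port, and key placeholders are illustrative choices, not anything specific to your setup:

     ```sh
     # Generate a keypair on each machine.
     wg genkey | tee privatekey | wg pubkey > publickey

     # Write the config on the "server" side (addresses, port, and keys
     # are placeholders):
     cat <<'EOF' | sudo tee /etc/wireguard/wg0.conf
     [Interface]
     Address = 10.8.0.1/24
     ListenPort = 51820
     PrivateKey = <server privatekey>

     [Peer]
     PublicKey = <client publickey>
     AllowedIPs = 10.8.0.2/32
     EOF

     # Bring the tunnel up now and at every boot.
     sudo systemctl enable --now wg-quick@wg0
     ```

     The client side mirrors this, with the server as its [Peer] plus an Endpoint line pointing at the server's public IP and port.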
5. Is there a reason you specifically want DAS? Because reading this, it sounds like your use case would really benefit from a NAS.
6. I had a huge post typed up on my phone detailing all the considerations you should make when looking at drive layout. Ready to press send. Somehow triggered a back action, causing me to lose the entire post. That's half an hour of my life I'll never get back. Bottom line was: I think I would personally go for a 3-drive RAIDZ1 vdev (assuming ZFS) in a zpool; see the sketch below. Gives you the option to expand to a second three-drive vdev in the future with the SATA ports on your board. @LIGISTX might have good ideas about this as well.
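     A rough sketch of that layout in ZFS commands; the pool name and the /dev/sdX identifiers are placeholders (use /dev/disk/by-id paths in practice so the pool survives device renumbering):

     ```sh
     # Create a pool with one 3-drive RAIDZ1 vdev (survives one drive failure).
     zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

     # Later: expand capacity by adding a second 3-drive RAIDZ1 vdev.
     zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf

     # Verify the layout.
     zpool status tank
     ```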
7. Hard drives: the only thing that really matters is that you do not buy shingled drives (SMR technology). They have subpar write performance. Case: whatever suits your fancy. Other hardware: your components are overpowered for simple NAS use, so no worries there. I'd indeed prefer TrueNAS Scale over Core, mostly because I'm more comfortable with Linux than BSD. The underlying technologies of Scale are more in line with general-purpose server software you might see in a datacenter; for BSD, that's typically less so. For a simple NAS you can always go Windows as well, assuming you already know it.
8. I am literally building something like this right now, based around a Hardkernel Odroid H3+ SBC. It's got two SATA ports on-board as well as an NVMe slot. The NVMe slot can be populated with a board that splits it out into 5 more SATA ports. This gives you enough ports for 6 hard drives and an SSD for boot and cache. The board has integrated dual 2.5 Gbps NICs, so bandwidth should be no issue. I've tested it doing 3 simultaneous Plex transcodes (one 4K, two 1080p) while the board never broke 10 W of power. Do realise your drives alone can push 50 W if they're not power optimised. Might need to use 2.5-inch drives to stick to the power budget.
9. While there's a theoretical possibility of this happening, we're talking about near-zero loads, where the capacitor smoothing the output does not discharge fast enough to make the voltage drop faster than the lowest PWM duty cycle charges it again. In practice, constant-voltage charging circuitry is mostly a solved problem. The output stage typically has overvoltage protection built in. Any cheap power brick will most likely work fine as long as it has the amps needed to power the load.
10. At their core, power bricks are simple constant-voltage circuits. Even with zero load, their voltage should stay around their rated output. Even the really cheap ones these days should be _fine_, regardless of how over-specced they are. These things are designed to power pretty much anything you plug into them, and that includes devices that power down. They can't run out of spec at low draw. PS: for power supplies, it's mA or A, not mAh or Ah. Ampere (A) is a unit of electric current; amp-hour (Ah) is a unit of capacity. Batteries are rated in Ah as the amount of amps they can supply for an amount of hours: 10 Ah is 10 A for 1 hour, or 1 A for 10 hours, or any other combination of current and time whose product is 10. A power supply can supply a certain amount of amps indefinitely, hence no "h" in the unit.
11. If software is not your forte, and assuming you're already familiar with Windows, it's entirely possible to use Windows as your server OS. You can rip your disks on Windows and use Plex to host them. Regarding straight disk copies: https://support.plex.tv/articles/201358273-converting-iso-video-ts-and-other-disk-image-formats/ That's not supported by Plex, and I would assume the same of any other similar application. Solutions are in the linked post. If you do want Linux-based systems, the usual suspects are TrueNAS, Unraid, or straight-up Linux.
12. The dust is likely fine. If it just stopped working, one of two things is likely dead: the power supply or the router. I'd bet on the power supply first. Try to find a new one with the same voltage and at least the same amperage as the original unit and see if it powers up.
13. Literally "any" drive, if you're willing to replace them as necessary. As long as they're not shingled, pretty much any drive will saturate a gigabit connection while streaming, especially if you're going with Unraid. Just chuck drives in there as you go, adding capacity as needed. One tip: think twice before buying all your drives in one lot. Chances are high you'll get drives from the same manufacturing batch, increasing your chance of simultaneous failures.
14. Depending on what you define as cheap… the Odroid HC4 is kinda perfect for this purpose: https://www.hardkernel.com/shop/odroid-hc4/ Put in two 16 TB drives for RAID1 reliability and you're off to the races. Absolutely sips power. I've been running an HC1, the single-bay 2.5-inch older brother of the HC4, for years now as my dedicated offsite backup machine.
15. Regarding transcode horsepower: I recently acquired a couple of Odroid H3+ boards. They have 10 W Intel N6005 chips in them. I can push those to multiple 4K Plex transcodes using Quick Sync, even while the GPU was rendering a full Windows UI. Even without hardware acceleration, CPU transcoding was possible. You really don't need much anymore.
16. Just to set some expectations here: your drives are capable of 3-5 GB/s (gigabytes per second) throughput. That is 24-40 Gb/s (gigabits per second). Networking gear in that order of magnitude will cost you accordingly. If you do switch to very high-speed networking, consider the protocol limitations of Samba; this is well documented in a multitude of LTT videos. Getting Samba to work at 20 or 40 Gbps is quite the challenge. What is your actual use case here? Which problem are you solving? If you just want faster networking for the sake of faster networking, I'd get 10 Gbps gear. It's fast but not too expensive these days. The cheapest option will be a direct link between server and workstation. Keep your current networking as-is for internet access, but add a new network card in each machine, then directly attach them to each other. You can configure each machine with a static IP address on the new network interface, and add a route so they know they can reach the other machine on said interface. The easiest way to do this is to pick a new, small subnet that does not overlap with your current home network. E.g. add 172.20.0.2/30 and 172.20.0.3/30 to the interfaces and they should be able to reach each other through the direct network connection; see the sketch below. This avoids having to buy an expensive switch, but of course limits you to two machines.
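     A sketch of what that looks like on Linux with iproute2; the interface name enp5s0 is a placeholder for whatever the new card shows up as:

     ```sh
     # On the server:
     sudo ip addr add 172.20.0.2/30 dev enp5s0
     sudo ip link set enp5s0 up

     # On the workstation:
     sudo ip addr add 172.20.0.3/30 dev enp5s0
     sudo ip link set enp5s0 up

     # Assigning the /30 adds the connected route automatically; verify:
     ip route show | grep 172.20.0
     ping 172.20.0.3   # from the server
     ```

     Note these commands don't persist across reboots; make them permanent with whatever your distro uses for network config (netplan, NetworkManager, systemd-networkd).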
17. If you're getting absolutely nothing and you've verified that 1. you have power from the outlet, 2. your PSU works, and 3. everything is plugged in correctly, the most likely culprit is a dead motherboard. It sucks, but DOA boards do happen. You could try building the PC outside of the case. I recently heard a tale about someone whose mobo was shorting out on the case standoffs, preventing it from starting. Edit: just re-read your post to see you already did that. RMA the board.
  18. Reworded a bit to make it fit the rest of the instructions and to take into account you can’t edit bios settings when the system is not POSTing. Thanks!
19. This is because the container's Dockerfile has the statement `WORKDIR /app`, meaning that if you perform `ls data`, that's equivalent to performing `ls /app/data`. Edit: the system works https://github.com/pawelmalak/flame/blob/master/.docker/Dockerfile#L3
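     A quick way to see the effect yourself, assuming the running container is named flame (the container name is a placeholder):

     ```sh
     # WORKDIR /app makes /app the working directory for commands run in the container.
     docker exec flame pwd        # prints /app
     docker exec flame ls data    # same as: docker exec flame ls /app/data
     ```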
20. If you're on a wired gigabit network, you can stream files at 100+ MB/s. Not sure what files you're working with or what performance you're expecting, but a typical external hard drive wouldn't be much faster. I wouldn't even bother with the external hard drives; it adds too much manual labour. If backups are not 100% automated, I consider them not backups at all. You don't want to be thinking about your storage, you want to be working on your core business. Set it up properly, get some form of automated alerting in case it fails (a sketch is below), and then stop worrying. Oh: *test* your failure recovery before putting mission-critical data on your new hardware. Put some test data on there and yank a drive. Learn how to recover from that failure. Then set up backups and try recovering your data from one. The amount of times I've seen businesses in trouble because they never bothered verifying they could work with their backups is astounding.
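     A minimal sketch of the "automated with alerting" idea, assuming a Linux box with rsync and a working mail setup; the paths and address are placeholders:

     ```sh
     #!/bin/sh
     # Nightly backup job (run from cron); mails an alert if the copy fails.
     if ! rsync -a --delete /srv/data/ /mnt/backup/data/; then
         echo "Backup failed on $(hostname) at $(date)" \
             | mail -s "BACKUP FAILURE" admin@example.com
     fi
     ```

     The alert on failure is the important part: a backup that silently stops running is exactly the scenario to avoid.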
21. I'd go for the combo of a NAS and an external backup solution. Reasoning is that you need to tackle both availability and retention. * Availability: assuming having access to your data is needed to perform the job you're doing, running off of individual drives is not done. 1 drive is no drive. You need some form of redundancy in your setup so you can keep working even when a drive fails. * Retention: assuming you're boned if you permanently lose data from one of your projects/clients, having one copy of your data is not done. 1 copy is no copy. You need some form of backup so you have a copy of your data if your working set has a disaster. Get a cloud backup service with zero-access, client-side encryption. Backblaze was already mentioned. I personally like Borg for my backups for its deduplicated snapshotting; it gives you the ability to easily do point-in-time recovery (see the sketch below). Not sure if Backblaze has that now. Borgbase is the biggest provider of Borg cloud space. For availability, get a NAS. Synology is great and easy to use. For a one-man operation it might seem overkill compared to direct-attached storage, but the flexibility it provides in moving from computer to computer, and having multiple devices with access to the data, is unparalleled and one of those "can't miss it once you've tried it" things. Something like a DS423+ should handle your current and future data needs if you chuck large enough drives in there. Synology integrates with Backblaze and other vendors as well if you want. This allows you to have automated backups and even cloud sync.
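     A rough Borg sketch showing the snapshot and point-in-time recovery flow; the repo URL, archive name, and paths are placeholders (Borgbase hands you an ssh:// repo URL when you create a repo):

     ```sh
     # One-time: create an encrypted repo (keys stay client-side).
     borg init --encryption=repokey-blake2 ssh://user@repo.borgbase.com/./backups

     # Each run creates a deduplicated snapshot (archive) of the data set.
     borg create --stats ssh://user@repo.borgbase.com/./backups::'{hostname}-{now}' /srv/projects

     # Point-in-time recovery: list the snapshots, then extract the one you need.
     borg list ssh://user@repo.borgbase.com/./backups
     borg extract ssh://user@repo.borgbase.com/./backups::myhost-2024-06-01T03:00:00
     ```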
22. The term is QVL, or Qualified Vendor List: a list of components that have been qualified to work with your motherboard.