Everything posted by Jarsky

  1. Do they have company-supplied machines (e.g. laptops)? Have you considered a locally installed application? It would save you a lot of the overhead of running VDIs for all those desktop instances and give a better end-user experience. Then host a Windows Server 2019 server for the database in the cloud, such as on AWS. Less complexity without having to support a remote desktop service, lower CapEx/OpEx for the company, and probably better availability. And you don't have to manage backups yourself; just back up the server, which can be handled by the cloud provider.
  2. Does your ISP allow you to use a 3rd-party router? If so, remove the first "modem" (which is also a router; let's call it the "Gateway"), use your own router, and plug your switch into that if you have a need for it. As has been said, without a half bridge you're double-NATing yourself; essentially you're firewalling the other network off. If you have to use your ISP's gateway, you can try adding the second router to the DMZ on the first router if it has that option, and see if that helps as a workaround for the lack of bridge support. Having another router won't do anything for performance. It might even out the load across devices connected to that router using QoS, however it will have no effect on the WAN (Internet connection), since that's being distributed by the Gateway. You should generally only have one router (gateway) in a network; interconnecting multiple networks requires more features than it seems you have between your 3rd-party router and the gateway.
  3. Which specific model are you using, and what is your WiFi router? Do you have a screenshot of the connection from Windows that shows the link speed, and anything from your router that shows how the client is connected to the WiFi: protocol (e.g. 802.11ac) and SNR / dB? Also anything that shows how your WiFi is configured, e.g. channel width.
  4. Pi4 SSH issues

     I'm a little confused about what you're asking; it sounds like BeamMP in particular is having issues? So does BeamMP have a client on your PC that uses SSH to connect to the server? Is SSH working normally if you use something like PuTTY / MobaXterm to connect? If so, is there a screenshot or something of the settings / config for the connection?
  5. Ubuntu has 32-bit (i386) images, and I assume it's a gigabit system, so it should be just fine for storage and file transfer. Doing more on it would be a stretch, though; single-processor systems have been lacking for a long time for anything modern.
  6. TrueNAS Scale has Kubernetes (K3s) and a virtualization platform built on QEMU/KVM as well; Proxmox has containers in the form of LXCs. An advantage of Kubernetes is that you can manage Docker containers and many other containerised applications with less technical knowledge, and Docker is the largest library of containerised applications, plus it's extremely easy to containerise your own. Proxmox is fantastic as an open-source VMware alternative where you need HA / clustering, but for a single node managing storage, containers, and VMs with a nice manager, IMO TrueNAS Scale is king. I run Proxmox virtualised in my test lab. I'd recommend spinning up a virtual instance of each (even on your desktop with something like VirtualBox) and giving each a test drive before you commit to one.
  7. As for the network, I'd recommend a Layer 2 switch that supports LACP / aggregation so you can aggregate all the NICs together. Firewall-wise, it depends on the network gear you have or want to use. There is free software out there like pfSense / OPNsense which you can install on whatever you want. Netgate develops pfSense, so you can get Netgate firewall appliances which run pfSense. There are lots of other vendors out there that roll advanced firewalls into their "routers", such as Ubiquiti with its UniFi range of gateways. You could go all-in with Ubiquiti with one of their gateways and switches, and run a UniFi Controller to configure everything to work together. It really depends on how much you want to put into this and how technical you want to go. Also consider, when setting up applications, things like load balancing and what platform you're going to run everything on, e.g. native, in VMs, Kubernetes, etc. You might want to check out TechnoTim and his YouTube channel, which does deep dives into configuring these types of things, such as K3s (Kubernetes), Rancher, and Ansible (automation). Hardware is the easy part; you could literally go down a rabbit hole for a year learning about what you can do with designing and configuring the software side.
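For the Linux side of that link aggregation, a netplan bond is one way to sketch it. This is a hypothetical example: the interface names are assumptions, and the switch ports must be configured as a matching LACP aggregation group or the bond won't come up.

```yaml
# Hypothetical netplan sketch of an LACP (802.3ad) bond on Ubuntu.
# eno1/eno2 are assumed interface names -- substitute your own.
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad      # LACP
        lacp-rate: fast
      dhcp4: true
```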
  8. So per the ThinkSystem SR645 V3 documentation, yes, it can support up to 4 x 3.5" drives, but I'm not sure which drive sleds it comes with; I would have expected this to be an option in the configurator during ordering. As mentioned before, to make the most of it you'd want more than 64GB, at least 128GB. Also, these EPYC CPUs have 12 memory channels per socket, so you want at least 12 sticks to maximize memory performance. The board appears to have 24 DIMM slots, but generally each bank is tied to a CPU, so you may only be able to use the bank of 12 slots attached to CPU1 unless you get a second CPU. As for GPUs, with this form factor you'd be limited to single-slot cards like the Quadro RTX series; I'd be a bit hesitant to put more than 2 in there max, due to both airflow and power delivery, despite it saying it can support 4. As for software, I'd also recommend getting Overseerr or Ombi to connect to your Sonarr/Radarr, so you can make and manage requests from a single central system. Overseerr is my preferred, but Ombi supports music (Lidarr) as well if that's your thing.
  9. They're quite commonly sold on eBay etc. as refurbished cards. They're actually real LSI 2008 etc. chipsets, but on a third-party board typically based off the 9211-8i design. There's nothing wrong with them tbh, but it's good to be aware of. My question for the OP is: why are we running Proxmox? TrueNAS Scale should be able to fulfill all the needs discussed so far, particularly since it's a single host and not a cluster.
  10. Well, that's a key point that was left out: using it for iSCSI targets. In that case, if it's mission-critical that you're back up straight away, then yeah, that might be a good idea. Personally I just keep spare parts (spare motherboards, CPUs, memory, PSUs, HDDs) in case of a failure. I do back up critical data to the cloud, but otherwise I just maintain a single NAS.
  11. If you're working with 3TB disks and looking at 4-5 disks giving a total of 12-15TB, then I wouldn't bother with a 'backup NAS'. It's just an unnecessary amount of hardware; I'd just use an external drive to copy a backup of your NAS to. Most likely less than half of that is actually critical data, so you could possibly get away with even an 8TB WD MyBook or similar.
  12. Does the link light flash on the iDRAC port when you plug the Ethernet cable in? It sounds like the Ethernet port might have an issue (be dead). As you discovered, you can use LOM; but if the Ethernet port is dead and you want it to work, that could mean going as far as a motherboard replacement, since it's integrated on the R720.
  13. It should work just fine. Keep in mind you only really need power if you're transcoding; if you're playing natively (e.g. Direct Stream or Direct Play) then it uses very little overhead. The old Bulldozer CPUs should be fine, and that's plenty of RAM; Plex on its own isn't RAM-intensive. As for the GPUs, keep in mind that with Plex there's a cost to using hardware acceleration, as you need Plex Pass; a free alternative would be Jellyfin. It used to be that you could only use AMD cards on Windows, but these days there is transcoding support for them on both Windows and Linux. The RX 400 series uses UVD 6, which supports HEVC / H.265, so it should be a decent card for transcoding. The GTX 550 Ti is a 2011 card and won't support H.265, so you're best to either stick with the AMD card or use software transcoding (CPU).
  14. So the user www-data doesn't exist within your container. What Docker container are you using? If it's LinuxServer.io's, then you need to use UID 1000's name; you can find that by running the command id -nu 1000. I think the default account is abc from memory, so assuming that, your command would be: sudo -u abc php occ config:app:set files max_chunk_size --value followed by the chunk size in bytes.
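Putting that together as a sketch: the abc fallback and the 20971520-byte (20 MiB) chunk size below are assumptions for illustration, not values from the original post, and occ has to be run from Nextcloud's install directory inside the container.

```shell
# Resolve UID 1000 to a username, falling back to "abc" (the usual
# LinuxServer.io default) if that UID doesn't exist on this system.
APP_USER=$(id -nu 1000 2>/dev/null || echo abc)

# Print the occ command to run inside the container. 20971520 (20 MiB)
# is only a placeholder chunk size -- use the value you actually want.
echo "sudo -u ${APP_USER} php occ config:app:set files max_chunk_size --value 20971520"
```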
  15. I second @Electronics Wizardy; go with a single RAID-Z2 VDEV. It will already be fast enough for what you want, and running a single VDEV means that ANY 2 drives can fail, as opposed to multiple smaller VDEVs (e.g. mirrors or RAID-Z1), where you can only afford 1 failure per VDEV. Better data protection, at the expense of all your disks being spun up more often. As for the OS, it's your choice which you want to go for, as UnRAID now supports ZFS natively as well. Both UnRAID and TrueNAS use Docker and have similar stores of pre-configured apps; they also both use KVM/QEMU for virtualization. Obviously UnRAID has a licence cost, but it's still a very good foundation and a good option, while TrueNAS is free.
  16. Make sure that after editing your php.ini you restarted PHP (e.g. the php-fpm service) to load the new configuration, as it's only read at initialization. Additionally, you may need to also update "maxChunkSize" in your nextcloud.cfg if you're still having issues.
  17. You might want to also check if there's a firmware update for your SSD's controller as well. You can use Intel's ISDCT tool for that: https://carll.medium.com/upgrading-the-firmware-of-intel-dc-series-ssds-in-linux-debian-458a704c087a
  18. Yeah, weird; DUR/DUW should be cumulative across the lifetime of the drive and shouldn't be resetting. It's odd because your HRC/HWC look pretty consistent with the Power On Hours. We can derive the total reads/writes from the HRC/HWC values though, if we know the sector size, which is probably 512 bytes. The formula would be: Usage (TB) = (HRC or HWC * 512) / 10^12. So in your case the values would be: 73.92TB read = (144368876738 * 512) / 10^12, and 10.88TB written = (21241519364 * 512) / 10^12.
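That conversion is easy to double-check in a shell. This assumes 512-byte logical sectors, which you should verify against what the drive actually reports (some drives use 4096).

```shell
# Convert SMART host-read/host-write sector counts to TB, assuming
# 512-byte logical sectors (check your drive's reported sector size).
hrc=144368876738   # Host Read Commands (sectors) from the post
hwc=21241519364    # Host Write Commands (sectors)
awk -v c="$hrc" 'BEGIN { printf "Reads:  %.2f TB\n", c * 512 / 1e12 }'
awk -v c="$hwc" 'BEGIN { printf "Writes: %.2f TB\n", c * 512 / 1e12 }'
```

Which gives roughly 73.92 TB read and 10.88 TB written for the counters quoted above.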
  19. There's no such thing as Windows 10 Server; you're either running Windows 10 or Windows Server 2016+. If you're using integrated video (which I assume you are, hence the VGA), then go to the support site for your motherboard / server and look up the video driver there for your OS. Otherwise, as BillBill pointed out, you can go direct to the OEM and get the OEM driver as well.
  20. I do get this; it's a very easy solution to use as long as things are going well, but the thumb drive reliability thing is an issue, and trying to recover from problems can be somewhat frustrating, especially as the "community" support can be difficult for more advanced issues. I'll add though that one of the recent updates (in the last year?) enables automatic boot drive backup to the cloud, so when anything changes on the thumb drive it uploads a backup which you can restore from; you don't need manual backups. The issue is stopping it from uploading to the cloud every 5 minutes with some of the scripts/apps you might want to run on the drive!

It's really hard to go past TrueNAS Scale tbh; the interface is great, it's extremely powerful, and there's plenty of flexibility running industry standards compared to TrueNAS Core (FreeBSD).

Not really; this is a massive hurdle. UnRAID vs OpenFiler vs ZFS vs BTRFS vs Storage Spaces (ReFS / NTFS) vs hardware RAID, etc. are all very different in the way that parity is built and how drives can be added. Some do support gradually adding disks, while others don't, so generally it's recommended to build a new array with new disks and migrate data across. It's just a very $$$ option to take with migrations if you need to buy the disks.

There are a few; this is called "RAID over filesystem", where each disk has its own filesystem, with RAID functionality (e.g. parity or data pooling) over the top of that to provide redundancy and a single pool of storage. Examples are UnRAID, FlexRAID (discontinued), SnapRAID, StableBit, Drive Bender, etc. Keep in mind you generally should NOT be taking drives out of pools and using them like this; that's not how they're designed to be run.

I have my drives spin down as well, but something you can do is install the Dynamix Directory Caching plugin. This will keep a listing (I think the default is 2 directories deep) of the folder structure on the disk from the root, so disks can spin up while you're opening the share and browsing it. Note this only caches the folder structure, not the files in the folders. It should *not* take 90 seconds, however; typically it should be less than 10 seconds to spin up the drive and have the data accessible.

The health impact of drives spinning 24/7 is debatable: motor wear from always spinning vs head wear from constantly parking. Power usage would significantly increase, though. Spinning disks can be upwards of 5W/disk, so with 6 disks you could be looking at an additional 30W running 24/7; that's approx 21.6kWh/month. To keep it simple, if that was 6 parked disks vs 6 spinning disks, that's NZ$4-5/month. You'd need to work out what your actual difference would be, but if they were parked 50% of the time, that's a $25-30/year difference for me in power to park them. If you have a cache drive, that will spin up together with the data drive, particularly if there's any write action. The spin-up of the drives will also depend on what type of array structure you have and how files are distributed: I use high-water allocation, but I distribute across disks rather than trying to keep folders together, so most of my drives spin up when I open folders (like shows) with lots of files.

Even at an all-time low, that's not very good value for $$$. You'll also need a lot of PCIe lanes, so you might need to upgrade the CPU to something beyond the standard consumer platform, like Threadripper or EPYC; you also need at least 3 x16 slots (assuming you're talking about the Hyper M.2). Those cards also rely on PCIe bifurcation support from the motherboard, and it doesn't sound like a good idea. Spinning disks are still the way to go for simplicity and cost-effectiveness when it comes to storage.
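The power arithmetic above, sketched with assumed figures: 5W per disk is the post's own estimate, and NZ$0.21/kWh is a guessed tariff chosen to be consistent with the NZ$4-5/month figure; plug in your own wattage and electricity rate.

```shell
# Rough cost of keeping 6 disks spinning 24/7.
# 5 W/disk and NZ$0.21/kWh are assumed figures -- substitute your own.
awk 'BEGIN {
  watts     = 6 * 5                # 6 disks at 5 W each = 30 W
  kwh_month = watts * 720 / 1000   # ~720 hours in a month = 21.6 kWh
  printf "%.1f kWh/month, NZ$%.2f/month\n", kwh_month, kwh_month * 0.21
}'
```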
  21. You need to set up a VPN client. Check out this project for a Docker container that can be installed on UnRAID: https://github.com/qdm12/gluetun You can connect other LAN devices and containers to the client as well, rather than directly to PIA, to have a single 'gateway'. They give instructions on the page for how to connect other Docker containers: https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-container-to-gluetun.md
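A minimal gluetun compose sketch for PIA might look like this. The credentials, region, and the second service are placeholders, and you should check the gluetun wiki for the current variable names for your provider before relying on it.

```yaml
# Hypothetical sketch: gluetun as a VPN gateway, with another container
# routed through it. Credentials/region/image names are placeholders.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=your_pia_username
      - OPENVPN_PASSWORD=your_pia_password
      - SERVER_REGIONS=Netherlands
  other-app:
    image: your/container
    network_mode: "service:gluetun"   # route this container through the VPN
```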
  22. It might be more likely to be coil whine from your PSU, which will be dependent on the kind of load being placed on it. I have an old OCZ PSU which was terrible for this, depending what load the GPU or other cards were putting on the rails. Hot gluing the coils helped to suppress it, but ideally it's better to swap the PSU if this is the case and it's getting annoying. There aren't really any components on the HBA itself that should create the sound (except perhaps, on the RAID variant, the supercapacitor that replaced the traditional cache battery); it's all transistors, resistors, etc. That could explain why it doesn't make the sound in your main PC. I'd try a test with a PSU swap if you can.
  23. I'd recommend having a look at Longhorn: https://www.rancher.com/products/longhorn Check out TechnoTim's stuff.
  24. You can't RAID 10 two drives; do you mean a mirror (RAID 1)? A mirror would be fine; I just wouldn't use it for anything more advanced like a stripe or RAID 5. SSDs are very reliable; to have 2 losses in 6 months is pretty insane. I've never had one corrupt. I had an SSD die about 9 years ago, with an old controller known for having trouble on the old OCZ drives, and I've never had an NVMe drive die. It's really odd to have 2 failures.
  25. In Plex you create "Libraries", which will only scan and build the library based on the files in the directory path that you specify for the library, and will only add media content (video and audio). Plex requires read permissions on the shares to scrape the files and serve them through the player, but it doesn't provide any file share access.