
Jarsky

Member
  • Posts: 3,855
  • Joined
  • Last visited

Everything posted by Jarsky

  1. For indexing, also try an indexer proxy/manager like Jackett or Prowlarr. Also, for qBittorrent, I personally find that older libtorrent builds perform better; I've had issues with stalling on libtorrent 2.0. If you're using LinuxServer.io you could try switching the repository image to something like 'linuxserver/qbittorrent:4.5.4-libtorrentv1'; this is the latest qBittorrent but with the libtorrent 1.2 backend.
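     If you're running it outside the UnRAID template, e.g. with plain docker, a minimal sketch would be something like the below (paths, PUID/PGID and ports are placeholders for your own setup):
        docker run -d --name qbittorrent \
          -e PUID=1000 -e PGID=1000 -e WEBUI_PORT=8080 \
          -p 8080:8080 -p 6881:6881 -p 6881:6881/udp \
          -v /path/to/config:/config \
          -v /path/to/downloads:/downloads \
          linuxserver/qbittorrent:4.5.4-libtorrentv1
     On UnRAID itself it's just the Repository field in the container template that needs changing to that tag.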
  2. 12V is the correct spec, and current is drawn (pulled), not pushed. You should always use a power supply that can provide at least as much current as is required; the device drawing the current determines how much power it needs.
  3. SATADOM used to be popular amongst many manufacturers for a while. But many servers either run OSes that don't require persistent durable storage (think VMware), so they boot from flash such as USB, or they PXE boot. Keep in mind that SSDs can also fail, so you don't want it embedded: as part of a service contract you wouldn't want to be replacing an entire motherboard because the flash failed. Money is most definitely a factor in speccing out server infrastructure. You're thinking on the basis of buying a single server, I guess. We buy servers by the dozen. We buy blade systems which boot from SAN. Some companies buy 1U servers by the hundreds that are just compute and memory, for uses such as web hosting.
  4. You already got it sorted so you know the answer, but for completeness: a parity sync doesn't affect file access, only performance. You can check what files are on a disk by clicking the explorer icon next to the disk you want to check. You can also open a console and list the files under /mnt/diskX, as in the sketch below.
     The cache for sure should show empty. Do you have the mover enabled? The files will sit on the cache until they're moved. You can enable it in Settings, or from the console you can run the command 'mover' to invoke the UnRAID mover. Keep in mind that if the files are locked, they won't move. You can check what's on the cache in the same way as the array devices.
     When you add them, UnRAID will clear the new drives (that's what the preclear does), so parity stays valid; it doesn't need to recalculate parity across the other storage drives, and there's nothing manual you need to do.
     This one I'm not sure about, but I believe if you're signed in to My Unraid it automatically applies the license from your Unraid.net account. You can check the registration by clicking on the boot device, then going to the Registration Key Manager. But yeah, it might very well need a restart to apply it; I can't recall, as it's been a long time since I've done one.
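     If you prefer the console, a rough sketch (disk1 is just an example, substitute your own disk number):
        ls -lah /mnt/disk1      # list what's on an individual array disk
        ls -lah /mnt/cache      # see what's still sitting on the cache pool
        mover                   # manually invoke the UnRAID mover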
  5. As already said, HTTP is insecure, so your credentials will be available for anyone to 'sniff' on the network you're connected to. Talking about a MITM attack here... if you're connected to an open WiFi or a masquerading WiFi and someone is sniffing your traffic, they can capture your credentials and/or cookies and gain access to the system. Cookie capture is exactly what happened to LTT with their recent YouTube "hack". There are free DDNS (Dynamic DNS) providers out there, or providers with free tiers, such as https://duckdns.org ; you can then use Certbot to generate free Let's Encrypt certificates. Or, as suggested, OpenVPN would be another solution, as it's encrypted end to end as well. Doing DDNS is probably easier though, as sketched below. Or you can of course just YOLO it, if you aren't accessing it from potentially compromised/dodgy WiFi networks or don't have any sensitive data on there.
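     As a rough sketch of the DDNS + certificate route (the subdomain and token are placeholders, and certbot's standalone mode needs port 80 reachable from the internet for the HTTP challenge):
        # point your DuckDNS subdomain at your current public IP
        curl "https://www.duckdns.org/update?domains=mynas&token=YOUR_TOKEN&ip="
        # then issue a free Let's Encrypt certificate for it
        sudo certbot certonly --standalone -d mynas.duckdns.org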
  6. Turns out the tsdns port is 41144, which you can set in your screenshot above:
     _tsdns._tcp.utlr.xyz. 86400 IN SRV 0 5 41144 ts.utlr.xyz.
     But you've got it working? I just ran a quick test and it connected just fine.
  7. This doesn't make sense. The DDNS script has absolutely nothing to do with the speed of your DNS resolution. All the script is doing is checking your current IP address, comparing it to the existing DNS record in Cloudflare, and using the API to submit an update request. After the A record has been changed, the DDNS script has absolutely nothing to do with your DNS resolution, let alone the time to resolve.
     Have you set up your SRV records properly? It looks like there is an issue looking at those logs in the screenshot. You should have entries something like this:
     SRV _ts3._udp.ts.utlr.xyz 0 5 9987 ts.utlr.xyz 3600
     SRV _tsdns._tcp.ts.utlr.xyz 0 5 9988 ts.utlr.xyz 3600
     Broken down, that is:
     Record type: SRV
     Service: _ts3 / _tsdns
     Protocol: _udp / _tcp
     Priority: 0
     Weight: 5
     Port: 9987 / custom port for tsdns
     Target domain: ts.utlr.xyz
     TTL: default is 30 minutes... 3600s is 1hr
     So create your SRVs in Cloudflare with those values. Once set up they don't need to be touched; your Cloudflare DDNS script will keep the A record updated.
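     Once they're created you can sanity check them from any machine, e.g.:
        dig _ts3._udp.ts.utlr.xyz SRV +short
        nslookup -type=SRV _tsdns._tcp.ts.utlr.xyz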
  8. Of course heatsinks are reusable... it's just a piece of aluminum. You might want to get a replacement thermal pad though, normally 1-1.5mm.
  9. Try a different SATA cable on a different port, preferably one that's a different color, as those are typically connected through a different controller.
  10. Yes, use /dev/shm, this is what I do. If you're using the Plex docker on UnRAID, map /dev/shm to /temp or /transcode or whatever you want to call it, as in the sketch below. Then in Plex set the transcoding folder to that /temp or /transcode path.
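     As a rough sketch with plain docker (paths and IDs are placeholders; on UnRAID it's just an extra path mapping in the container template):
        docker run -d --name plex \
          --network=host \
          -e PUID=1000 -e PGID=1000 \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/media:/media \
          -v /dev/shm:/transcode \
          lscr.io/linuxserver/plex
        # then in Plex: Settings > Transcoder > "Transcoder temporary directory" = /transcode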
  11. ECC adds unnecessary complication. UnRAID doesn't use memory caching (unless you're going to use ZFS on UnRAID, which is now a feature in the latest 6.12 release... however ZFS has its own restrictions with a budget build). Great case, I'm also considering upgrading my NAS from the Define 6 to the 7XL. I hope you already ordered all your extra trays as well; IIRC mine came with 4, so I bought a bunch of the dual-tray kits off Amazon (the 6 and 7 series use the same trays).
  12. Personally I use Tdarr to encode everything to x265, using an old GeForce GTX 1070 I have. The card sits around 40W under load, and the efficiency means it gets through the transcoding in a fraction of the time of software (CPU) encoding. This is Tdarr here: https://home.tdarr.io The cool thing about it is that it can run a distributed model and hand out jobs to nodes. You can have the scheduler and worker on a single node as I do, but if I do a massive import, as I've done sometimes, I just join my PC with its RTX 3080 Ti and another machine with its RTX 2080 Ti to the pool and let them accept jobs, and I can be done in a fraction of the time. So far it's saved me 8.5TB of space on my media NAS.
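     If you want to try the distributed side of it, a rough sketch of joining an extra machine as a node is below. The serverIP/serverPort/nodeName variables are what the official haveagitgat/tdarr_node image used at the time; double check the Tdarr docs for your version, and the IP, paths and node name here are just placeholders.
        docker run -d --name tdarr-node \
          --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all \
          -e serverIP=192.168.1.10 -e serverPort=8266 \
          -e nodeName=GamingPC \
          -v /mnt/user/media:/media \
          -v /mnt/cache/tdarr_cache:/temp \
          ghcr.io/haveagitgat/tdarr_node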
  13. I just read this thread, and I still don't understand the need for DDR5... unless you're talking like DDR5-6800? DDR5 is available in higher frequency kits than DDR4, but it also has roughly double the CAS latency. So the performance gain up until about 5600MT/s is negligible, and with slower kits it actually suffers. For example a DDR4-4800 kit could have a CAS latency of 19, while a DDR5-5600 kit could have a CAS latency of 36... close to double, so even with an 800MT/s speed bump, the performance difference is essentially margin of error. There are some low latency DDR5 kits, but they're extremely expensive, so likely no dedicated server providers have machines with this low latency memory. There are a lot of datacenters that also offer colo (colocation) solutions, where you can rack your own server. The big downside is that most dedicated server providers use enterprise gear which runs at standard clocks; it narrows down the options considerably when you're looking at consumer grade gear. But keep in mind, as mentioned above by @LIGISTX, that enterprise gear has 4-8 (or higher) channel memory controllers with Xeons & Epycs, etc... so it reaches a high amount of memory throughput despite being slower per module. Also, the 5950X is not a DDR5 CPU, it's a DDR4 CPU. And these consumer CPUs are only dual channel.
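     To put rough numbers on that, using the CAS figures above: first-word latency in nanoseconds is roughly 2000 x CL / transfer rate, so DDR4-4800 CL19 is about 2000 x 19 / 4800 ≈ 7.9ns, while DDR5-5600 CL36 is about 2000 x 36 / 5600 ≈ 12.9ns. The DDR5 kit's extra bandwidth is largely cancelled out by the higher absolute latency, which is why the real-world difference ends up in margin-of-error territory.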
  14. It'll play perfectly fine for direct streams; the 3570K will probably just struggle when it comes to transcoding more than a couple of 1080p streams simultaneously. If you do want to run a server that can handle more transcoding load, you either go down the better CPU (software) or GPU (hardware) transcoding path; you don't need a high spec for both. GPU is more efficient, but the likes of Plex & Emby require a paid license to enable hardware transcoding; Jellyfin provides the capability for free. CPU is higher quality but less efficient, meaning fewer concurrent streams and increased power draw. As you want to go down the 4K x265 route, if you want to be able to transcode this you're ultimately best to use a GPU, at the very least a GTX 960. Though that is getting outdated, so my suggestion would be a 10 or 20 series Nvidia card. You could also look at a system upgrade using an Intel CPU that is 8th gen or newer with Iris/Xe graphics; QSV (Quick Sync Video) is fantastic for media servers... even an i3-10100 or an i5-10400 would handle a number of streams using hardware acceleration. I'm running a 4K transcoding Plex server. For comparison, I have a GTX 1070 assigned to it for hardware transcoding where needed (see the sketch below), and just 4 cores of my 32C CPU assigned. The reason it has 4 isn't for the video transcoding, but just to have more threads for doing intro detection, metadata refreshes, etc... and to handle audio transcoding, since I can sometimes have up to 5+ streams running at a time from various users.
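     For reference, passing an Nvidia card through to the Plex container on UnRAID (with the Nvidia driver plugin installed) is roughly just these extra container settings, as a sketch:
        --runtime=nvidia
        -e NVIDIA_VISIBLE_DEVICES=all
        -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
        # then enable "Use hardware acceleration when available" under Settings > Transcoder in Plex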
  15. I think the 'unspecified' is because it's being synced by the VMware Time Service. Make sure you have the 'periodically sync' option enabled in VMware if you're going to use it this way. As for the other machines, you should have the VMware Time Service disabled, and in Windows you should have them pointing to your DC (see the commands below). There's more about using W32Time here. For larger implementations you can of course push this config via GPO as well. https://docs.vmware.com/en/VMware-Horizon-DaaS/services/horizondaas.install900/GUID-AEC90E5F-C5B6-447F-B03F-C1060C405E1F.html
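     On the Windows guests, pointing W32Time at the domain hierarchy is just a couple of commands from an elevated prompt (the usual approach, as a sketch):
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync
        w32tm /query /status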
  16. I'm assuming he means using a file share (e.g. Samba) and playing the video files on his device (e.g. computer) using VLC player? If so, the answer is that it wouldn't be transcoded; it would just be direct played, with the client doing the decoding. But yes, the file server would be just fine; serving a 10-20Mbps data stream is very little load for that spec.
  17. It's worth pointing out that ESXi requires VMTools on the guest for guest time synchronization. You should also only have one of them enabled: either VMTools time sync or native time sync (W32Time / ntpd). It depends on your setup as to which way you go, but VMware's stance is that only one should be enabled. We typically don't sync guests via VMTools, and instead let the guests sync with the local NTP server.
  18. His MicroServer is already maxed out with n disks. If he decides to upgrade later to larger disks, then that's as easy as it is on standard UnRAID: just replace the disks. Yeah, WireGuard is natively included with UnRAID (since 6.10 IIRC).
  19. Personal choice really, especially since the UnRAID 6.12 RC is currently available as the latest build. It adds support for ZFS in UnRAID, so you can now build ZFS pools and manage them through the UI.
  20. Your Pi-hole should be installed with host networking, so it should share the IP address of your TrueNAS host. First test that your Pi-hole is actually resolving and returning results; on your PC you can just run an nslookup against it, e.g. nslookup google.com <true.nas.host.ip> (see the sketch below). If that returns a successful result, then you can switch over to the Pi-hole.
      On any machines that have manually configured settings, such as perhaps your server, change the DNS to the IP of your TrueNAS host. For the rest that are automatically configured (DHCP) you don't need to change any settings on the device: on your router (which I assume currently hosts your DHCP), find your DHCP settings and change the DNS server there to the IP of your Pi-hole. Restart (or release/renew the connection on) any of your automatically configured devices, such as mobile devices, to force them to renegotiate and pick up the new DNS address.
      I really wouldn't run Pi-hole at all as long as the server is intermittently freezing. Your internet is effectively going to be down for all intents and purposes until you bring it back up, since your DNS will be down. Personally I run 2 Raspberry Pis with Pi-hole in a primary/secondary DNS server setup; they're both connected via Ethernet and are synced using gravity-sync.
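      A quick way to sanity check it before switching DHCP over (the IP is a placeholder for your TrueNAS/Pi-hole host):
         nslookup google.com 192.168.1.50        # should resolve normally via the Pi-hole
         nslookup doubleclick.net 192.168.1.50   # a domain on your blocklists should come back blocked (typically 0.0.0.0)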
  21. Excellent, don't forget to mark it as the solution.
  22. A local NTP server is pointless for home use; just use one of the public ntp.org pools for your region. There are already plenty of organisations contributing to the ntp.org project, such as universities, datacentres, and infrastructure providers like Cloudflare. We have our own NTP servers because we are an ICT and managed IT company with tens of thousands of pieces of server, network, voice and power equipment. I've had to replace one in over 10 years, because it died. But for sure I'm not bothering to run one at home.
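      For example, pointing a Linux box at the public pool is just a few lines in /etc/chrony/chrony.conf (assuming chrony; ntpd's pool lines look much the same, and you can swap in your country zone if you prefer):
         pool 0.pool.ntp.org iburst
         pool 1.pool.ntp.org iburst
         pool 2.pool.ntp.org iburst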
  23. When you say different WiFi networks, are they the same hardware but different SSIDs with different network subnets (e.g. 192.168.1.xx and 192.168.2.xx)? As already said, if they're different networks in a LAN, you need a router between the networks so traffic can be routed between them.
  24. The common way of deploying exes via GPO is to create a logon .bat script, and use the batch file to run your checks and launch the exe. You can use switches to do a silent install (e.g. /s or -s), but you probably want to display something to the user as well, so they don't panic when the screen starts flashing black.
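      A minimal sketch of that kind of logon script; the share path, exe name, install check and silent switch are all placeholders, since the right switch depends on the installer:
         @echo off
         rem skip the install if it's already there (example check)
         if exist "C:\Program Files\MyApp\MyApp.exe" goto :eof
         echo Installing MyApp, please don't close this window...
         \\fileserver\deploy\MyAppSetup.exe /s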