
Jarsky

Member
  • Posts

    3,855

Posts posted by Jarsky

  1. A few things:

     

    • Make sure Unraid is up to date (i.e. 6.12.10); there are known SMB issues with 6.12.8 and 6.12.9.
    • You mention an NVMe drive; make sure your Plex Docker appdata is on it, as this contains thumbnails, posters, intro detection data, etc.
    • Since you have 16GB of RAM and this is *just* a Plex server, in your Plex Docker settings in Unraid map /dev/shm to /transcode, and in Plex set /transcode as your transcoding path (see the example below this list).
    • If you're a Plex Pass subscriber, enable hardware transcoding and offload transcodes to the iGPU; the i5-8400's UHD 630 graphics make it an excellent Plex server CPU choice.
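
    For reference, that mapping is just a Path entry in the Unraid container template (host path /dev/shm, container path /transcode); the equivalent docker run flag would look roughly like this (the image name and other options are illustrative only):

    docker run -d --name=plex -v /dev/shm:/transcode plexinc/pms-docker

    Then in Plex, point Settings > Transcoder > "Transcoder temporary directory" at /transcode.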

    If you're still getting stuttering, gather the diagnostic logs from Unraid (Tools > Diagnostics) and Plex (Manage > Troubleshooting).

  2. On 4/23/2024 at 12:30 AM, Scalped Skull said:

    Installs but upon login the entire thing freezes/crashes (you never see the desktop)

     

    - Using VM Workstation Pro 17.5.1 (latest at time of writing)

    - Did an upgrade from 23.10 to 24.04 LTS Beta

    - Also tried a clean install of 24.04 LTS Beta

    - Crashing occurs in both GNOME and Unity.

     

    It appears to delay the crashing when hardware acceleration (1080ti) is turned off.  Crashing still happens.

     

    When it gets to the login screen (GDM), the screen blacks out and typing freezes. If you act quickly enough before this happens it logs in, but the screen stays black permanently.

     

    Other times, when doing a clean install, it crashes on login with a white screen saying "Something went wrong".

     

    Seems VMWare is just horribly incompatible with 24.04 at present.

     

    You shouldn't be doing beta upgrades unless it's a POC server, in which case this should be reported to Ubuntu as a bug to investigate.

    There's a reason that dist-upgrade is locked to official releases (currently 23.10).

    24.04 in particular ships a very new Linux kernel, so there are most likely still bugs to iron out.

     

    Furthermore, keep in mind that many packages are still being rebuilt whenever there's a new release, so by upgrading you may lose prerequisites that some of your applications/packages rely on to function.

     

    Before doing upgrades, also make sure you take a snapshot of your existing VMs in case you need to revert.
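
    If you prefer the command line over the GUI (VM > Snapshot > Take Snapshot), Workstation ships with vmrun; something like this should do it (the .vmx path and snapshot name are placeholders):

    vmrun -T ws snapshot "C:\VMs\Ubuntu2404\Ubuntu2404.vmx" "pre-24.04-upgrade"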

  3. As said, you'd find it very hard to get a replacement backplane. Norco has been dead for a while now, and while the case does use a conventional form factor (they're largely a clone of Supermicro chassis) that a third-party backplane might fit, most third-party backplanes are 12, 16 or 24 port, not 20 port.

     

    They're also quite expensive, so probably outside the budget for this build. As the others have suggested, you're probably best plugging the drives directly into the HBAs with SATA breakout cables.

     

    Something like this (12Gbps SAS): https://www.ebay.com/itm/364563172676

     

    A more budget-friendly option would be to get a 16 port HBA (6Gbps SAS) and connect the rest of the drives to the onboard SATA ports: https://www.ebay.com/itm/38687600505

    Or get an 8 port HBA (6Gbps SAS) + SAS expander: https://www.ebay.com/itm/162958581156 + https://www.ebay.com/itm/353986556364

     

     

     

  4. On 4/20/2024 at 2:07 PM, plentycheesed said:

    It looks like it was the transcoding storage, it was on the main array. Mapped it to my SSD cache and now it seems to be working great so far. My array is just using cheapo SMR drives (they were free to me, inherited them). Should have checked on that sooner. Thank you for the advice!

     

    Redirect your transcodes to /dev/shm (shared memory). That way it will utilise your RAM as a RAM drive; it's non-persistent, so you never have to worry about the cache filling up, and you get fewer writes on your SSD. If you're using Docker then you can just map /dev/shm to /transcode (or whatever your transcode path is).
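
    One thing to keep an eye on: /dev/shm is a tmpfs that by default is sized to half of your RAM, so a big 4K transcode can fill it on a low-memory box. You can check how much space it has from the Unraid shell with:

    df -h /dev/shm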

  5. On 3/27/2024 at 6:48 AM, ChrisLoudon said:

    On second thoughts, have you got any good suggestions for a suitable alternative?

     

    I decided to move some large files around and I've just hit the <20MB/s limit you spoke of and its gonna take ages!

     

    its an old Z77 mobo so no M2 or Nvme support.

     

    Just a good fast 4TB 2.5" SSD that will maintain fast reads and writes regardless of the file size (70ish GB per file)

     

    Sorry mate, I'm away from home at the moment so haven't been logging on here.

    As nice as an NVMe drive is, if it's primarily a Plex server then unless you have something like a 10Gbit fibre connection, a SATA SSD will do just fine.

    I'd get something like a 1TB Samsung 870 EVO SSD (or a similar series from Kingston, Intel, etc.) and use that.

    The mover runs nightly, so you shouldn't need a 4TB SSD unless you're moving huge volumes of data.

    If you're wanting to host Docker, VMs, etc., I'd keep that on a separate pool (Unraid now supports multiple pools) and keep the cache purely for cache.

  6. The issue is that while you have multiple NICs, it sounds like you have a single network (e.g. 192.168.1.0/24).

     

    My question is:

    If both machines are on the same network, and both are connected to the internet (via the switch), then why have a dedicated internet NIC and a dedicated Unraid connection? Is it because you don't have a switch with 2.5GbE ports?

     

    I guess you could try checking that the Internet NIC's default route has the lowest metric (in Windows, a lower metric means higher priority).

    Then create a persistent static route for the IP address of your Unraid server. When configuring the static route, make sure the interface/gateway is the IP address of your 2.5GbE connection: https://webhostinggeeks.com/howto/how-to-add-persistent-static-routes-in-windows/ Because the static route is more specific than the default route, it will be preferred for traffic to the Unraid server.
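
    As a rough sketch (both addresses are placeholders; substitute your Unraid server's IP and the IP of your 2.5GbE NIC), from an elevated command prompt:

    route -p add 192.168.1.50 mask 255.255.255.255 192.168.1.20 metric 5

    Here 192.168.1.50 is the Unraid server, 192.168.1.20 is the local 2.5GbE interface acting as the next hop, and -p makes the route persistent across reboots.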

     

     

  7. Are your shares using different permissions?

    Try using the hostname for one, and the IP address for the other.

     

    e.g

    \\hostname\share1

    \\1.2.3.4\share2

     

    Windows won't let you connect to SMB shares on the same hostname/address with two different sets of credentials at the same time.

     

    Something I also do is create an alternative static DNS entry as an alias for additional SMB shares that have different permissions.

    Using an alternative hostname stops Windows from reusing the existing connection, and lets you specify different credentials.

     

    I have a generic share which is Read-Only to anyone (guests)

    I also have a backup share which is Read-Write by authorized credentials only. 

    e.g

    \\MYNAS.local.lan\share

    \\BACKUP.local.lan\backup
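
    For example, from the Windows client you could map them with different credentials like this ("backupuser" is just a placeholder account; the * makes it prompt for the password):

    net use \\MYNAS.local.lan\share
    net use \\BACKUP.local.lan\backup /user:backupuser *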

     

     

     

     

     

     

  8. 22 hours ago, vogam7 said:

    Jarsky, thank you for your reply. I like the idea of RAID 10, but wouldn't your recommendation of inserting WD in Slots 1,2,3 and Seagate in Slots 4,5,6 be contrary to the goal I'm trying to achieve? Let's say WD happens to produce a bad batch of HDDs that fail within 2 years. With your example, I may end up with an unrecoverable RAID 10 should the two disks in the first two bays fail. Would WD1-SG1*, WD2-SG2*, WD3-SG3*  be a bad idea to minimize the chance of more than one failure per mirror?

    (*or the same drive model from a different batch) 

    Sorry, I mixed that up; that's the correct order I meant. So yeah, stagger them.

  9. 2 hours ago, vogam7 said:

    Hey LTT NAS enthusiasts. I'm considering purchasing a Synology NAS to set up RAID 10. Is there a way to specify in their software which drives RAID 1 mirrors and which ones RAID 0 stripes? Long story short, I have 6 drives, 3 from WD and 3 from Seagate, same capacity & specs . I want to ensure that mirror drives in a set aren't from the same vendor. If such a thing isn't possible via software, would alternating them while inserting into the bay do the trick? I'm a little worried that Synology might randomly pick drives for mirroring, and I'd end up with 2 drives from the same vendor purchased from the same batch, increasing the likelihood of 2 drive failure and RAID 10 non-recoverability. Apologies if someone has already answered this. I've searched for this info in various forums and the Synology manual but couldn't find an answer.. Thank you

     

     

    When you run the Storage Creation Wizard, you choose which disks you want to add to the pool.

    You can't, however, choose the positions in which they're used.

     

    With Synology RAID10, though, DSM uses the 'near' layout, so it mirrors adjacent drives.

    Effectively if you have a 6 bay NAS, you want to put WD in Slots 1,2,3 and Seagate in Slots 4,5,6.

     

    Keep in mind RAID10 mirrors in pairs, so it would be WD1-WD2, WD3-SG1, SG2-SG3 (you can verify the actual pairing over SSH; see below).

    It's RAID01 that would be WD1-WD2-WD3 striped and mirrored against SG1-SG2-SG3.
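
    If you want to double-check how DSM has actually paired the drives, you can enable SSH and look at the underlying Linux md array (md2 is usually the first data volume on DSM, but the number can differ on your unit):

    cat /proc/mdstat
    sudo mdadm --detail /dev/md2

    For a RAID10 array the output shows the layout (near=2) and the member disks in order, so you can see which physical drives ended up mirrored together.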

     

    RAID10 = You can lose up to 3 disks, but only 1 per mirror

    RAID01 = You can lose between 1-3 disks, depending on which disks fail (this config isn't supported in DSM anyway)

     

    RAID6 and SHR-2 would give you 2-disk protection against ANY disks in the pool failing.

    It does so at the expense of some performance for parity calculations, but it's negligible in most cases.

     

     

  10. If you want ultimate quality, then you want to remux your Blu-rays. Forget about Handbrake.

    • Rip the Blu-ray disc (e.g. DVDFab)
    • Use BDInfo to inspect your playlist files and stream codecs (typically MPEG-4 AVC video, DTS-HD MA or Dolby audio, and subtitles)
    • Open the Blu-ray in tsMuxeR and use the info you found in BDInfo to select the streams you need to demux
    • Use MKVToolNix (mkvmerge) to mux your .264 (video), .dts (audio) and subs into an MKV container

    Here's a general how-to example: https://www.dvd-guides.com/guides/blu-ray-rip/256-remux-blu-ray-to-mkv
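
    If you'd rather do the final mux from the command line, mkvmerge (part of MKVToolNix) will take the demuxed elementary streams directly; the filenames here are placeholders for whatever tsMuxeR output:

    mkvmerge -o Movie.Remux.mkv video.264 audio.dts subtitles.sup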

     

    It's fast to do as you aren't encoding, and it's lossless from the source.

    The downside is that the file size is as large as your Blu-ray.

     

    As far as Plex goes, Direct Play and Direct Stream give you the best quality (no video transcoding), provided your end device and internet upload can support it.

    If you do need to transcode, then CPU (software) encoding will give you the highest quality (GPU is more efficient but not quite as good).

  11. 3 minutes ago, m9x3mos said:

    As a side note, only the 40 series support av1 encoding but 20 and up do av1 decoding. 

    Correct, but for the sake of this thread we're only talking about decoding. 

    (I know technically it's NVDEC, but I'm just using NVENC as a catch-all for the Nvidia NVENC/NVDEC engine, as that's the term people are familiar with.)

  12.  Stream #0:0: Video: hevc (Main 10), yuv420p10le(tv, bt2020nc/bt2020/smpte2084), 3840x2160 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn (default)

     

    So that video is H.265 (HEVC) 10-bit 4:2:0, which the GTX 1050 Ti supports for decode.

    But also keep in mind your audio is DTS-HD MA, which will be transcoded by the CPU.

     

    Have you tried a less demanding rip? Maybe something that's H.265 8-bit with AAC/AC3/Dolby TrueHD/etc. audio?

    Also, I see this file is DV (Dolby Vision); have you tried a non-DV rip as well?
     

    Your GPU won't get pinned with a single stream. Also keep in mind that what's typically reported as "GPU usage" is the 3D load, not the video decode/encode load. Since it's Unraid, what you could do while it's running is open up the shell (it's at the top right of the Unraid UI) and run:

    watch nvidia-smi

    You should see your transcode there, along with power draw, usage, etc.
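
    If you want to watch the decode/encode engines specifically rather than the overall GPU load, you can also try this (assuming the Unraid Nvidia driver exposes it, which it normally does):

    nvidia-smi dmon -s u

    The enc/dec columns show the video engine utilisation per second, which is what actually matters for transcoding.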

     

    I did notice that your path for transcoding is 

    /config/data/transcodes/xxxxxxxxxxxxxxxxxxxxxxxxxxxx.ts

    Where is this mapped to in Docker? Is it to /dev/shm (shared memory) or an SSD (e.g. your cache)?

  13. Worth noting that SFTP is file transfer over SSH; I think what you guys are talking about is FTPS (FTP over TLS).

    I assume this is being run on a Windows machine, so SFTP really isn't an option unless you're going to install OpenSSH for Windows.
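
    If you did want to go the SFTP route, OpenSSH Server is available as an optional feature on Windows 10/11 and Server 2019 onwards; from an elevated PowerShell prompt, something like this should install and start it:

    Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
    Start-Service sshd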

     

    Both SFTP and FTPS encrypt the traffic as well as the credentials, while plain FTP sends the credentials in plain text.

    That's mainly an issue with MitM attacks; as long as your friend isn't connecting over dodgy Wi-Fi, it doesn't really provide much extra security.

     

    The big thing, though, is to ensure your security/permissions are set up properly and to use a decent-strength password for the user.

    (Client certificate authentication would be ideal, but at minimum a decent-strength passphrase is recommended.)

     

     

  14. I would take the approach of "if it ain't broke, don't fix it", unless the company is adamant that they want a single volume.

    If it's a mixture of "working data" and "archive data", then I would do a data shuffle to redesign the volumes into working projects and archived projects, if that makes it easier for them to find what they're looking for.

  15. On 3/10/2024 at 9:26 PM, middle_pickup said:

    What I am more concerned with, yet have never really seen discussed is the quality of the encoders.

    There are a number of threads in this forum where I have explained this, as well as pointed out why you should use more modern graphics with better codec support.

    But in summary:

    NVENC produces slightly better quality transcodes than QuickSync.

    QuickSync is obviously cheaper as you don't need a dedicated GPU.

    You want a 6th gen Intel or newer (preferably 10th gen or newer due to decode quality improvements) if using QSV, or a GTX 10 series or newer if using NVENC.

     

    The newer QSV and NVENC in these generations do look better compared to earlier revisions, but more importantly they also support H.265/HEVC, which is becoming more and more common. RTX 20 series and newer also support AV1 decode... though that format, as amazing as it is, is still being adopted.

     

    Decode matrices:

    https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new

    https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video#Hardware_decoding_and_encoding

     

     

     

  16. I have one of those cards in storage; they run the LSI SAS2108 so they're equivalent to the 9260-8i, not the 9270-8i (SAS2208 chipset).

    FYI, the 1 in 9271 just denotes that it comes packaged with cables.

     

    Anyway, the final firmware version for these cards is the P29 firmware from 2012, which you can find near the bottom of the list on this page:

    https://www.broadcom.com/support/download-search?pg=Storage+Adapters,+Controllers,+and+ICs&pf=RAID+Controller+Cards&pn=All&pa=&po=&dk=&pl=&l=true

     

    Here's the direct link to the 2108 P29 MegaRAID firmware: https://docs.broadcom.com/docs/12350879
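
    Once you have the firmware package, the card can be flashed from within the OS using MegaCLI; the .rom filename below is a placeholder for whatever ships inside the package you downloaded:

    MegaCli -AdpFwFlash -f mr2108fw.rom -a0

    -a0 targets the first adapter; if you have more than one card installed, check with MegaCli -AdpAllInfo -aALL first.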

     

     

  17. What hypervisor are you considering using? 

     

    Each vendor has its own virtual disk format, and typically a migration tool to go with it.

    If you're converting to VHD, are you looking to use Windows Server 2022 with Hyper-V? 

     

    If you're looking at something running QEMU/KVM (Proxmox, Ubuntu, etc.), then you want to create a QCOW2 or RAW virtual disk.

    If you're looking at TrueNAS Scale, then you want to P2V it and write that image to a ZVOL.

     

    There are plenty of tools out there from the likes of Clonezilla, StarWind, etc. that can clone physical disks to virtual images.

    You need to decide what hypervisor you're using, and then convert accordingly. 

    Since you already have the VHD, you can also convert between virtual disk formats (see the example below).
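
    For example, converting an existing VHD to QCOW2 for QEMU/KVM is a one-liner with qemu-img (the filenames are placeholders; -f vpc is qemu-img's name for the VHD format):

    qemu-img convert -p -f vpc -O qcow2 server01.vhd server01.qcow2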

     

    As you're just creating the VHD, you also need to create the virtual machine config and then attach your new VHDs as the virtual disks in that config.

    VMware Converter does this automatically when using VMware, but I assume you won't use them because of the new pricing structure?

     

    *This assumes Windows for everything*

    I can't remember if the HWID changes as part of the P2V process; it may trigger a reactivation.

    Your licensing for the host, though, will depend on what your existing licence includes. Some CALs cover e.g. 5 computers, so I can't answer that for you.

    You could of course run the host unlicensed and it will run just fine; you will just get notifications, lose some personalization and not receive optional updates. 

  18. 33 minutes ago, DaveBetTech said:

    Is there a NAS solution that supports the following requirements:

     

    1.      Redundancy: 1 drive parity. Could start off w/five 10 TB drives with 40 TB effective, then add a 40 TB drive which brings effective capacity to 50 TB (40TB is now baseline), then adding a second 40 TB drive changes effective capacity to 90TB. Over the last couple of decades, drives get bigger and it's been a super pain to add storage. I want something flexible and easy. NOTE: Yes, I know, 40 TB isn't available now or in the near future.

    Most traditional and popular storage solutions do not work like this.

    You could maybe do this with something like Synology SHR, or a RAID-on-filesystem solution like SnapRAID.

     

    But with the likes of Unraid, hardware RAID, Rockstor, Storage Spaces, mdadm, etc., some let you use mixed-size drives, but your parity disk needs to be your largest disk, and/or the virtual disk will only use the capacity of your smallest member disk.

     

    Example:
    If you use Unraid or Rockstor and have 5x10TB disks in "RAID5" and add a 40TB disk, that disk must become your parity; you can then move the old 10TB parity into storage, increasing you to 50TB usable. If you add 2x40TB disks, one can be parity and the other goes in the pool, which would take you to 70TB (if you're discarding 2 of the 10TB drives).

    If you use Linux RAID (md) or hardware RAID and have 5x10TB disks in RAID5, you must upgrade all 5 disks to 40TB before you can expand the virtual disk and the filesystem.

     

    33 minutes ago, DaveBetTech said:

    2.      Changing Parity: Allows for future capability to change parity from 1 to 2 drives. It's fine if the array goes offline, just as long as no data is lost.

    Changing between 1 and 2 disk parity isn't an issue on a number of the solutions out there.

    Linux RAID, Unraid, Rockstor (BTRFS) and hardware RAID (LSI) allow you to do this.

    Other solutions like Storage Spaces and ZFS require you to destroy the pool and create a new pool configuration. 

     

    33 minutes ago, DaveBetTech said:

    3.      Bit Rot Protection: Need checksums and periodic scrubbing.

    Bitrot is somewhat overhyped and really not as big an issue with more modern controllers/drives; they do all their own CRC/ECC internally.

    But if you feel this is needed, then ReFS with Storage Spaces, and ZFS, are your main two options.

    Both of these solutions have their drawbacks as above.

     

    33 minutes ago, DaveBetTech said:

    4.      PLEX APP SUPPORT: Plex works on this NAS (could be out of the box or just an OS dedicated to NAS)

    5.      PLEX GPU SUPPORT: Would want to add a GPU and the OS and PLEX needs to support. Will stick to probably an Nvidia 3060 or 4060, or whatever is a reasonable price.

    If you must have an Nvidia GPU then this rules out Synology (though their higher-tier units do have SoCs that can do hardware transcoding).

     

     

     

    Ultimately the conclusion you came to is correct; there isn't a single solution that can do ALL of the above.

    You would be best to pick what is most important to you out of your main points and make a compromise. 

     

    If the ability to expand one disk at a time, and to add additional parity later, is the most important thing to you, then I would pick Unraid.

    You can upgrade one disk at a time (starting with the parity), and easily run the Plex app in Docker and pass through the Nvidia GPU (Unraid has an official Nvidia driver plugin).

     

    If bitrot protection is more important, then I would pick TrueNAS Scale.

    But keep in mind you can only expand a vdev once all of its disks have been replaced, and/or you must build a vdev equal to the existing one to add it to the pool (e.g. if you have a 5-disk RAIDZ1, your new vdev should be another 5-disk RAIDZ1; see the example below).
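
    For reference, adding that second matching vdev from the shell would look something like this (pool and disk names are placeholders; on TrueNAS Scale you would normally do this through the UI instead):

    zpool add tank raidz1 sdf sdg sdh sdi sdj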

  19. You mention vSphere. The hypervisor (ESXi) is no longer free following the changes resulting from the Broadcom acquisition of VMware. It's now a yearly subscription per host, and is prohibitively expensive. I believe VMUG Advantage is still available at a $200/year subscription, but I'm unsure how the ESXi licensing factors into that now... they also may not be happy if they find out it's being used in production, as VMUG is primarily aimed at those trying to gain VMware certifications.

     

    You might be better off with Proxmox if you want to do clustering between multiple servers (to balance load and/or move VMs for maintenance); it also comes with ZFS support.

    If you just need a hypervisor and will run the servers separately, also consider Windows Server 2022 with the Hyper-V role and Storage Spaces. Both do clustering, but Proxmox is just *easier* and is closer to how VMware works.

  20. On 2/27/2024 at 9:14 AM, Shimejii said:

    Just overall seems like a horrible idea to have a 3-in-one PC for this type of thing.

     

    You want your personal photos and files on a separate thing; you don't want them mixed with a game server that you want people connecting to, as well as films and shows.

     

    Not true, it's just fine when you're running a containerised or virtual environment. 

    He can run all of these in Docker, just use the server for 'compute', and create different data shares (multiple SMB targets) or even virtual disks to separate the roles.

     

    I would create a general data pool and a game server pool. I'd run some flavour of Linux (or something like TrueNAS Scale) and run these via Docker.

    You can go as far as creating separate Docker networks to isolate the groups of containers from being able to 'see' each other (example below).
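
    A rough sketch of that (the network, container and image names are just examples):

    docker network create media_net
    docker network create games_net
    docker run -d --name=plex --network=media_net plexinc/pms-docker
    docker run -d --name=gameserver --network=games_net your-game-server-image

    Containers on media_net can't reach or resolve the containers on games_net (and vice versa) unless you explicitly attach them to both networks.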

     

    Proxmox is also great, but it uses LXCs. I'd personally go TrueNAS Scale because of its extremely wide Docker support, and by using Docker you reduce overhead compared to running virtual machines. Proxmox is better as an open-source alternative to VMware (especially with the pricing changes).

     

    Personally, at home I have my test lab, media server, cloud (Nextcloud & backup), reverse proxy & other functions, and virtual machine game servers all running off a single high-powered server, with a mix of arrays, virtual machines, containers, and VLANs / Docker networks.

  21. Surely they will only be able to apply this to new licences, as existing licences were sold on the basis of a one-off charge with new releases available at no additional cost. It's still in the terms today for buying a licence.

     

    https://unraid.net/pricing

    Do I have to pay for new releases of Unraid?
    No! All license tiers are eligible to run new releases of Unraid OS at no additional cost.

     

  22. What I'm saying, though, is: is there a reason why this application is being installed on servers rather than the users having it on their laptops, since it seems every user has their own instance?

     

    And I'm saying that if you only had the database server and used the cloud, then you wouldn't need any physical server on premises.
