LIGISTX

Member
  • Posts

    8,403
  • Joined

  • Last visited

Profile Information

  • Gender
    Not Telling
  • Location
    California
  • Member title
    Junior Member

Recent Profile Visitors

4,993 profile views
  1. They shared a little bit of info, but not much... It will be available when it's available. Look at the YouTube video from this week's WAN Show posted in the comment above; it has what little info there is.
  2. This, 100%. Honestly, it's more difficult if they are VMs, because you add an entire extra layer of potential issues and incompatibility. Seriously, just buy a bunch of Xboxes and PS5s, and a few decent computers. You can easily build top-of-the-line gaming PCs for 1,500 bucks, especially if they are really only gaming machines... a 7800X3D with 16 GB of RAM and a 4070 Ti is a phenomenal gaming experience, especially if you run 1440p high-refresh monitors, and especially for kids... but I still think the console idea is the best way to go if you want ease of use for yourself. Consoles only have two modes of operation: working, or entirely broken.
  3. Can you just use thin clients as the devices and present them something like Kasm Workspaces? Just have each thin client only able to access the web page of the specific Kasm workspace you want for each display, and theoretically it gets you what you want... just with a bunch of thin clients acting as the display and audio outs.
  4. Sounds like you need to switch your kids to consoles… seriously, though. PCs shouldn't take much work and attention just to play games… sure, things go sideways because Windows is dumb, but it isn't that often. And settings optimization is meh; if your kids want to figure that out themselves, let them, but I wouldn't spend the calories yourself. Otherwise… just buy many PS5s and Xboxes. They just work, every time, no question.

     I think we all need to have a serious reality check. This isn't slumming it… slumming it would be using hardware from 2014. You can build a balanced machine around a 4090 for 3 grand. So don't? No one is forcing you to build balls-to-the-wall PCs that just sit around. You are putting an unrealistic goal in front of yourself and trying to solve it in a very difficult way. Virtualizing this environment is probably not the answer; it has plenty of headaches and pitfalls of its own that are not well documented, because it's known to be a poor idea. The only reason it is being justified is your contrived situation of "I need to be able to build many extremely high-end gaming machines, which will sit idle most of the time, but I don't actually want to pay for them all"… buy 10 Xboxes and 10 PS5s and you're done.

     Seriously though, I'm just being honest here. I can understand not wanting to manage all the machines, but I don't think this solution is as effective as you want it to be. Now you will have the added fun and excitement of managing all the PCs plus the virtual infrastructure they live on, with the added headache that if any single piece of hardware needs R&R, every machine goes down while the work is performed. I maintain PCs for my family and the family business, and I don't have anywhere near this many headaches, lol. I have to mess with hardware maybe once a year tops, and remote into a machine maybe once every 3-4 months to fix an issue.

     Again, I think you are creating a difficult situation for yourself with unrealistic requirements. You don't need to do this: just upgrade one of the PCs; you don't need to cycle every GPU down the line. Hardware maintenance shouldn't be difficult. Parts don't fail often, and you don't need to upgrade EVERY machine every time a new shiny part comes out. Upgrade your own machine, since you're clearly into PCs, and don't make this a requirement across the entire deployment. Anyways, just my .02.
  5. Nope. I’m sure you will hear about it once it’s released…
  6. If your ISP doesn't use DHCP and instead uses something like PPPoE, you have to manually configure your router. But I agree, I have never had to do this either. Sounds like that to me as well, and I can't think of a reason why anyone would need this. Also, yes, no firewall = you're going to have a bad time. Windows should never be exposed directly to the internet. OP, what problem/issue are you trying to solve here?
  7. Unless you have some pretty wild internet speeds, you don't need L2ARC to cache Plex media for seeding… and you don't need to cache it for playback, either. I actually have ARC fully disabled for my /media directory in TrueNAS… there's no point wasting RAM trying to cache huge, sequential, rarely accessed files. And even when they are accessed, they never need to be read at over gigabit speeds... my array, reading purely from disk with nothing in ARC, can still do well over gigabit… remember, gigabit is only 125 MB/s. A single hard drive can saturate a gigabit link, let alone a ZFS array of multiple drives, especially on large sequential reads (a rough sanity check is sketched below). Sure, if you start hammering the array with lots of IOPS, things will slow down and cause issues, but a home NAS rarely sees that kind of workload. Have your /media directory mounted over NFS in your qBittorrent VM/container, and let it read and write directly to the TrueNAS dataset; there's no reason to stage files elsewhere and shuffle them around afterwards. That way you can leave things seeding forever; just enforce some arbitrary upload bandwidth limit that makes sense for your internet connection.
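     As a back-of-the-envelope illustration of why sequential media playback doesn't need a cache, here is a minimal Python sketch. The drive throughput and stream bitrate are assumptions picked for illustration, not measurements:

        # Can a single disk feed media streams without any cache in front of it?
        GIGABIT_MBPS = 125         # a 1 Gb/s link carries ~125 MB/s, ignoring overhead
        HDD_SEQ_MBPS = 180         # assumed sequential read speed of one modern HDD
        STREAM_MBPS = 80 / 8       # assumed worst-case 4K remux bitrate: 80 Mb/s = 10 MB/s

        print(f"Streams a gigabit link can carry: {GIGABIT_MBPS / STREAM_MBPS:.0f}")
        print(f"Streams one HDD can feed:         {HDD_SEQ_MBPS / STREAM_MBPS:.0f}")
        # Both come out above 10, so the network bottlenecks long before a single
        # drive does, let alone a multi-drive ZFS array doing sequential reads.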
  8. If you are running bare metal, there's no real reason for an HBA… assuming the mobo has enough SATA ports for your needs. If you want to virtualize TrueNAS, that's when an HBA becomes valuable… I run TrueNAS under Proxmox and pass the HBA through to TrueNAS so it has bare-metal access to the drives. For networking, that really depends on what your network looks like and how you want to do this. I run a ConnectX-3 in my homelab; it's also passed through directly to TrueNAS. I then run an Intel 10-gig card in my PC (I forget the specific card offhand; it's not Mellanox, since the ConnectX-3 I previously used on Windows 10 doesn't work with Windows 11…) and run a point-to-point fiber subnet. The rest of my network is standard gigabit, but that's fine. I get the speed between my PC and TrueNAS, and since TrueNAS is virtual under Proxmox, all my homelab services get virtual 10-gig+ between VMs anyway…
  9. I have used both, and I find WireGuard a lot easier (and more performant; it's way lighter weight than OVPN). But either works great. I agree, whichever one actually works for you is the right one to use. I am just curious why WG wouldn't be working. Is the port forwarded on your router?
  10. WG client cert? WireGuard doesn't "use certs"... what do you mean? WireGuard uses keys. This is a random sample config from the interwebz:

      [Interface]
      Address = 192.168.2.1
      PrivateKey = <server's privatekey>
      ListenPort = 51820

      [Peer]
      PublicKey = <client's publickey>
      AllowedIPs = 192.168.2.2/32
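      Since the "keys, not certs" bit trips people up: each WireGuard key is just a base64-encoded Curve25519 (X25519) key. Normally you would generate them with wg genkey / wg pubkey; this is a minimal Python sketch of the same idea, using the third-party cryptography package, purely to show what the key material is:

        # A WireGuard key pair is just a base64-encoded X25519 key pair.
        # Requires: pip install cryptography
        import base64
        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

        private_key = X25519PrivateKey.generate()

        private_b64 = base64.b64encode(
            private_key.private_bytes(
                encoding=serialization.Encoding.Raw,
                format=serialization.PrivateFormat.Raw,
                encryption_algorithm=serialization.NoEncryption(),
            )
        ).decode()

        public_b64 = base64.b64encode(
            private_key.public_key().public_bytes(
                encoding=serialization.Encoding.Raw,
                format=serialization.PublicFormat.Raw,
            )
        ).decode()

        print("PrivateKey =", private_b64)  # goes in [Interface] on this machine
        print("PublicKey  =", public_b64)   # goes in the other side's [Peer]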
  11. I think we may have a different meaning of the word optimize. I am an engineer, and I run my life via optimization... I could easily throw some Optane SLOGs and a very fast L2ARC into my TrueNAS box, but I know that my 10-gig networking is not fast enough, nor is my workload one that would need it... I run 10x 4 TB drives in Z2, and I can write to the array at over 5 Gb/s and read at about the same, unless the data I need is in ARC, in which case it obviously maxes out the 10-gig link. My setup is optimized for my needs; throwing hardware at it doesn't make it more optimized, it makes it less optimized. The only thing I have considered adding is some Optane for special metadata... that would make a very noticeable difference in how fast the NAS feels, since it would be able to pull metadata much quicker, and it reduces the number of reads needed to pull data off the drives in a big way.

      All this said... before you go to L2ARC, you need to maximize actual L1 ARC. How much RAM do you intend to put in this system? I am assuming 32 GB? L2ARC eats up actual ARC just to index what is in L2 (see the sketch below), so until you have hundreds of GB of RAM, which 1) means you likely have a real need for that much ARC, and 2) leaves you some RAM to spare for L2 indexing, it isn't worth it; once you're there, sure, throw some L2 in it. But something else to remember: for almost all home workloads, L2ARC will never actually be hit anyway. Once again, not optimizing...

      I am all for learning; that's the entire point of homelab. But IMO, a big part of that learning is learning what matters and what doesn't, and building systems to a spec that makes sense. Saying you want to add L2ARC and SLOG devices to a home NAS is a lot like going into a "new build thread" and linking the most expensive components on a Newegg page without actually knowing why you picked them. Obviously, building out a capable system as a business expense/self-learning expenditure to better your IT skills is worthwhile and valuable. So, if that is truly the case, fair enough. Just my .02...
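      To put a rough number on the indexing cost: OpenZFS keeps a small ARC header in RAM for every record stored on an L2ARC device, commonly cited at somewhere around 70-90 bytes per record. The exact figure varies by version, so treat this Python sketch as an illustration, not gospel:

        # Rough estimate of RAM consumed just to index an L2ARC device.
        # The bytes-per-record constant is an assumption (~70-90 B in modern OpenZFS).
        HEADER_BYTES_PER_RECORD = 80     # assumed ARC header cost per L2ARC record
        L2ARC_BYTES = 1 * 2**40          # hypothetical 1 TiB L2ARC device

        for recordsize_kib in (128, 16):  # default recordsize vs small-block data
            records = L2ARC_BYTES // (recordsize_kib * 1024)
            overhead_mib = records * HEADER_BYTES_PER_RECORD / 2**20
            print(f"recordsize={recordsize_kib:>3} KiB -> ~{overhead_mib:,.0f} MiB of RAM in headers")

        # ~640 MiB at 128 KiB records, ~5,120 MiB at 16 KiB records: RAM that
        # could otherwise have been real (L1) ARC holding actual hot data.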
  12. Why? What is your use case? You almost certainly don't need L2ARC, and even more certainly don't need a SLOG device. What is your networking speed? That is probably the most important question...
  13. WireGuard is painfully easy, way, way easier than OVPN. You literally just install WireGuard and create a config file on your client. I can't speak to running any VPN in a container; I assume Docker networking becomes a bit of a headache. Just run the VPN bare metal on the host?
  14. Is the PC you connect to the NAS with also hardwired..? If not, going beyond gigabit is not going to matter anyway. What is the use case exactly? What NAS do you have?