Everything posted by LIGISTX

  1. For MacBooks, I’d just use macOS’s built-in Time Machine. It works great. It doesn’t make an image of the machine, but you can set up a new Mac from a Time Machine backup easily, so an “image” really isn’t needed. It also has file history and the like nicely built in. For Windows, as others suggested, you’d need a third-party app. Windows does have options similar to Time Machine, but in my experience they don’t work nearly as well.
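     If you prefer the command line, macOS also exposes Time Machine through the tmutil tool. A minimal sketch, assuming a backup volume mounted at a placeholder path:

     ```
     sudo tmutil setdestination /Volumes/BackupDrive   # point Time Machine at the drive
     tmutil startbackup                                # kick off a backup right now
     tmutil listbackups                                # list completed backup snapshots
     ```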
  2. If it’s still working as intended… I wouldn’t bother “upgrading”. To do this correctly, it will be expensive. To do it “crappily”, it won’t be all that expensive but probably wouldn’t gain you much. If it works, just keep letting it work. If it isn’t working for you… why is that? What is the reason you think it needs to be upgraded?
  3. The Proxmox host runs on RAID 1 NVMe SSDs, and my VMs boot off those same NVMe drives. TrueNAS is a guest on Proxmox, with a PCIe HBA passed through so TrueNAS has block-level access to all the drives; it doesn’t “know” it’s virtualized… All other VMs that need data from TrueNAS’s network shares reach it via SMB or NFS. Virtually, of course, but a vNIC is just a NIC at the end of the day (see the example mount below). I’m not sure exactly what you mean by that. TrueNAS, for me, as with most homelabs, is just being used as a NAS. I am not booting VMs from TrueNAS storage.
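     For reference, a service VM consuming a TrueNAS share might carry a line like this in /etc/fstab (the hostname, dataset path, and mountpoint are placeholders for illustration):

     ```
     # NFS share from the TrueNAS VM, mounted at boot once the network is up
     truenas.lan:/mnt/tank/media  /data  nfs  defaults,_netdev  0  0
     ```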
  4. The machine never had any performance issues. I gave VMs either 1 or 2 threads, and the hypervisor was easily able to manage CPU load… I’m running relatively light services, so it isn’t a big deal. The most “load” it ever saw was TrueNAS doing virtual-network reads or writes (faster than gigabit, but slower than 10 gig) and some Plex transcodes. But that workload never presented an issue for it.
  5. It’ll be more than fine. I ran my homelab on an i3-6100… and had plenty of VMs running. My 6100 ran ESXi, with VMs for TrueNAS, 3x Ubuntu Server, Windows LTSC, Home Assistant, and a handful of Docker containers under one of the Ubuntu VMs. All on 28 GB of RAM… I did eventually upgrade to my current homelab in my signature, but that was more because I needed more PCIe and more RAM; honestly the 6100 CPU was never an issue.
  6. What’s the question exactly? That is exactly what I do... My NAS is a NAS, and services (like Plex, but also many other things) use the SMB/NFS network shares as their /data locations. I just don't run the actual services ON the NAS; they run separately. Plex runs in its own Ubuntu Server VM next to my TrueNAS VM, for instance.
  7. Yea, it is too bad that some of the old dogs on the TrueNAS forum are extremely set in their ways... they need to take some chill pills for sure. But it's not difficult to use. TrueNAS was the first thing I learned in the homelab sphere about a decade ago, and it's been simple enough. Since I would rather not deal with "services" on my NAS, I can't really speak to how easy or difficult that is on either platform. I virtualize everything so I can leave my NAS as a NAS. IMO, especially with the performance of today's hardware, there isn't much reason not to. Use Proxmox as the host, have a NAS VM working as a NAS, and then have VMs for all the other services/containers/stuff you need. No real reason to try and shoehorn services into your NAS; just spin up dedicated VMs or containers outside of it.
  8. TrueNAS is very simple... and there are SO many guides on YouTube showing how to set it up. The forums can be a little hit or miss; a few of the folks have been there too long and just don't want to believe anyone outside of an enterprise-level client could possibly use TrueNAS, but there are also plenty of great folks. They are not going to refuse to help you simply because you don't have ECC RAM... and from experience, the knowledge there is immense. Also check out the Level1Techs and Lawrence Systems forums; plenty of folks there know ZFS and TrueNAS as well if you ever run into issues. I still stand by TrueNAS: it's an enterprise appliance... it's just going to work, plain and simple. Unraid, who knows; it's more "for fun". If you are serious about data integrity, it's worth learning TrueNAS. It really isn't very difficult.
  9. I am not entirely sold on this. ZFS is ZFS, but if the goal is long-term stability, personally I would trust an enterprise appliance, which is TrueNAS. Yes, TrueNAS is just Debian now, but having the backing of iXsystems for the overall stability of the OS does mean a lot. ZFS is where the magic is, but there is value in doing an abundance of testing before pushing updates into production, and an enterprise-grade appliance is going to do that better than other solutions.
  10. Then you shouldn’t use Unraid. Use TrueNAS, or anything that uses ZFS. ZFS is the best option to protect data for the long term. Why not get a used server system? My old i3-6100 used ECC; I bought a cheap HPE server with the i3, a server-chipset mobo, and RAM, all brand new, for something like 250 bucks back in 2016, just for reference. Maybe pick up a cheap used Xeon and mobo from eBay? That said, I would run ZFS before I worry about ECC. ZFS will scrub and compare data across all drives to make sure things don’t “rot” (see the sketch below). ECC is helpful for this, but not strictly required; it is just one more way to validate nothing gets flipped in RAM. ZFS scrubs are the real answer to this problem though, and if you are exceedingly paranoid, run Z3 instead of Z2: the third parity drive gives ZFS 50% more parity to repair from during a scrub.
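     For anyone new to ZFS, a scrub is a one-liner. A minimal sketch, assuming a pool named "tank":

     ```
     zpool scrub tank     # walk every block and repair anything that fails its checksum
     zpool status tank    # show scrub progress and any repaired/CKSUM errors
     # TrueNAS has a built-in scrub scheduler; on plain ZFS you could cron it, e.g. monthly:
     # 0 3 1 * * /sbin/zpool scrub tank
     ```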
  11. If this is for work and not homelab, the price of a proper mobo shouldn’t be much of a concern. If you really do need that speed, you probably do need to look at enterprise-grade mobos.
  12. As stated above, a good enterprise HBA will last a very long time. Can it fail? Yes, anything can. But I’d trust a Supermicro HBA to outlast a consumer motherboard… If you want to go the HBA route, look at Dell H310s on eBay. Make sure to get one flashed to IT mode, and get one that comes with SAS-to-SATA cables bundled with it. Usually about 30-40 bucks.
  13. What kind of networking are you trying to accomplish here…? What is connected on the other end that you can pull or send data to at ConnectX-5 speeds? Unless my searches are just incorrect, isn’t that a 100-gigabit NIC? If you have something on the other end that is even remotely able to serve or ingest data that fast, I don’t really know why you’re worried about the cost of going Threadripper. Just put it in an x8 slot; PCIe 3.0 x8 will still easily do 50 gigabit (rough math below)… What exactly is the use case here?
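     The back-of-the-envelope math on that: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so

     ```
     8 GT/s x (128/130) ≈ 7.88 Gbit/s per lane
     7.88 Gbit/s x 8 lanes ≈ 63 Gbit/s ≈ 7.9 GB/s raw
     ```

     which comfortably clears 50 gigabit even before protocol overhead.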
  14. I would also just go with one large array. No reason not to… less capacity lost to redundancy this way. You can EASILY write to or read from a Z2 array at over gigabit networking speeds. There is no point in setting up multiple vdevs for this type of workload. Just use all 10 drives in a single Z2 array (sketch below), or if you want an extra amount of redundancy, Z3, but Z2 should be fine.
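     A minimal sketch of that layout, assuming a pool named "tank" and placeholder device IDs (use your real /dev/disk/by-id names):

     ```
     # single RAIDZ2 vdev across all 10 disks; two drives' worth of parity
     zpool create tank raidz2 \
       /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
       /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
       /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 \
       /dev/disk/by-id/ata-DISK6 /dev/disk/by-id/ata-DISK7 \
       /dev/disk/by-id/ata-DISK8 /dev/disk/by-id/ata-DISK9
     ```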
  15. Run an Ethernet cable…. If you can get an Ethernet cable to the metal shed, you can install an AP inside without issue. Always run wires whenever possible.
  16. According to your picture, you have an EAX12… not an EAX11. https://www.netgear.com/support/product/eax12/#download
  17. I’d look into Unraid if your plan is to increase storage over time. It’ll handle everything you mentioned just fine. Processor is really up to you. How much performance do you need in the VM? You can run the NAS itself off a system from 2010 with no issue… a NAS doesn’t need much CPU. So you just need to determine how much power you need for the rest of what you want to do.
  18. No problem at all. First, I would just confirm on the TrueNAS forums that the 9220-8i works fine, or maybe @Electronics Wizardy will know offhand (I don't; I assume it is fine though). Second, you don't need 2 expanders. You can connect the HBA and the expander with a single cable, plug one of the SAS-to-SATA cables into the HBA itself (which gives you 4 SATA drives), and then you have 5 open ports on the expander, which would give you 20 more SATA devices if you fully populated the expander with SAS-to-SATA cables (diagram below). Third, it looks like the HBA comes with 2 SAS-to-SATA cables, so you would only need to buy 3 more, not 6, for a total of 5, which covers 20 SATA drives. Fourth, you would only need 1 SFF-8087 to SFF-8087 cable in this configuration.
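     To picture the cabling (the expander port count here is from my setup; double-check your exact model):

     ```
     9220-8i HBA (2x SFF-8087 ports)
      ├── port 0 ─ SAS-to-SATA breakout ──► 4 SATA drives
      └── port 1 ─ SFF-8087 cable ───────► SAS expander
                     └── remaining 5 ports ─ SAS-to-SATA breakouts ──► up to 20 SATA drives
     ```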
  19. You don’t need to pass NICs through like this. Proxmox will create a virtual NIC for each VM, each gets its own IP on your subnet, and it works great. No need to overcomplicate that part of it. I run almost 15 VMs on a single 1 gig NIC on my mobo. Sure, if you have lots going on network-transfer-wise, that could be an issue, but for almost all home setups it won’t be, especially if you set up point-to-point 10 gig from your main workstation to TrueNAS. Also, using the virtIO virtual NIC for the VMs, Proxmox actually moves data from VM to VM internally and it won’t even go out over the physical subnet, so you can easily get 10 gig between VMs even if you only have 1 gig physically connected to Proxmox (example below).

      I run Dell H310s. This one is already flashed to IT mode: https://www.ebay.com/itm/155421555013?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=zq9OX4YXQYW&sssrc=4429486&ssuid=B1xTkXm_Qfe&var=&widget_ver=artemis&media=COPY

      SAS expander I use: https://www.ebay.com/itm/144481548648?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=OwHu05S9RzG&sssrc=4429486&ssuid=B1xTkXm_Qfe&var=&widget_ver=artemis&media=COPY There are a few options, and I would do some research, as there may be newer models that make more sense to buy today. I bought all of this almost 8 years ago at this point, but it works perfectly for me, so I have not needed to think about it since.

      SAS-to-SATA cable, since you will need an extra one to get 12 drives working. They also sell 2-packs, so you could always do that instead: https://www.ebay.com/itm/153572967108?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=jFynvEpdSpS&sssrc=4429486&ssuid=B1xTkXm_Qfe&var=&widget_ver=artemis&media=COPY

      To connect the HBA to the SAS expander: https://www.ebay.com/itm/164322850606?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=CvlQ154ETKC&sssrc=4429486&ssuid=B1xTkXm_Qfe&var=&widget_ver=artemis&media=COPY
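     For what it’s worth, attaching a virtIO NIC in Proxmox is a one-liner (the VM ID and bridge name below are placeholders; use whatever yours are):

     ```
     # give VM 100 a paravirtual NIC on the default bridge
     qm set 100 --net0 virtio,bridge=vmbr0
     ```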
  20. Yes, do this. On eBay you can find them already flashed into the required IT mode. This is required for use with TrueNAS, since IT mode turns the card into a simple HBA and passes all the drives straight to TrueNAS for ZFS to manage. The SAS expander needs a PCIe slot, but only for power; no data goes through its PCIe connector. I believe my expander can use Molex power instead of PCIe if you need to get creative...

      I run Proxmox and pass the HBA through to TrueNAS; this is how you want to do it. You must pass the entire PCIe device through to TrueNAS so ZFS can see the drives directly. You will need to enable IOMMU (short version below): https://forum.proxmox.com/threads/where-to-add-intel_iommu-on-and-iommu-pt.131592/post-578296

      You could set up a 10 gig point-to-point network between TrueNAS and your main PC. You would need 2 QSFP+ NICs, and you can get fiber transceivers and run fiber between TrueNAS and the PC. I do this, and with the homelab in my signature (ZFS, 10x 4TB drives in Z2) I can read and write at about 5 gigabit (if the data I am trying to read is in ARC, then I obviously get full 10 gig reads).

      Hope this helps - I would say you have some more forum reading and learning to do before you dive too deep :). Definitely don't hesitate to ask more questions, but with the info I provided you should be able to start googling around and finding more info. Homelab and virtualization is the type of thing you really need to put the time into and ask questions when you are unsure - it isn't something any single person can just "tell you exactly how to do" :).
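     The short version of what that link walks through, assuming an Intel CPU and GRUB on the Proxmox host (the PCIe address and VM ID are placeholders; find yours with lspci and your VM list):

     ```
     # /etc/default/grub on the Proxmox host
     GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
     # apply and reboot
     update-grub
     # then pass the whole HBA through to the TrueNAS VM
     qm set 101 --hostpci0 0000:03:00.0
     ```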
  21. Exactly what others said, but to add a little more detail… Gigabit switches can move data at gigabit on every single port simultaneously. If you had 2 gigabit internet service, two PCs would be able to do full gigabit at the same time out to the Internet. But since you “only” (this is still very fast…) have 1 gigabit internet, that will become the bottleneck if multiple PCs are trying to hit the internet at the same time; the switch will not be the bottleneck. The router is the device that picks and chooses which devices get priority and balances how the speed is distributed. Switches are just “Y-splitters” for Ethernet cables. It’s more complicated than this in reality, but as a basic mental model, this is how it works in practice.
  22. Do not open up ports for WOL. That is extremely insecure. The way to do this is to have another machine inside your network (it could be a Raspberry Pi) that you remote into via something like TeamViewer or SSH, then use that to WOL your desktop (sketch below). Opening a port to a service that is not meant to be exposed to the internet is going to be a very bad time… just asking to get hacked. Also, yea, you can’t WOL over WiFi.
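     A sketch of the safe approach over SSH (the hostname, user, and MAC address are placeholders; the wakeonlan tool is in most distro repos):

     ```
     # from outside: hop through the always-on Pi, which sends the magic packet on the LAN
     ssh pi@homelab-pi "wakeonlan AA:BB:CC:DD:EE:FF"
     ```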
  23. So switch back to your old router (the one that was faster) and download it, then switch the routers out again...
  24. It’s not too bad under game load. IIRC I would see it in the high 60s, low 70s in games. When stress testing it to confirm stability, though, I’d see 95°C, which is hotter than I’d like.