limetechjon


  1. Hi Justin, and thanks for giving unRAID a chance! I think your issue may be that your vdisk is located on the array instead of the cache. This most likely happened because the vdisk you created for Windows was larger than the amount of free space on your cache device. By default, unRAID creates a share to store virtual disks called "domains" and sets its cache policy to "prefer". The prefer cache mode works quite simply: it tries to write all data to the cache by default (and keep it there as persistent storage for accelerated access), but if a file cannot be created there (due to insufficient space, for example), the file is written to an array disk instead.

Here's where the performance issue comes into play. In traditional RAID-based solutions, data for individual files is spread across multiple disks in the array as it's being written. This allows for accelerated writes and reads because multiple disks can provide IO streams at the same time. Parity data is also spread across multiple disks in the array, so you gain performance from RAID across the board. The downsides to these traditional RAID methods are that too many disk failures result in 100% data loss, and expanding a RAID group can be difficult and/or expensive. Traditional RAID groups also typically require all disks to be of the same size, speed, brand, and protocol in order to operate correctly, and when disks fail, it is important to replace them with nearly if not exactly identical replacements.

unRAID doesn't manage RAID in that fashion. Instead, we write individual files to individual disks. This means that if you wanted to, you could yank a drive out of your array, plug it into any Linux-capable system, and read the data off it directly. It also means that even if you lost all but one of your data disks, you could still retrieve all the data on that remaining disk. At the risk of sounding like the late Billy Mays, "but wait! there's more!" You can also mix drives of different sizes, speeds, brands, and protocols, and when a drive fails, you can replace it with a different device type (same size or larger), no problem!

The reason we can do all this while protecting your data is that we trade raw write performance for flexibility through the use of dedicated parity devices. Dedicated parity means that instead of comingling parity data with user data, we dedicate individual storage devices to storing nothing but parity data. Whenever you write data to the array, we also need to update the parity devices, which directly limits overall write performance to the array. That is why the cache pool exists as a feature: by creating a cache pool, you can use a smaller set of devices in a more traditional RAID setting to accelerate write performance to shares. Then, at a later time (3:40 AM PST by default), unRAID moves the files from the cache to the array automatically. From your perspective, the files always appear in the same share, but in reality, they start in the cache pool and are later moved to the array. Beyond temporarily accelerating file writes, the cache can also be used for persistent storage of performance-hungry files like virtual disks and applications/metadata.

To solve your performance issues, I would suggest increasing the size of your cache dramatically.
You should size your cache to be the total size you want available for your vdisk storage + your temporary file caching needs. So if you say, "I write about 20-30GB of new files per day to the array that I want to accelerate and I want enough space in my Windows VM for 150GB of apps and games," you should probably get a 200GB SSD at the minimum for your cache pool.
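A minimal worked example of that sizing rule, using the figures from the post (the numbers are illustrative, not recommendations):

```python
# Cache sizing per the rule above: persistent vdisk storage plus the
# daily write-cache turnover you want the pool to absorb.
vdisk_gb = 150        # persistent space for the Windows VM's apps and games
daily_writes_gb = 30  # high end of the 20-30GB of new files written per day

min_cache_gb = vdisk_gb + daily_writes_gb
print(f"Need at least {min_cache_gb} GB of cache")  # 180 GB, so a 200 GB SSD gives headroom
```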
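And to make the dedicated-parity idea above concrete, here is a toy single-parity sketch (XOR across equal-sized blocks, assumed purely for illustration; it is not unRAID's implementation, but it shows why any one failed data disk can be rebuilt from the survivors plus parity):

```python
from functools import reduce

disk1 = b"\x0a\x0b\x0c"
disk2 = b"\x01\x02\x03"
disk3 = b"\xf0\x0f\xff"

def xor_blocks(*blocks):
    # XOR corresponding bytes across all blocks (blocks must be equal length)
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(disk1, disk2, disk3)   # written to the dedicated parity disk

# If disk2 fails, XORing the surviving disks with parity rebuilds its contents:
rebuilt = xor_blocks(disk1, disk3, parity)
assert rebuilt == disk2
```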
  2. AMD equivalents to Intel are AMD-V = Intel VT-x and AMD-Vi = Intel VT-d. While we have some AMD users, just know that virtualization support (mainly IOMMU) on consumer-grade AMD chipsets can be spotty at best, and the upstream developers tend to test almost exclusively on Intel. So while you are free to try with AMD, any issues you encounter may be quickly blamed on that.
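If you want to check what your CPU exposes before trying, here is a quick Linux-only sketch (my own illustration, not an unRAID tool) that reads /proc/cpuinfo, where the "vmx" flag indicates Intel VT-x and "svm" indicates AMD-V; IOMMU support shows up in the kernel log instead:

```python
def cpu_virt_flags(path="/proc/cpuinfo"):
    # Scan the CPU flags line for the hardware virtualization extensions.
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
                return "no hardware virtualization flag found"
    return "unknown"

if __name__ == "__main__":
    print(cpu_virt_flags())
    # IOMMU (VT-d / AMD-Vi) is reported in the kernel ring buffer, e.g.:
    #   dmesg | grep -e DMAR -e IOMMU
```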
  3. Not sure what you mean by "how." They are all USB if that's what you meant.
  4. Ok, that is true, but there were three issues with that:
     1 - we didn't want to physically damage the cards we were adding to make them fit
     2 - reaching the PCIe locks would be nigh impossible with the full length of the cards
     3 - 7 GTX Titans would require far too much power to supply through a single PSU
     So yes, fitting a full-length/full-height card in a single slot is possible, but for this project, it wouldn't have worked for the reasons indicated above.
  5. Apparently you haven't seen Spinal Tap: https://youtu.be/KOO5S4vxi0o
  6. ... There are multiple guides under the Support -> Videos section, and Linus does a step-by-step setup walkthrough in the 2 Gamers 1 CPU video.
  7. Just to confirm, all the VMs were running as OVMF instances (with proper split VARS support). The GPUs didn't actually seem to support UEFI natively, however, so I had to configure each VM first using VNC for graphics, then add the GPU as secondary graphics, install the GPU drivers, and then remove the VNC graphics. While you don't see graphics output while Windows is booting, as soon as the drivers load, the monitor lights up. Pretty much standard procedure.
  8. You should check out the 2 Gamers 1 CPU video; he covers this step by step.
  9. There is a lot of information available about unRAID, the OS used in the build, on our website at http://lime-technology.com. unRAID boots from a USB flash stick where the OS lives, and then all your storage devices are managed by unRAID. You can create a pool of disks for high performance or an array of disks for high capacity. Either way, data is protected. The pool can act as a write cache for the array, giving you faster write performance in real time and then moving those files to the array at night, when you're not using the system anyway. The pool can also be used to store the virtual disk images for your virtual machines, giving them pretty decent performance, especially if you use SSDs for the pool.

The virtualization technology used in unRAID is QEMU/KVM. However, unlike traditional Linux distributions, unRAID includes all the components required to do all of this as soon as you boot up, and it's all presented through an easy-to-use web interface. As someone earlier in the thread mentioned, the virtualization technologies used for this are freely available to download, install, and configure on any platform, but they are not exactly easy to use for someone looking to do things like what Linus did in this video.

Yup, and unRAID actually automatically applies tweaks to the VM configuration when an NVIDIA GPU is added to ensure no Code 43 errors.
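For the curious, the Code 43 workaround generally amounts to hiding the hypervisor from the NVIDIA driver in the libvirt domain definition. Here is a rough sketch of that kind of tweak (my own illustration, not unRAID's actual code; the "none" vendor id value is an assumption):

```python
import xml.etree.ElementTree as ET

def add_nvidia_tweaks(domain_xml: str) -> str:
    """Add KVM-hiding tweaks to a libvirt domain definition."""
    root = ET.fromstring(domain_xml)
    features = root.find("features")
    if features is None:
        features = ET.SubElement(root, "features")
    # <kvm><hidden state='on'/></kvm> hides the KVM signature from the guest
    kvm = ET.SubElement(features, "kvm")
    ET.SubElement(kvm, "hidden", {"state": "on"})
    # Spoof the Hyper-V vendor id so the driver doesn't detect the hypervisor
    hyperv = ET.SubElement(features, "hyperv")
    ET.SubElement(hyperv, "vendor_id", {"state": "on", "value": "none"})
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(add_nvidia_tweaks("<domain type='kvm'><features/></domain>"))
```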
  10. Interesting feedback. Thanks for that. Encryption is another feature on the roadmap for future inclusion, so good to know that's a working setup for you.
  11. There will definitely be follow-up on the AMD reset issues. I am even stating in my behind-the-scenes blog write-up that I wouldn't necessarily recommend anyone recreate this on their own until that issue is resolved. I think AMD has a huge opportunity to embrace virtualization for the consumer space, but we need to get past these silly bugs. You shouldn't have to patch QEMU every time a new AMD chipset comes out. UEFI support should be ubiquitous. And on the CPU side, it'd be nice if AMD had an equivalent of Intel's ARK site to clearly compare processor speeds, specs, and features and to filter for things like IOMMU support and core count.
  12. Did you ever post over on our forums at lime-technology.com/forum for assistance?
  13. I'm glad to hear you say that. We are just about to start testing with NVMe, and we were worried about using btrfs due to this spec sheet from Intel (scroll to the bottom to the File System Recommendations section): Curious how performance holds up over time and how you configured TRIM, given that they don't want the filesystem to manage discard. Also curious whether you just threw btrfs on there on a whim and it's been working out, or whether you know something we don't about the safety of using btrfs on NVMe devices.
  14. unRAID allows you to choose a host-managed bridge (we call this a public bridge) or an internal NAT'd bridge. The first gives each VM its own IP from the router, but VM-to-VM and VM-to-host traffic communicates through the bridge, never needing to actually hit the router. For example, I could have a VM LAN party with a stock setup that utilizes the NAT'd bridge. There is no need to manually manage all that. We do not yet support VLAN tagging, for those who are wondering, but for most home-user setups, it's not necessary.
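To illustrate the contrast, here is a sketch of the libvirt interface definitions typically behind these two modes, held as annotated Python strings (the bridge and network names are common defaults I'm assuming, not taken from unRAID's generated config):

```python
# Public (host-managed) bridge: the VM gets its own IP from the router's
# DHCP, but VM-to-VM and VM-to-host frames stay on the software bridge.
PUBLIC_BRIDGE_IFACE = """
<interface type='bridge'>
  <source bridge='br0'/>      <!-- host bridge enslaving the physical NIC -->
  <model type='virtio'/>
</interface>
"""

# NAT'd bridge: VMs share the host's IP; libvirt's default network
# (typically virbr0) hands out private addresses and masquerades
# outbound traffic, while VM-to-VM traffic never leaves the bridge.
NAT_BRIDGE_IFACE = """
<interface type='network'>
  <source network='default'/> <!-- libvirt NAT network -->
  <model type='virtio'/>
</interface>
"""
```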