michrech

Member
  • Posts

    146
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Gender
    Not Telling

System

  • CPU
    Xeon X5677 x2
  • Motherboard
    HP Z800 mainboard
  • RAM
    32GB (8x4GB)
  • GPU
    Radeon R9 270x, Radeon HD 7470
  • Case
    HP Z800
  • Storage
    3× 240GB SSD, 6× 1TB SATA (StableBit DrivePool)
  • PSU
    1110W HP Z800
  • Display(s)
    Dell 24"
  • Cooling
    Air (default Z800 setup)
  • Keyboard
    Logitech K350
  • Mouse
    Logitech (N/A)
  • Sound
    HDMI out via R9 270x
  • Operating System
    ESXi host, Windows 10 primary VM

  1. Your CPU supports VT-d and VT-x, so it'll work, as long as your mainboard also supports VT-d (you can verify the CPU flags with the first sketch after this list). That CPU is only 4 core / 4 thread, though, so you wouldn't have as many resources to give each VM as someone with a 4-core CPU with hyper-threading. Under unRAID, you'd be able to make use of your 980 and 970, but not both in a single VM -- they'd need to be assigned to separate VMs. As to your drives, you will certainly need to wipe them *and* re-install your guest OS (in your case, Windows 10).
  2. I don't believe your power supply would handle both of those cards at the same time. In addition, installing either of those cards into an x1 slot will severely hamper its performance (the PCIe bandwidth sketch after this list shows why).
  3. For an AMD system, you need to make sure the mainboard and processor both support IOMMU (the label AMD uses in place of Intel's "VT-d"); the kernel-log check sketched after this list will confirm it's active. If your motherboard doesn't have enough PCIe slots, you can always replace it with one that does (keeping your processor / RAM).
  4. I don't know why you couldn't add a 7850 to the system later on. Having nVidia cards working in any hypervisor is relatively new, so I don't know that anyone has even tried. As to your second question, no, there are no workarounds for that -- the CPU needs to support VT-d (or IOMMU for an AMD part) for any of this to work.
  5. According to a link I found on their web page, they're using either Xen or KVM. http://lime-technology.com/unraid-6-virtualization-update/
  6. I doubt it's the hypervisor doing that rather than a service (dhcpd, or something else)...
  7. It was my understanding that using graphics cards for Bitcoin mining has been worthless for quite a few years now, at least since the dedicated (and fairly expensive) custom ASICs came onto the scene...
  8. The hypervisor unRAID uses provides a "network bridge" that allows the VMs to share one NIC (a sketch of what that bridge looks like on the host is after this list). In a case where you didn't have 7 graphics cards, you could add more network interfaces and directly assign one to each VM, if you wanted / needed to.
  9. The hypervisor takes care of that task. The TL;DR version is -- the CPU cores aren't "dedicated" to each VM and unusable anywhere else; the oversubscription sketch after this list illustrates the idea...
  10. I completely disagree with pretty much everything you just typed (as far as unRAID being "easier" to use than ESXi). Yes, unRAID can use nVidia cards via a workaround (one that was admitted to be 'spotty' in the 2 gamers 1 tower video), but I don't find it any easier to configure otherwise.
  11. I thought consumer-grade hardware was implied, since this topic is about that very subject, at least as far as the graphics cards are concerned...
  12. I'm not quite sure what you're asking here, however, there is no requirement that the GPUs match. In my ESXi host I'm running a Radeon HD 6970 and an HD 7470, each attached to a different VM.
  13. From what I've read, the issue is nVidia actively blocking their driver from starting if it detects it's running in a virtual machine, not an issue with ESXi itself (the commonly cited workaround is sketched after this list)...
  14. Say what now? I have two graphics cards (though they are different models) in mine, passed through to VMs, without issue. Another VM has two PCIe cards (not graphics) *plus* the onboard ICH8R 6-port SATA controller passed through to it...
  15. There isn't any reason some of the parts, like the hot-swap bay or the drives, couldn't be swapped out for cheaper parts or a different manufacturer. Linus used what he did because sponsors sent those parts to him. As to the amount of storage -- what he'll end up with will be closer to 30TB, because of the way his storage array was configured (the parity math is sketched after this list). That said, the amount of storage "needed" is dictated by what you plan to do with it. I currently have around 6TB, which holds the movies / TV shows I've decided to keep and leaves me a couple TB free in case I ever expand (or use my storage pool for other purposes). I generally don't keep TV show episodes after I've watched them, save for a few shows I find I can re-watch, which keeps my storage footprint fairly modest (compared to what some of these guys build).
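
For post 1, here's a minimal sketch of how you could confirm the CPU extensions from a Linux environment (unRAID itself is Linux). It only checks the CPU flags; VT-d / IOMMU also has to be enabled in the BIOS and supported by the mainboard.

```python
# Minimal sketch: look for hardware-virtualization flags in /proc/cpuinfo
# on a Linux host. "vmx" = Intel VT-x, "svm" = AMD-V. Note this does NOT
# prove VT-d / IOMMU works -- that also depends on the mainboard and BIOS.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x present")
elif "svm" in flags:
    print("AMD-V present")
else:
    print("no hardware virtualization flags found")
```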
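On the x1-slot point from post 2, the per-lane numbers make the bottleneck obvious. This uses the PCIe 3.0 figure of roughly 985 MB/s per lane per direction (after 128b/130b encoding); PCIe 2.0 lanes are about half that.

```python
# Rough per-direction PCIe 3.0 bandwidth by lane count, to show why an
# x1 slot starves a GPU that expects an x16 link (~985 MB/s per lane
# after 128b/130b encoding overhead).
GB_PER_LANE = 0.985
for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: ~{lanes * GB_PER_LANE:.1f} GB/s per direction")
```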
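For the IOMMU check mentioned in post 3, the kernel log is the quickest confirmation once the option is enabled in the BIOS. A sketch (reading the log usually requires root):

```python
# Sketch: scan the kernel ring buffer for IOMMU initialization messages.
# "DMAR" lines indicate Intel VT-d; "AMD-Vi" lines indicate AMD's IOMMU.
import subprocess

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
hits = [l for l in log.splitlines() if "DMAR" in l or "AMD-Vi" in l]
print("\n".join(hits) if hits else "no IOMMU messages found")
```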
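To make the "network bridge" from post 8 concrete, here's roughly what a hypervisor sets up on the host, expressed as a Python wrapper around the standard Linux `ip` tool. The interface names ("br0", "eth0") are placeholders for your own, and this needs root.

```python
# Sketch of a host bridge that lets several VMs share one physical NIC.
# Each VM's tap device is later attached to br0 the same way eth0 is.
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

sh("ip", "link", "add", "name", "br0", "type", "bridge")  # create bridge
sh("ip", "link", "set", "eth0", "master", "br0")          # enslave the NIC
sh("ip", "link", "set", "br0", "up")                      # bring it up
```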
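And to illustrate the scheduling point from post 9: vCPUs are just schedulable threads, so their total can exceed the host's core count. The VM names and counts below are made up for the example.

```python
# Toy illustration of CPU oversubscription: vCPUs aren't reserved cores,
# so the hypervisor can hand out more vCPUs than physical cores exist
# and simply time-slice them, like an OS time-slices processes.
host_cores = 8
vms = {"gaming": 4, "htpc": 2, "nas": 2, "misc": 2}  # hypothetical VMs

total = sum(vms.values())
print(f"host cores: {host_cores}, vCPUs handed out: {total}")
print(f"oversubscription ratio: {total / host_cores:.2f}x")
```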
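Regarding post 13, the workaround people usually cite for ESXi is hiding the hypervisor CPUID bit from the guest by adding hypervisor.cpuid.v0 = "FALSE" to the VM's .vmx file. The sketch below automates that edit; the datastore path is hypothetical, and you should back the file up and power the VM off first.

```python
# Sketch: append the widely reported hypervisor-hiding setting to a .vmx
# file so the nVidia guest driver doesn't detect the virtual machine.
# The datastore path below is a placeholder -- adjust for your host.
VMX = "/vmfs/volumes/datastore1/win10/win10.vmx"  # hypothetical path

with open(VMX) as f:
    lines = [l for l in f if not l.startswith("hypervisor.cpuid.v0")]
lines.append('hypervisor.cpuid.v0 = "FALSE"\n')

with open(VMX, "w") as f:
    f.writelines(lines)
```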
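Finally, the capacity math behind post 15: in an unRAID-style pool, parity drives don't add usable space, which is why the headline drive total overstates what you actually end up with. The drive counts below are illustrative, not Linus's actual configuration.

```python
# Back-of-the-envelope usable capacity for a parity-protected pool:
# N equal-size drives, P of them used for parity.
def usable_tb(drives: int, size_tb: float, parity: int) -> float:
    return (drives - parity) * size_tb

print(usable_tb(drives=6, size_tb=6, parity=1))  # -> 30.0 TB usable
```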