2 Gamers, 1 CPU - Virtualized Gaming Build Log

Anyone else get 2/3 of the way through this vid, pause, and say out loud, "this is way too complicated"?



Why would we pay $60-$130 (depending on the features we need) for unRAID when we can use Citrix XenServer or VMware vSphere (ESXi) for *free*, and have all the same capabilities?

 

 

We actually have built-in support for OpenELEC as another VM in unRAID.  It's even easier to set up than Windows 10, because you don't need to download anything in advance.  You select the OpenELEC template option in the webGui for unRAID and a download button appears that will download the ~292MB virtual image you need for it.  Assign some CPUs and a GPU (and any input devices if you desire) and fire it up!  You could also install SteamOS on there, although we don't have a guide up for the specifics of doing that yet.
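For the curious, the webGui is really just writing a libvirt domain definition for QEMU/KVM behind the scenes.  Here's a rough sketch of the idea using libvirt-python -- the PCI address, vdisk path, and sizes are placeholders for illustration, not what the template actually emits:

    # Minimal sketch: define and start a KVM guest with a GPU passed through.
    # Requires the libvirt-python bindings on the host; all values are examples.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>openelec-htpc</name>
      <memory unit='GiB'>2</memory>
      <vcpu placement='static'>2</vcpu>
      <os><type arch='x86_64' machine='q35'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/cache/vms/openelec.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <!-- the GPU handed straight to the guest -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </source>
        </hostdev>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)        # register the VM definition
    dom.create()                            # and fire it up

The template's form fields (CPUs, RAM, GPU, input devices) map onto that XML, which is why the setup is just a few dropdowns.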

 

The idea of unRAID is to have a NAS, virtualization host, and application server inside the same rig, then allow users to partition their system resources accordingly.  With high-end hardware as capable as what Linus showed, you have more horsepower than room to gallop, so let's find a way to make use of those ponies!


I would challenge anyone to use VirtualBox, VMware, or Hyper-V to try and recreate what Linus did in this video using the same hardware.  We have found that other solutions are far more complicated to set up and don't work with nearly as many different GPUs / motherboards as ours does.

 

Challenge accepted!  I'm not going to go out and buy all new hardware -- I'm just going to use the several-generations older hardware I have...

 

My rig:

HP Z800 with 2 Xeon X5677's @ 3.457GHz running ESXi 6.0.0-2809209

32GB DDR3

Primary datastore: (3) 240GB SSDs (using RAID0 from the onboard LSI Logic HBA)

 

The first VM - Win10 "Gaming" system (Champions Online and Star Trek Online - I'm not into FPSes and the like):

160GB vHDD

6GB RAM

4 vCPU

Passed-through devices: one pair of USB ports, Radeon HD 6970

I use a Cables2Go 42419, 42422, and 42405 for the HDMI cable, and a standard USB over Cat5 extension for keyboard / mouse

 

 

The second VM - WHS2011 (NAS, Plex Media Server, Sonarr, SABnzbd):

170GB vHDD

6GB RAM

4 vCPU

Passed-through devices: PCIe USB3 controller, PCIe ASMedia eSATA controller, onboard 6-port SATA controller

On the eSATA controller, I have a port multiplier with five 1TB HDDs connected.  This is used for my "NAS", and was created with StableBit's DrivePool software.  I also have StableBit's DriveScanner installed to warn me of any impending drive failures.

This system is controlled via Chrome Remote Desktop

 

The third VM - Windows 8 "tabletop RPG map display" system:

50GB vHDD

2GB RAM

2 vCPU

Passed-through devices: Radeon HD 7470

This system is controlled via Chrome Remote Desktop, so it doesn't need keyboard/mouse abilities.  I am using a Cables2Go RapidRun 50' analog runner with a VGA+audio wall plate and VGA+audio flying lead for the projector.

 

___

Other than the Windows OS and the StableBit software licenses, the rest of the software was free.  I think it's a much better idea to have a bare-metal hypervisor handling things instead of having an unRAID Linux system try to handle these duties, PLUS, ESXi doesn't cost anything for a home user (unlike unRAID).  Yes, you need a Windows system (or VM) to run the vSphere client, but I don't think that's going to be a problem for most people who'd be doing this.  The OSes were easy to install, and it's a bonus that I don't have to install a graphics card for each VM (as Linus seems to indicate is needed for the unRAID setup).


I've been running two games on my rig for a while now using a program called SoftXpand. It's ~$50, and is fairly easy to set up. I'm running a Phenom II X6 @ 3.6GHz, 16GB RAM, and a single 7950 Boost. Some games work, some games don't; anything that works is playable on dual 1080p. Mostly it's mouse issues in FPS games, but I'm able to run two copies of Guild Wars 2, which was the original goal. I haven't watched the entire video yet, but this is my setup so my girlfriend and I can play together, since we're strapped for money and space. It uses the same copy of Windows, so we don't need more than one Windows install. It's an RDP setup, so it's not really taxing on anything. Very little setup to get it functional. Edit: Your method takes much longer to set up, but seems more maintenance-free. A little more set-and-forget than SoftXpand, but it's also more expensive and requires a second (or third, really) GPU. If I ever need to actually build a rig for a dual setup again, I'll probably go this route, but until then SoftXpand seems to cover easy-mode setup better.

 

_____
Stolen from my YouTube comment.

The idea is nice, but this method is terribly expensive.  Outside of workstation use, there's not much point to doing it this way.  The setup requires a fair bit of work, multiple GPUs plus a burner GPU, two OS copies, and similarly priced software (unRAID vs SoftXpand).

 

I think you can get past the 2 OS copies since it's the same motherboard, so they should register.  But three GPUs and slightly more expensive software pushes the price up a fair bit.  HOWEVER, as said above, this does seem to work a lot better than SoftXpand.  Virtualization rather than RDP means ALL games work with little compatibility issue, which is my only real complaint.  Not having to run Sandboxie or similar systems for things, and no conflicts with things like PunkBuster, are well worth the extra money.  Plus, you get a full GPU's power without having to share.



Well nicely done, but not really a true acceptance of my challenge (didn't use the same hardware).  I'd also like to know how performance was for the GPUs compared to bare metal.  Still a nice setup nonetheless!


I like how the devs are in here defending their product.  It's pretty cool.  I signed up primarily just to say that.

 

Unfortunately, this comes MONTHS too late for me to test it out.  I built my boys two brand-new machines because I was getting annoyed with them pestering me about playing multiplayer Minecraft and other games, and the games they want to play wouldn't run effectively on the hardware I had lying around.  So I dropped the coin on a couple of cheaper (not cheapest) ASUS boards and a couple of i5s with 8GB of DDR3 each.  I used extra 500GB drives I had lying around and they're happy.

 

If I'd known this existed, I'd still have had to hum and haw over whether it was worth the one-time fee of $100+ for the software, and then rely on one machine.  And if that one board dies, I'm looking at two miserable kids.  With two machines, if one machine dies, they have to go back to sharing a single computer, which is a much lesser evil.


I think using *older* hardware, for the same benefit / features, is acceptable in this case...

As to the gaming performance, I can only speak to STO (I don't play other games), using its internal FPS display, and relay that the FPS numbers are identical to when I ran it on my i7-2600K under Windows 7 (the video card was moved from that system to the Z800).  There is also no perceptible input lag (I even had a friend who's a 'snob' about these things take a look, and he couldn't see any input lag either).

 

One thing I'd like to do is add another decently powerful graphics card to the system and see how it does with two VMs going.  I imagine it'd be fine, considering I have two Xeons in the system and my power supply is 1100W, so providing power to another card should be easy as pie...

 

I built this system with inspiration from thehomeserverblog.com (which has unfortunately not been updated in over a year).  The nice thing with the Z800 is that most of its USB ports appear as separate controllers to ESXi, so they can be split off to the various VMs, if one doesn't need more than USB 1.1/2.0 speeds...
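(If you want to check whether your own board splits its USB ports across separate controllers before buying anything, a quick sysfs scan on any Linux host will show you, along with the IOMMU group each controller lands in, which is what matters for KVM passthrough.  ESXi keeps its own passthrough list in the vSphere client; this is just a rough sketch.)

    # List USB host controllers as PCI functions and their IOMMU groups (Linux only).
    # Nothing here is ESXi- or unRAID-specific; the paths are standard sysfs.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    for dev in sorted(os.listdir(PCI_ROOT)):
        with open(os.path.join(PCI_ROOT, dev, "class")) as f:
            pci_class = f.read().strip()
        if not pci_class.startswith("0x0c03"):      # 0x0c03xx = USB controller
            continue
        group_link = os.path.join(PCI_ROOT, dev, "iommu_group")
        group = os.path.basename(os.readlink(group_link)) if os.path.islink(group_link) else "n/a"
        print(dev, "USB controller, IOMMU group", group)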

As to the challenge being done on identical hardware -- if all of this works on two older Xeons, there is little reason something similar shouldn't work on what Linus put together, save for the NVIDIA cards, as those have a poor track record of working with PCI passthrough in ESXi, though I hear they work in XenServer.  I've pretty well always stuck to AMD cards (save for a brief stint with a GTX 570)...

 

 

Well nicely done, but not really a true acceptance of my challenge (didn't use the same hardware).  I'd also like to know how performance was for the GPUs compared to bare metal.  Still a nice setup nonetheless!


Now this is the kind of stuff I absolutely love and look forward to seeing!

It's especially refreshing after that rather disappointing, if amusing, pfSense build, where basically nothing happened aside from Linus killing 3 motherboards and a CPU... (pretty much)

 

My only question is why did you go with unRAID? Not that it isn't worth it if you actually plan on doing something like this, and it's certainly easier to set up, but why not something free such as PCI passthrough? Either way this is still one of my favorite videos from Linus.


@LinusTech is it possible to use one of the virtual machines for video editing and rendering while recording and playing on the other one? How would you connect them so it's smooth (access to the same drive?), and with only one keyboard and mouse shared across both?


"pci passthrough" isn't a product by itself.  Did you mean to put XenServer, vSphere/ESXi, KVM, or another actual product name there?

 

 

My only question is why did you go with unRAID? Not that it isn't worth it if you actually plan on doing something like this, and it's certainly easier to set up, but why not something free such as PCI passthrough?


Depends on whether the video editing / recording primarily uses the CPU or the GPU.  If the former, it'd be possible, but not as nice an experience as dedicating all that hardware to one specific task.  If the latter, it should be pretty decent at it.  If you don't mind your video editing / rendering potentially taking longer than it would if you had dedicated *all* of the hardware to that task, it'd work just fine.  :)

 

As to accessing both from a single keyboard / mouse -- a product like Synergy would handle that.  There are other products, but that was the first one in the Google search results.

 

For storage access -- how 'smooth' accessing your storage will be depends entirely on what hardware makes up your storage pool.  For a situation like the one you've described, you'll want a dedicated RAID array of nice fast HDDs/SSDs.

 

 

@LinusTechTips is it possible to use one of the virtual machines for video editing and rendering while recording and playing on the other one? How would you connect them so it's smooth (access to the same drive?), and with only one keyboard and mouse shared across both?


I think using *older* hardware, for the same benefit / features, is acceptable in this case...

As to the challenge being done on identical hardware -- if all of this works on two older Xeons, there is little reason something similar shouldn't work on what Linus put together, save for the NVIDIA cards, as those have a poor track record of working with PCI passthrough in ESXi, though I hear they work in XenServer.  I've pretty well always stuck to AMD cards (save for a brief stint with a GTX 570)...

 

Here are my comments regarding your setup.  Please keep in mind, I'm not trying to say what you've done is poor by any means; I just want to call out the key differences between what you've done and what unRAID can do.

 

1 - The hardware matters only because of the GPUs, really.  I see that you have only AMD-based GPUs in your system.  While those may work with ESXi just fine, NVIDIA GTX-series GPUs do not work on ESXi.  There is an open thread on the VMware Communities site talking about this.  While AMD cards "work", some folks have had easier success than others.  We have enhanced our QEMU/KVM implementation to automatically deploy mechanisms to get around the GTX-specific issues for most devices.  There are still some that have issues, but many just work out of the box now.

 

2 - Your storage configuration offers no redundancy (RAID0), which means if any of your disks fail, your entire system comes to a halt.  unRAID is designed as a NAS first and a virtualization host / application server second.  It prioritizes data protection over anything else, which is why you can store data in a parity-protected array of disks or a performance-centric, but still-protected cache pool.  This lets you store data on unRAID from VMs or other devices on your network using traditional network file sharing methods (e.g. SMB).  Virtual disks can live in the cache pool or the parity-protected array.  We recommend OS virtual disks live on the cache pool for enhanced performance whereas you put data-based virtual disks on the array for capacity, but you can ultimately configure it any way you want.
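(If the parity bit sounds like magic: single parity is just the byte-wise XOR of every data disk, so any one failed disk can be rebuilt from the parity disk plus the survivors.  A toy illustration of the idea, not unRAID's actual code:)

    # Toy sketch of single-parity protection: parity = XOR of all data disks,
    # so any ONE missing disk can be rebuilt from parity + the remaining disks.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    disk1 = b"movie data.."
    disk2 = b"photo data.."
    disk3 = b"music data.."
    parity = xor_blocks([disk1, disk2, disk3])

    # Pretend disk2 died; rebuild it from parity and the surviving disks.
    rebuilt = xor_blocks([parity, disk1, disk3])
    assert rebuilt == disk2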

 

3 - The installation / initial setup of ESXi takes longer than unRAID.  You have to set up a VMware account, then download the ISO, burn it to CD/USB, boot up, install to another storage device, reboot, then optionally configure network settings from the server itself before you can configure VMs.  And at that point, you need a Windows-based device running the vSphere Client in order to connect to and manage your hypervisor.  With unRAID, you could use a Mac or even mobile devices to connect to and manage the system.  Remember, we have a browser-based management console.

 

4 - Your WHS2011 server setup is interesting.  I'm curious why you didn't opt to use Microsoft's storage management as opposed to StableBit, but that's beside the point.  So your NAS solution is a ~$50 Windows Home Server license plus a $30 license for StableBit, where you pass through a SATA controller from the host.  That's an interesting setup, but it seems a bit complicated to me, and WHS2011 actually reaches end of life on April 12, 2016, so this isn't really a long-term solution for any newcomers, but I respect why you set it up this way.  The only issue here is that WHS2011 is definitely a more complicated setup than unRAID, because you first had to configure a VM with PCI passthrough, then install WHS, then install StableBit, then lay out your SMB shares, and I'm not sure how you'd handle support for other file protocols like NFS or utilize Apple Time Machine features without AFP.  Maybe you didn't have those requirements, but unRAID can handle all of those out of the box because it was designed as a NAS first.

 

5 - Without running benchmarks using the same hardware, I can't say how well your setup performs compared to bare metal.  I tested this with our solution and you can read about it here; in short, we achieved 98% of bare-metal performance in 3DMark using a VM and GPU passthrough: http://lime-technology.com/gaming-on-a-nas-you-better-believe-it/

 

6 - There is one major feature that Linus didn't have time to explore in the video: Docker.  Virtual machines are great and provide a wonderful avenue for interesting setups like GPU passthrough, but for service applications such as CrashPlan, Plex, Sync, Dropbox, and many others, Docker containers provide an easier and more manageable method.  Containers don't require hardware virtualization support and are crazy efficient in their use of system resources because they are optimized for Linux.  While ESXi can create VMs, unRAID can spawn both virtual machines and Docker containers on the same host and at the same time.  This is all while also operating as a NAS.
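(To give a feel for how lightweight that is compared to a full VM, here's roughly what spinning up one of those containers looks like using the Docker SDK for Python.  unRAID does this through templates in the webGui; the image name and host paths below are purely illustrative.)

    # Rough sketch: launch a Plex container the way a template would.
    # Requires the "docker" Python package and a running Docker daemon.
    import docker

    client = docker.from_env()
    client.containers.run(
        "linuxserver/plex",                   # example image
        name="plex",
        network_mode="host",                  # media servers usually want host networking
        volumes={
            "/mnt/user/appdata/plex": {"bind": "/config", "mode": "rw"},
            "/mnt/user/media":        {"bind": "/media",  "mode": "ro"},
        },
        restart_policy={"Name": "unless-stopped"},
        detach=True,
    )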

 

In short, your setup is very interesting and I can tell you must have a bit of an IT background in virtualization as I do.  But the sheer amount of effort it would take for someone to recreate the setup you have is orders of magnitude higher than what it takes with unRAID.


Am I the only one wondering whether two different Steam profiles have to be used? And what about copies of the games you want to play -- is only one copy needed, or two?

This was a very interesting video, and I hope to see more things like this!


Am I the only one wondering whether two different Steam profiles have to be used? And what about copies of the games you want to play -- is only one copy needed, or two?

This was a very interesting video, and I hope to see more things like this!

Yes, it's essentially two computers running in one case (and sharing the same CPU, RAM, and maybe HDD).


Yes, it's essentially two computers running in one case (and sharing the same CPU, RAM, and maybe HDD).

Thank you. It's times like these that I wish a local Steam guest account would be implemented.


Thank you. It's times like these that I wish a local Steam guest account would be implemented.

That's what splitscreen is for.

And besides, Battlefront was free because it was a beta.

It's not on Steam, it's on Origin.


That's what splitscreen is for.

And besides, Battlefront was free because it was a beta.

It's not on Steam, it's on Origin.

Agreed, but support for splitscreen on PC is rare. I am well aware of Battlefront being on Origin and of its free beta.

Thanks for clarifying my first post.


Replies in-line:

 

Here are my comments regarding your setup.  Please keep in mind, I'm not trying to say what you've done is poor by any means; I just want to call out the key differences between what you've done and what unRAID can do.

 

1 - The hardware matters only because of the GPUs, really.  I see that you have only AMD-based GPUs in your system.  While those may work with ESXi just fine, NVIDIA GTX-series GPUs do not work on ESXi.  There is an open thread on the VMware Communities site talking about this.  While AMD cards "work", some folks have had easier success than others.  We have enhanced our QEMU/KVM implementation to automatically deploy mechanisms to get around the GTX-specific issues for most devices.  There are still some that have issues, but many just work out of the box now.

 

--If you read through the thread (or at least my posts) a bit, you'll see I mentioned this specific issue.

 

2 - Your storage configuration offers no redundancy (RAID0), which means if any of your disks fail, your entire system comes to a halt.  unRAID is designed as a NAS first and a virtualization host / application server second.  It prioritizes data protection over anything else, which is why you can store data in a parity-protected array of disks or a performance-centric, but still-protected cache pool.  This lets you store data on unRAID from VMs or other devices on your network using traditional network file sharing methods (e.g. SMB).  Virtual disks can live in the cache pool or the parity-protected array.  We recommend OS virtual disks live on the cache pool for enhanced performance whereas you put data-based virtual disks on the array for capacity, but you can ultimately configure it any way you want.

--There is no technical reason why I couldn't use RAID1 or 5 in my setup.  It was a choice, on my part, not to do so.  This has no bearing on my original point, and I'm not sure why you brought it up, as I assume you are well aware that I could easily change the RAID level and gain redundancy, if that were my goal.  Also, you're using "cache" incorrectly if you're using a "cache" pool as permanent storage.  I find it interesting that you pick this bit out, and don't mention that, if the Z800 fails, I lose *all* of my systems...

 

3 - The installation / initial setup of ESXi takes longer than unRAID.  You have to set up a VMware account, then download the ISO, burn it to CD/USB, boot up, install to another storage device, reboot, then optionally configure network settings from the server itself before you can configure VMs.  And at that point, you need a Windows-based device running the vSphere Client in order to connect to and manage your hypervisor.  With unRAID, you could use a Mac or even mobile devices to connect to and manage the system.  Remember, we have a browser-based management console.

 

--The installation / initial setup of ESXi takes roughly 10-20 minutes.  You have to do *all* of the steps you mentioned for ESXi with unRAID.  Beyond that, I also previously mentioned the need of a Windows device to add / interact with the VM configurations.  With your setup, you need *a* device (it just needs to have a web browser), so you still have the requirement of a device separate from the hypervisor.  As I previously mentioned, it's extremely unlikely that a person wanting to configure a system like this won't have access to a device with Windows installed (unless said person goes out of their way to avoid this, and I doubt they're who this is targeted at).

 

4 - Your WHS2011 server setup is interesting.  I'm curious why you didn't opt to use Microsoft's storage management as opposed to StableBit, but that's beside the point.  So your NAS solution is a ~$50 Windows Home Server license plus a $30 license for StableBit, where you pass through a SATA controller from the host.  That's an interesting setup, but it seems a bit complicated to me, and WHS2011 actually reaches end of life on April 12, 2016, so this isn't really a long-term solution for any newcomers, but I respect why you set it up this way.  The only issue here is that WHS2011 is definitely a more complicated setup than unRAID, because you first had to configure a VM with PCI passthrough, then install WHS, then install StableBit, then lay out your SMB shares, and I'm not sure how you'd handle support for other file protocols like NFS or utilize Apple Time Machine features without AFP.  Maybe you didn't have those requirements, but unRAID can handle all of those out of the box because it was designed as a NAS first.

 

-- My primary goal was to have something that functioned like a "Drobo", *without* the price of a Drobo.  Storage Spaces (or whatever MS is calling it) isn't available in WHS, and isn't as easy / intuitive to set up and use as DrivePool, and in the few minutes I spent with it, I didn't see an easy way to, as an example, remove a smaller drive from the storage pool in order to install a larger drive *without* having to manually move around a boatload of data (I wanted the storage pool to work like a "Drobo").  Also, if I'm not mistaken, you can't take a drive that was part of a Storage Spaces pool and read its data in another system (in the case that the host system died).  

--As to WHS2011 -- that isn't a requirement, it's just what I had laying around.  Before I created this system, I had individual PC's living all around my house.  This system was built to condense them all into a single system.  Since I had WHS2011 installed on a physical system, I simply P2V'd it, then made the VM changes I outlined previously.  As a bonus, my electricity usage noticeably went down, which is always good.  :)

 

-- As to my WHS2011 VM being "more complicated" -- I don't think so.  I either install ESXi, then create the WHS2011 VM (with the proper hardware passed to it), or install unRAID (adding in the extra cost of that software, which comes close to the cost of WHS2011 / DrivePool), then create the WHS2011 VM (with the proper hardware passed to it).  I fail to see how either way is "more complicated" than the other.  The SMB share bit still has to be 'laid out' in either scenario, so I don't understand why you mentioned it, either.  Besides, no guests need access to them (any media they wish to access can be handled through one of two Nexus Players in my house, or they can navigate straight to the Plex web interface).

-- As for file sharing protocols -- that was not mentioned / specified in Linus' video, so why should that somehow matter here?  Besides, Apple systems (which will *never* be a part of my computing family) can recognize / utilize SMB shares.  That all said, if I wanted to cater to non-Windows systems, all I'd have to do is remove the DrivePool software, create a new VM, assign the SATA HBA to it, and install any x86-based NAS product (including unRAID, if I desired), and I'd have that functionality -- all without interrupting *any* of the other VM's on the system (since they're on a separate datastore).

 

5 - Without running benchmarks using the same hardware, I can't say how well your setup performs compared to bare metal.  I tested this with our solution and you can read about it here; in short, we achieved 98% of bare-metal performance in 3DMark using a VM and GPU passthrough: http://lime-technology.com/gaming-on-a-nas-you-better-believe-it/

 

-- I can't believe that it'd be significantly different, given video cards of similar capability (due to the non-NVIDIA nature of ESXi), especially since the host OS (in my case, ESXi) isn't taking care of multiple items in the background like unRAID does...

 

6 - There is one major feature that Linus didn't have time to explore in the video: Docker.  Virtual machines are great and provide a wonderful avenue for interesting setups like GPU passthrough, but for service applications such as CrashPlan, Plex, Sync, Dropbox, and many others, Docker containers provide an easier and more manageable method.  Containers don't require hardware virtualization support and are crazy efficient in their use of system resources because they are optimized for Linux.  While ESXi can create VMs, unRAID can spawn both virtual machines and Docker containers on the same host and at the same time.  This is all while also operating as a NAS.

 

-- I am still of the school of thought that a NAS should do NAS things, and a hypervisor should do hypervisor things.  You know what they say about the "Jack of All Trades", right?

 

In short, your setup is very interesting and I can tell you must have a bit of an IT background in virtualization as I do.  But the sheer amount of effort it would take for someone to recreate the setup you have is orders of magnitude higher than what it takes with unRAID.

 

-- Actually, I have very little background specific to virtualization, at least professionally.  At work I create a master VM image that gets pushed to several hundred ZCBs, and I create the ThinApps that are assigned to those virtual desktops, but that's it.  Everything else I've learned through Google searches, and specifically from thehomeserverblog.com.  I have been repairing / building / maintaining PCs for more than 20 years, though, which certainly helps me with the hardware portion of all of this.  ;)

I still disagree that my setup is "orders of magnitude" more difficult than what it takes with your product.  In fact, seeing what Linus had to go through in the video, watching some YouTube videos on setting things up in unRAID, and conversing with you, it seems to me that the same amount of effort is needed -- it's just that the specific tasks are different.

 


Agreed, but support for splitscreen on PC is rare. I am well aware of Battlefront being on Origin and of its free beta.

Thanks for clarifying my first post.

Rocket League is a big game with 4 player splitscreen

 

But I do agree, there should be more splitscreen games, especially on PC, the leading platform for gaming by far.


--If you read through the thread (or at least my posts) a bit, you'll see I mentioned this specific issue.

 

Sorry, must have missed that part.  That said, it's a pretty big miss considering NVIDIA has the majority share of the gaming GPU market.  

 

 

--There is no technical reason why I couldn't use RAID1 or 5 in my setup.  It was a choice, on my part, not to do so.  This has no bearing on my original point, and I'm not sure why you brought it up, as I assume you are well aware that I could easily change the RAID level and gain redundancy, if that were my goal.  Also, you're using "cache" incorrectly if you're using a "cache" pool as permanent storage.  I find it interesting that you pick this bit out, and don't mention that, if the Z800 fails, I lose *all* of my systems...

 

I was simply calling out a design difference in how you proposed your system as opposed to how unRAID works natively.  I agree, you could have chosen differently, but you didn't, so I simply highlighted that.  In addition, a hardware-based RAID like this is far less portable.  With unRAID, you could literally buy all new hardware except your storage devices, move them over, start it up, and everything picks up right where it left off.  Your hardware-based RAID solution doesn't offer that kind of portability.

 

Also, the reason we call it a cache pool is that its primary function is to cache write operations destined for the parity-protected array, then move them over to the array at a later time so as to improve write performance.  That is its primary function and why we refer to it as such.  The ability to force data to live on the cache pool is just a feature, but I agree, IT terminology can be confusing in general, so I don't hold you at fault for this misunderstanding.
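(For anyone unfamiliar with the mechanism: new writes land on the SSD cache pool first, and a scheduled job later migrates them onto the parity-protected array.  Conceptually it boils down to something like the sketch below; the paths are illustrative and the real mover handles far more edge cases.)

    # Conceptual "mover": migrate files from the fast cache pool to the array.
    # Illustrative paths only; not the actual unRAID implementation.
    import os
    import shutil

    CACHE = "/mnt/cache/share"
    ARRAY = "/mnt/disk1/share"

    for root, _dirs, files in os.walk(CACHE):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, CACHE)
            dst = os.path.join(ARRAY, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)    # the slower parity-protected write happens here, off-hours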

 

 

--The installation / initial setup of ESXi takes roughly 10-20 minutes.  You have to do *all* of the steps you mentioned for ESXi with unRAID.  Beyond that, I also previously mentioned the need of a Windows device to add / interact with the VM configurations.  With your setup, you need *a* device (it just needs to have a web browser), so you still have the requirement of a device separate from the hypervisor.  As I previously mentioned, it's extremely unlikely that a person wanting to configure a system like this won't have access to a device with Windows installed (unless said person goes out of their way to avoid this, and I doubt they're who this is targeted at).

 

Have you tried unRAID?  It's not the same setup at all.  You plug a USB stick into a system, copy the files to it (or you can buy one pre-configured from us), and then you boot it up.  There is no "install" to another device at that point.  The flash device is already the installed version of unRAID.

 

As far as needing a separate device from the hypervisor, you can buy the pre-configured USB flash drive from us to avoid needing an x86 computer at all for configuration (just do it from a tablet/smartphone using a browser).  You could also do the entire thing from a Mac OS X device, which you can't do with VMware.  Even VMware Fusion doesn't have the same management options as the Windows-based client.

 

As far as someone wanting to do this already having a separate Windows-based system, our aim is to remove the need for that second system altogether.  One master system with proper resource partitioning.

 

 

-- As to my WHS2011 VM being "more complicated" -- I don't think so.  I either install ESXi, then create the WHS2011 VM (with the proper hardware passed to it), or install unRAID (adding in the extra cost of that software, which comes close to the cost of WHS2011 / DrivePool), then create the WHS2011 VM (with the proper hardware passed to it).  I fail to see how either way is "more complicated" than the other.  The SMB share bit still has to be 'laid out' in either scenario, so I don't understand why you mentioned it, either.  Besides, no guests need access to them (any media they wish to access can be handled through one of two Nexus Players in my house, or they can navigate straight to the Plex web interface).

 

Well, I'm comparing to someone who is not familiar with setting up the things you've done here.  I don't think many would know how to use all the VMware tools or how to set up WHS2011.  It also required the extra software you mentioned.  It just seems like lots of layered technology, whereas with unRAID we deliver all that functionality out of the core OS itself.

 

 

-- As for file sharing protocols -- that was not mentioned / specified in Linus' video, so why should that somehow matter here?  Besides, Apple systems (which will *never* be a part of my computing family) can recognize / utilize SMB shares.  That all said, if I wanted to cater to non-Windows systems, all I'd have to do is remove the DrivePool software, create a new VM, assign the SATA HBA to it, and install any x86-based NAS product (including unRAID, if I desired), and I'd have that functionality -- all without interrupting *any* of the other VM's on the system (since they're on a separate datastore).

 

The main reason is to call out the extra stuff unRAID can do over just ESXi by itself.  And while you may not have Apple in your computing family, there are many that do and would love to utilize their NAS for Time Machine functionality.  And while yes, you could install another NAS OS as a VM, again, that's another product / solution you have to install and master.  There are lots of layers of technology involved from multiple vendors.  If you want something that acts as a NAS out of the box and offers a simple way to pool/manage storage, but also want to consolidate your desktop PC into it as well, ESXi is a much more complicated animal given that you have to master it and another system for storage management.  It's not as simple as "just install a NAS OS and presto, I have SMB shares."

 

 

-- I can't believe that it'd be significantly different, given video cards of similar capability (due to the non-NVIDIA nature of ESXi), especially since the host OS (in my case, ESXi) isn't taking care of multiple items in the background like unRAID does...

 

It will be significantly different.  First, ESXi requires that you have an emulated graphics device in addition to the passed-through device, which makes the emulated graphics the primary display.  It also means performance overhead for ESXi with 3D graphics compared to the native passthrough with QEMU/KVM that we're doing on unRAID.  With unRAID, we can tell the hypervisor not to create any emulated graphics adapter at all, and let the passed-through GPU naturally take that role itself.  This means you install your Windows VM directly on the attached monitor, not through a remote VNC session.

 

Also, unRAID isn't doing much in the background and you can completely isolate CPU cores for NAS services from VMs, which allows for things like what Linus did to be possible (eliminating context-switching).
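(The pinning side of that is plain libvirt under the hood.  A minimal sketch with libvirt-python is below -- the domain name and core numbers are made up, and unRAID exposes the same thing through the VM settings page rather than code.)

    # Pin each guest vCPU to a dedicated host core so the gaming VM never
    # competes with NAS services for CPU time.  Equivalent to `virsh vcpupin`;
    # the domain must already be running.
    import libvirt

    PINNING = {0: {4}, 1: {5}, 2: {6}, 3: {7}}   # vCPU -> host core(s); example values

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("windows10-gamer")   # hypothetical domain name
    host_cpus = conn.getInfo()[2]                # number of logical CPUs on the host
    for vcpu, cores in PINNING.items():
        cpumap = tuple(i in cores for i in range(host_cpus))
        dom.pinVcpu(vcpu, cpumap)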

 

There are many who talk about poor performance with PCI pass through for gaming on VMware and other solutions.  The methods built into QEMU/KVM are definitely cutting edge, support a wider array of GPUs, offer near bare-metal performance, and with unRAID, take much less time to configure.

 

 

-- I am still of the school of thought that a NAS should do NAS things, and a hypervisor should do hypervisor things.  You know what they say about the "Jack of All Trades", right?

 

I do know, but you're missing the point.  Your system is doing all those things just the same, so isn't it still a jack of all trades?  Your argument is that the layer that controls creating VMs should be isolated from the layer to manage storage.  My argument is that it's not necessary and virtualizing just for the sake of virtualizing doesn't make sense to me.  Look up the trends in IT convergence and you'll see even the big boys are moving towards a model of converging hardware appliances into less and less physical equipment.  We are simply offering that same capability at a consumer/prosumer scale.


"pci passthrough" isn't a product by itself.  Did you mean to put XenServer, vSphere/ESXi, KVM, or another actual product name there?

 

No, because it isn't a platform- or product-specific feature, which is exactly why I just said PCI passthrough by itself... so obviously you can use it on whatever hypervisor you wish (as long as it's supported, of course; I believe you also need a 3.x kernel to support it as well, but I don't really know about that).


I assume 2 windows 10 installs means 2 license keys?


I assume 2 windows 10 installs means 2 license keys?

Correct.


Nah Intel gives it to them for free in bulk.

Yeah, I know. I was just joking about how he uses them all the time. I would too if I got them for free.

