
Slow drive speeds on Windows 10 VM in UnRAID

Hey guys, I am putting together a computer that is functionally similar to the one Linus built in his Gaming NAS video, with an unRAID base and a copy of Windows 10 running as a VM. The issue I am having is that the C: drive on the VM is SLOW! It takes several minutes for the VM to start up, and it is unreasonably slow when loading things. Any idea what might cause this? I feel like I am not giving enough details, but frankly I don't know what details to give; specs are below.

 

One other thing I am unsure about: I want to pass the AVerMedia Live Gamer HD through to the VM so that I can capture stuff, but I don't know how to do that.

 

For hardware I have:

Motherboard: Tyan S7012

CPU: 2x Intel Xeon X5570 (4c/8t each; 6 cores / 6 threads assigned to the Windows VM)

RAM: 16GB Micron DDR3 ECC (4x 4GB, 1333MHz; 12GB for the Windows VM)

GPU: ASUS 1070 STRIX OC (for Windows VM)

PSU: Corsair RM650X (I know cutting it a bit close)

Storage: 3x 2TB HDDs, with 1 of the 3 drives for parity/redundancy, plus a 70GB Corsair Force SSD for cache

I also have a random USB 3.0 PCIe card and an AVerMedia Live Gamer HD.

Please "Quote" me if you want me to see your response.


I'm gonna have to guess it's because of booting off the hard drive (?).



That sounds pretty likely to me as well. Are you sure you're booting off the cache?

 

What version of unRAID are you running?


If possible, can you do a test Windows install by dedicating a single HDD to it, then again with the SSD? There might be a rather large parity performance hit for some reason, even with the SSD cache.

 

Also make sure you have all the latest firmware installed and VT-x/VT-d features enabled.
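If you want to double-check those from the unRAID console itself, a rough sketch like this (plain Linux checks, nothing unRAID-specific) will show whether VT-x and VT-d are actually active:

```python
# Rough check for VT-x (vmx CPU flag) and VT-d / IOMMU support on the host.
# Run from the unRAID console; these are generic Linux checks.
import os

with open("/proc/cpuinfo") as f:
    flags = f.read().split()

# VT-x shows up as the "vmx" flag on Intel CPUs.
print("VT-x (vmx flag):", "present" if "vmx" in flags else "not found")

# With VT-d enabled and the kernel's IOMMU support active,
# /sys/kernel/iommu_groups contains one directory per IOMMU group.
groups_dir = "/sys/kernel/iommu_groups"
groups = os.listdir(groups_dir) if os.path.isdir(groups_dir) else []
print("IOMMU groups:", len(groups), "(0 usually means VT-d is off or not active)")
```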


9 minutes ago, LinusTech said:

That sounds pretty likely to me as well. Are you sure you're booting off the cache?

 

What version of unRAID are you running?

I've booted off of RAID before and it has never been this slow; I am talking 7-10 minutes for Windows to start. I am using unRAID Pro 6.2.4 stable. The "Use cache disk" setting on the Domains share is set to "Prefer".

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

1 minute ago, leadeater said:

If possible, can you do a test Windows install by dedicating a single HDD to it, then again with the SSD? There might be a rather large parity performance hit for some reason, even with the SSD cache.

 

Also make sure you have all the latest firmware installed and VT-x/VT-d features enabled.

I just moved and I don't have any spare HDDs out to test that with, but I might be able to get one tomorrow.

 

As far as the firmware goes, I didn't update the motherboard's BIOS when I got it off eBay, so there could be a newer version. Also, I have VT-x and VT-d enabled in the BIOS.

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

Actually paying attention this time, I think it is more like 15-20 minutes for the computer to boot.

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

OK, this time it took less than 2 minutes to hit the user login and only about another minute after that to finish starting, which is much more reasonable. Not sure why it varies so much.

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

That seems to have worked itself out; I'll let you know if things change. But I could still use some help connecting the HDMI capture device to the VM.

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

Ok, I assume you have an SSD cache installed.

Set up a new share called "VM-C-drive" and set it to "cache only", then add that as the path to your VM, and install Windows on that one. If it still takes that long, then the cache isn't the problem.

7 hours ago, Justin_ said:

I've booted off of RAID before and it has never been this slow; I am talking 7-10 minutes for Windows to start. I am using unRAID Pro 6.2.4 stable. The "Use cache disk" setting on the Domains share is set to "Prefer".

 "Booting off of RAID" sounds strange to me, could you explain your setup in more detail?

And as I said, try Cache only
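To see where your VM's virtual disk currently points, you can also ask libvirt from the unRAID console. A rough sketch (the VM name "Windows 10" is just a placeholder; use whatever your VM is called in the VMs tab):

```python
# Rough sketch: list the disk paths attached to a VM using libvirt's virsh tool
# from the unRAID console. "Windows 10" is a placeholder VM name.
import subprocess

vm_name = "Windows 10"  # placeholder; substitute the name shown in unRAID's VMs tab
output = subprocess.check_output(["virsh", "domblklist", vm_name],
                                 universal_newlines=True)
print(output)
# The listed source path is what the VM boots from. If it sits under a
# cache-only share, the vdisk stays on the SSD; a /mnt/user/... path can be
# backed by either the cache or an array disk, so check where the file
# physically lives if boot times are still slow.
```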


Hi Justin and thanks for giving unRAID a chance!  I think your issue may be that your vdisk is located on the array instead of the cache.  This is happening most likely because the vdisk you created for Windows was larger than the amount of free space on your cache device.

 

By default, unRAID creates a share to store virtual disks called "domains" and sets the cache policy to "prefer".  The prefer cache mode works quite simply by trying to write all data to the cache by default (and keep it there as persistent storage for accelerated access), but in the event a file cannot be created there (due to insufficient space for example), the file will then be written to an array disk instead.  Here's where the performance issue comes into play.
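If you want to confirm where the vdisk actually ended up, a quick check from the console like the sketch below will tell you (the "Windows 10/vdisk1.img" path is just a default-style guess; adjust it to match your VM):

```python
# Check whether the Windows vdisk physically lives on the cache or on an array disk.
# unRAID exposes the cache pool at /mnt/cache and each array disk at /mnt/diskN;
# /mnt/user/<share> is the merged user-share view of the same files.
import glob
import os

vdisk = "domains/Windows 10/vdisk1.img"   # placeholder; adjust to your VM's vdisk

for mount in ["/mnt/cache"] + sorted(glob.glob("/mnt/disk[0-9]*")):
    path = os.path.join(mount, vdisk)
    if os.path.exists(path):
        print("found on %s (%.1f GB)" % (mount, os.path.getsize(path) / 1e9))
```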

 

In traditional RAID-based solutions, data for individual files is spread across multiple disks in the array as it's being written.  This allows for accelerated writes and reads because multiple disks are able to provide IO streams at the same time.  In addition, parity data is also spread across multiple disks in the array at the same time.  This allows you to gain performance using RAID across the board.  Some downsides to these traditional types of RAID methods are that too many disk failures will result in 100% data loss, and expanding your RAID group can be difficult and/or expensive.  In addition, traditional RAID groups typically require all disks to be of the same size, speed, brand, and protocol in order to operate correctly, and when disks fail, it is important to replace them with nearly if not exactly identical replacements.

 

unRAID doesn't manage RAID in that fashion.  Instead, we write individual files to individual disks.  This means that if you wanted to, you could yank a drive out of your array, plug it into any Linux-capable system and read the data off it directly.  This also means that in the event of losing even all but one of your data disks, you could still retrieve all the data that is on that remaining disk.  At the risk of sounding like the late Billy Mays, "but wait!  there's more!"  You also can mix drives of different sizes, speeds, brands, and protocols and when a drive failure occurs, you can replace the failed drive with a different device type (same size or larger) no problem!

 

The reason we are able to do all this while protecting your data is that we trade raw write performance in exchange for flexibility through the use of dedicated parity devices.  Dedicated parity means that instead of commingling parity data with user data, we dedicate individual storage devices to storing nothing but parity data.  This means whenever you write data to the array, we also need to update the parity devices.  This directly limits overall write performance to the array, which is why the cache pool exists as a feature.
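To make the dedicated-parity idea a bit more concrete, here is a toy sketch of single (XOR) parity, which is the same basic idea the first parity drive uses (the real implementation works at the block-device level, of course):

```python
# Toy sketch of single (XOR) parity across three pretend data disks.
# The parity "drive" holds the XOR of all data drives at each byte position,
# so any ONE lost drive can be rebuilt from parity plus the surviving drives.
disk1 = bytes([0x41, 0x42, 0x43])   # pretend contents of data disk 1
disk2 = bytes([0x10, 0x20, 0x30])   # data disk 2
disk3 = bytes([0xFF, 0x00, 0x7E])   # data disk 3

parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

# Simulate losing disk2 and rebuilding it from parity + the remaining disks.
rebuilt = bytes(p ^ a ^ c for p, a, c in zip(parity, disk1, disk3))
print("disk2 recovered correctly:", rebuilt == disk2)   # True
```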

 

By creating a cache pool, you can use a smaller volume of devices in a more traditional RAID setting to accelerate write performance to shares.  Then at a later time (3:40 AM PST by default), unRAID moves the files from the cache to the array automatically.  From your perspective, the files always appear in the same share, but in reality, they start in the cache pool and are later moved to the array.  In addition to just accelerating the writing of files temporarily, the cache can also be used for persistent storage of performance-hungry files like virtual disks and applications/metadata.

 

To solve your performance issues, I would suggest increasing the size of your cache dramatically.  You should size your cache to be the total size you want available for your vdisk storage + your temporary file caching needs.  So if you say, "I write about 20-30GB of new files per day to the array that I want to accelerate and I want enough space in my Windows VM for 150GB of apps and games," you should probably get a 200GB SSD at the minimum for your cache pool.
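As a quick worked version of that sizing math (the numbers are straight from the example above; the 10% headroom factor is just my own padding, not part of the original guidance):

```python
# Back-of-the-envelope cache sizing using the example numbers above.
vm_vdisk_gb = 150      # space wanted inside the Windows VM for apps and games
daily_cache_gb = 30    # upper end of the "new files per day" estimate
headroom = 1.1         # ~10% free space so the cache never runs completely full

recommended_gb = (vm_vdisk_gb + daily_cache_gb) * headroom
print("cache pool should be at least ~%d GB" % recommended_gb)   # ~198 GB -> 200GB+ SSD
```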


3 hours ago, limetechjon said:

--SNIP--

OK, that definitely sounds possible, and I think it would explain why the speed varies.

 

I don't have the cash for it now, but I will save up so that I can throw some better SSDs in the system and have the VM completely on the SSD cache.

 

One question, would I get better performance with a single bigger drive, or two smaller drives? Basically should I go with 1X1TB or 2X500GB?

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

I just looked at my drives to check usage and it looks like the SSD is an unused drive. That may be my fault; I just moved the drives and unRAID to a new box. Anyway, I am going to get that set up again as the cache drive for now.

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

OK... so I re-added the SSD as a cache drive, then went to turn my VM back on, and in the VMs tab unRAID is telling me "No Virtual Machines Installed".

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

OK, when I go to the Shares tab, the "Domains" and "System" shares have the orange triangle and say "Some or all files unprotected". Could it just be trying to rebuild something before it will let me access the VMs?

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

So I tried removing the SSD cache drive as a test, and as soon as I started the array the VM booted up (with autostart on). Any idea what is going on?

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

Sorry to digress, but do you mind sharing what case you chose? I'm interested because I have the same mobo.


23 minutes ago, binzhu1070 said:

Sorry to digress, but do you mind sharing what case you chose? I'm interested because I have the same mobo.

I went with this one https://www.amazon.com/gp/product/B0091IZ1ZG/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

But it is a 4U server case, not a standard tower.

Please "Quote" me if you want me to see your response.

Link to comment
Share on other sites

Link to post
Share on other sites

On 6/7/2017 at 5:11 PM, Justin_ said:

One question, would I get better performance with a single bigger drive, or two smaller drives? Basically should I go with 1X1TB or 2X500GB?

It's always a balancing exercise between data protection and performance. You could install a single decent 1TB SSD (e.g. an 850 EVO) in the cache pool and you would get decent performance. However, your mobo is SATA II, so if you aren't using an HBA card the bandwidth will be limited to 3Gb/s. If that's the case, you could buy a cheaper SSD and it would probably be "good enough". The cache pool is close to native SSD performance if you aren't using more than one drive, especially if you pass it through to the VM directly (although I don't think that's really necessary in most cases anyway). You can use two drives in a RAID-0 equivalent mode for better performance, at the increased risk of losing everything on the cache pool. Some people use that configuration and, I think, have cron jobs to back up their stuff to the array once a day or so, which is an option.
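For a rough sense of what that SATA II ceiling means in practice (assuming the usual 8b/10b encoding on SATA links):

```python
# Rough throughput ceiling for a SATA II (3 Gb/s) link.
# SATA uses 8b/10b encoding, so only 8 of every 10 bits on the wire are data.
line_rate_bits_per_s = 3.0e9
usable_bytes_per_s = line_rate_bits_per_s * 0.8 / 8
print("SATA II usable bandwidth: ~%d MB/s" % (usable_bytes_per_s / 1e6))   # ~300 MB/s
# Most modern SATA SSDs can saturate that, which is why a cheaper drive is
# "good enough" on a SATA II board.
```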

 

I use 2 drives in the cache pool in the default RAID1 config and it's still fairly quick (only takes a few seconds to boot the VM to Windows). I do tweak the Windows installation with all the possible tricks though (see SpaceInvaderOne / Gridrunner videos).

 

In your case, I would recommend a single mainstream 512GB SSD (or more if you are thinking of running tons of VMs, or you are using the cache pool to transfer huge files during the day and absolutely need the performance). That would probably be good enough for your needs (something like the Crucial MX300) and would be simple to set up. Just make sure that when you enable the cache pool with your drive, the vdisks for your VMs are actually moved to the SSD.

 

 

