
FreeNAS SSD Pool slow iscsi performance

Afternoon all, 


Making good progress on my home server, which is to run several VMs, e.g. a gaming PC, a VPN sandbox PC, CCTV, FreeNAS, etc.


Today I've finished creating several pools in FreeNAS. One of the pools is 4x 256GB Samsung SSDs striped for roughly an 800GB pool. This pool is then set up for iSCSI to my gaming VM, to be used for Steam games and applications.
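For anyone wanting to replicate this, the striped pool plus iSCSI zvol can be sketched from the FreeNAS shell roughly like below. The pool name, device names, and zvol size are placeholders, not my actual setup (the FreeNAS GUI does the same thing under the hood):

```shell
# Hypothetical sketch: stripe four SSDs into one pool (no redundancy!)
# and carve out a zvol to export over iSCSI. Device names are examples.
zpool create ssdpool ada1 ada2 ada3 ada4

# Create a ~750G zvol to use as the iSCSI extent for the gaming VM.
zfs create -V 750G ssdpool/games

# Confirm the four drives show up as a plain stripe.
zpool status ssdpool
```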


The FreeNAS and gaming machines are both VMs on the same ESXi 6.7 host.


Transfer speeds using DiskMark come back at only 300-350MB/s. This 4x SSD striped arrangement, when previously set up using Windows Storage Spaces, easily hit 1.5GB/s. What am I missing?


The FreeNAS VM has 4 vCPUs (E5-2696 v4) and 32GB of 2400MHz DDR4 RAM.


If I can't improve the performance I'll just revert to passing the drives through and using Storage Spaces again. I previously had the drives formatted as ReFS, and it turns out the Oculus store requires NTFS for some unknown reason.


I also just wanted the benefit of ZFS and to learn more about using iSCSI.


What kind of Networking devices are you using? Try running iPerf to see what theoretical throughput you get and narrow down the bottleneck. 
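A minimal iperf check between the two VMs might look like this (iperf3 is assumed available on both ends, and the IP address is a placeholder for the FreeNAS VM's address):

```shell
# On the FreeNAS VM: start iperf3 in server mode.
iperf3 -s

# On the gaming VM: run a 30-second test with 4 parallel streams
# against the FreeNAS VM's IP (placeholder address).
iperf3 -c 192.168.1.10 -t 30 -P 4
```

If iperf reports well above what DiskMark shows, the network path isn't the bottleneck and the problem is further down the storage stack.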


5 hours ago, FloRolf said:

What kind of Networking devices are you using? Try running iPerf to see what theoretical throughput you get and narrow down the bottleneck. 

Wouldn't it all just be the stock VMware ESXi virtual network adapters, since they are both just VMs within ESXi?


Will I have to look at multipathing and create several virtual NICs to improve the performance?


Also, a correction: it wasn't Storage Spaces that netted me 1.5GB/s; I was getting mixed up with a different pool. Those speeds were achieved via the motherboard's onboard RAID controller.


OK, to rule out network adapters etc., I took a different approach. I passed the 4x SSD drives through to the gaming VM on an ESXi NVMe virtual controller, striped them using Windows Disk Management, and assigned them drive letter D:\. That got the speeds up to 790MB/s, still short of 1.5GB/s.
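If anyone wants a second opinion beyond DiskMark, a tool like fio can measure sequential throughput directly. This is just a sketch, assuming fio is installed on the Windows VM and D:\ has room for a test file:

```shell
# Hypothetical fio run: 4GiB sequential read at 1MiB blocks against the
# striped D: volume, bypassing the cache (windowsaio is fio's native
# async I/O engine on Windows).
fio --name=seqread --filename=D\:\testfile --size=4G --rw=read \
    --bs=1M --iodepth=32 --ioengine=windowsaio --direct=1
```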


The NVMe drive the gaming VM runs from (C:\) has a throughput of 3.5GB/s, so it's not like that kind of throughput can't be achieved within a VM.

