
I ran out of PCIe slots, so I had to move one of my SSDs (my games install folder) to my Dell server, which acts as an iSCSI DAS, and random reads just plummeted. Sequentials are okay-ish (writes not really) and I do have full link bandwidth, but if I measure random speeds on the target VHD with the target disabled and the VHD mounted locally on the server, it's already half of what the SSD does normally, and from the initiator through iSCSI it's half of that again, so quarter speed or worse.

The setup is kind of ancient: the server is a PowerEdge T420, the cards are Mellanox ConnectX-2s (Linus featured them, I bought them right away and have been running them ever since), and the SSD is a Fusion-io ioDrive2 1.2TB PCIe I/O accelerator, so it was literally made for IOPS 😄

I have no idea if this is normal. I read somewhere that because the VHD is an NTFS file sitting on top of another NTFS filesystem, that hurts performance; back in the day these SSDs could be presented directly as a block-level target with their unobtainable ION Data Accelerator software, so I guess there was a reason for that. Or maybe I misconfigured something, or the network can't keep up somehow. I ran iperf back in the day and it seemed okay. I could still split the SSD in the firmware, they have a virtual controller feature that promises +80% IOPS, but if the network can't handle it, that wouldn't help.
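To put numbers on the three stages (SSD natively on the server, VHD mounted locally on the server, VHD over iSCSI from the initiator), here's a minimal Python sketch of a QD1 4K random-read test I could run at each stage. It's only a rough comparison tool: the file path is a hypothetical placeholder, it does not bypass the OS cache (so the test file needs to be much larger than the machine's RAM), and a proper tool like CrystalDiskMark or diskspd is still the better measurement.

```python
import os, random, time

PATH = r"D:\iscsi_test.bin"   # hypothetical: large pre-created file on the volume under test
BLOCK = 4096                  # 4K reads
SECONDS = 10                  # test duration

size = os.path.getsize(PATH)
blocks = size // BLOCK

# O_BINARY only exists on Windows; fall back to 0 elsewhere
fd = os.open(PATH, os.O_RDONLY | getattr(os, "O_BINARY", 0))
ops = 0
deadline = time.perf_counter() + SECONDS
try:
    while time.perf_counter() < deadline:
        os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
        os.read(fd, BLOCK)
        ops += 1
finally:
    os.close(fd)

print(f"~{ops / SECONDS:.0f} QD1 4K random read IOPS (OS cache not bypassed)")
```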


Can you show something like CrystalDiskMark results for your system, so we can see how much performance you're getting?

I pretty easily get 10k+ QD1 4K IOPS on a quick and dirty 10GbE setup, and I don't see you posting any numbers for yours.
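For context on why QD1 numbers track latency: at queue depth 1 each read waits for the previous one to complete, so IOPS is roughly the inverse of the per-I/O round-trip time. A quick illustration (made-up latencies, not measurements from this thread):

```python
# At queue depth 1, IOPS is approximately 1 / per-I/O latency (illustrative numbers only).
for latency_us in (100, 250, 500, 1000):
    print(f"{latency_us:>5} us per 4K read -> ~{1_000_000 // latency_us} QD1 IOPS")
```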


Now that I've checked it out a little better, IOPS were not good to begin with on the SSD itself.

These are the numbers measured directly on the card on the server:

[screenshot: native SSD benchmark results]

And this is the iSCSI performance:

[screenshot: iSCSI benchmark results]

Something's up with it, because 4K QD16 read should be around 250k IOPS, and some people have measured even 330k at QD64. Not sure what's going on; maybe it's a PCIe power problem, since the card is not in a 75W slot currently, and the card is not set to PCIe power override either (some Fusion-io cards need that or an external power cable). I have to investigate a bit.
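As a side note on whether the network could even carry those numbers, here's a quick back-of-the-envelope check, assuming the ConnectX-2s are running as 10GbE (just an assumption on my part):

```python
# What various 4K IOPS figures mean in raw bandwidth, versus the ~1.25 GB/s
# ceiling of a 10 Gbit/s link (before iSCSI/TCP overhead). Assumes 10GbE links.
for iops in (100_000, 250_000, 330_000):
    bytes_per_s = iops * 4096
    print(f"{iops:>7} x 4K = {bytes_per_s / 1e9:.2f} GB/s ({bytes_per_s * 8 / 1e9:.1f} Gbit/s)")
```

So the card's full-spec QD16/QD64 numbers would already be close to, or past, what a single 10 Gbit link can carry, but the QD1 results are nowhere near that limit.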


22 hours ago, vkristof01 said:

Something's up with it, because 4K QD16 read should be around 250k IOPS, and some people have measured even 330k at QD64. Not sure what's going on; maybe it's a PCIe power problem, since the card is not in a 75W slot currently, and the card is not set to PCIe power override either (some Fusion-io cards need that or an external power cable). I have to investigate a bit.

What does CPU usage look like on both ends when moving files? Do you see any single-thread limits?

Those Fusion-io cards were fast for their time, but they're not that impressive now.

What CPUs do you have in there? Try taking one out to rule out possible NUMA issues, too.
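If it helps, here's a minimal sketch for watching per-core load during a transfer, to spot a single core pegged near 100%. It uses the third-party psutil package and is just one way to eyeball it; Task Manager's per-core view works too.

```python
# Sample per-core CPU usage for ~10 seconds while a copy/benchmark runs,
# looking for one core maxed out (e.g. an interrupt or iSCSI worker thread).
# Requires: pip install psutil
import psutil

for _ in range(10):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{c:5.1f}" for c in per_core), f"| max {max(per_core):5.1f}%")
```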

