
iSCSI Link Aggregation

Helly

So I have gotten a SAN (Promise VessRAID 1830i) from work which supports 12 drives and doesn't make all that much noise. I've got the thing up and running now and have connected it to my 10Gbit switch with 4 cables. The SAN itself supports 1Gbit per port, and my server, which is connected to the same switch, supports 10Gbit. So I hoped it could use the SAN at 4Gbit. I have my doubts that it even works that way, but that's why I'm here.

I copied over a 34GB file and the Explorer window said it was copying at 700+MB/s. At first I thought yay, it's working, but when it "finished" I noticed the network was still transferring at around 110MB/s and kept going after the copying window was long gone. The lights on the switch only blink for port 1, so I'm guessing it's not working.

 

I did some googling and ran into MPIO. But that looks like it works by having multiple connections between the server and the SAN. Not something I can do here; I don't really have more connections available. Then there's also the fact that the SAN seems to be bundling the connections itself, because when I turn on link aggregation (port 1 as master and 2, 3, 4 as slaves), all ports show up as having the same IP.

 

So the question here is: is what I want even possible? If so, then how? This is my first experience with iSCSI, and getting it to even connect was hard enough; I doubt I could do it again if I reset it :P. I honestly barely know what I'm doing, so please help XD.


17 minutes ago, Helly said:

I did some googling and ran into MPIO. But that looks like it works by having multiple connections between the server and the SAN. Not something I can do here; I don't really have more connections available.

MPIO is what you want; it is the only way that works correctly and will give you 4Gbps of bandwidth for both reads and writes. On the SAN, give each port its own IP. On the server, install and enable MPIO, then connect to the iSCSI target and enable multi-path. After this you need to add 3 more connections to the SAN, but you need to specify initiator IP and target IP pairs so there are 4 unique paths to the SAN, one-to-many from endpoint to destination (fan out). Make sure the multipath mode is Active-Active Round Robin.

 

You only need the 4 ports going to the switch on the SAN side; the server can use its single 10Gb port just fine.
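For reference, a rough sketch of that on the Windows side using the built-in MPIO and iSCSI cmdlets. The addresses are made up (the server's 10Gb NIC as the initiator, the SAN's four 1Gb ports as targets), and it assumes the VessRAID presents a single target IQN, so adjust everything to your own setup:

```powershell
# Install MPIO and let MSDSM claim iSCSI devices (a reboot is usually needed).
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Default load-balance policy: Round Robin (Active-Active).
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Hypothetical addresses: server 10Gb NIC and the SAN's four 1Gb ports.
$initiator = '192.168.1.10'
$targets   = '192.168.1.21', '192.168.1.22', '192.168.1.23', '192.168.1.24'

# Register one portal and discover the target IQN (assumes a single target).
New-IscsiTargetPortal -TargetPortalAddress $targets[0] -InitiatorPortalAddress $initiator
$iqn = (Get-IscsiTarget).NodeAddress

# One session per initiator/target IP pair, all multipath-enabled (the fan out).
foreach ($t in $targets) {
    Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress $t `
        -InitiatorPortalAddress $initiator -IsMultipathEnabled $true -IsPersistent $true
}
```

After that the disk should show four paths; mpclaim -s -d will confirm the Round Robin policy.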


I think you mean NAS; a SAN is a block storage network, so generally at least 2 units with some form of switching.

 

Anyway, that's not how LAG/port bonding works. You can bond them together so you have a single IP interface, but a single TCP stream will only use 1 link at a time, so for a single transfer you are limited to 1Gbit. You do have 4 links, though, so you could have 4 different transfers all hitting it at ~100MB/s at the same time.
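To illustrate why a single copy can't go faster over a LAG: the switch or NIC hashes each flow (typically on source/destination MAC, IP, or port) onto one member link, and one iSCSI session is one TCP flow. This is a toy sketch of the idea only, not any real switch's hash, with made-up addresses:

```powershell
# Toy per-flow hash: the same flow always maps to the same member link,
# so one transfer never exceeds the speed of a single 1Gb port.
$memberLinks = 4
$flows = @(
    '192.168.1.10:50001->192.168.1.20:3260'   # one iSCSI/TCP session
    '192.168.1.10:50002->192.168.1.20:3260'   # a second session from the same host
    '192.168.1.11:50001->192.168.1.20:3260'   # a session from a different host
)
foreach ($flow in $flows) {
    # Same flow string -> same hash -> same link, every time.
    $link = [math]::Abs($flow.GetHashCode()) % $memberLinks
    "$flow uses link $link"
}
```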


1 minute ago, Jarsky said:

a SAN is a block storage network

Which is exactly what iSCSI is.


3 minutes ago, Jarsky said:

I think you mean NAS; a SAN is a block storage network, so generally at least 2 units with some form of switching.

 

Anyway, that's not how LAG/port bonding works. You can bond them together so you have a single IP interface, but a single TCP stream will only use 1 link at a time, so for a single transfer you are limited to 1Gbit. You do have 4 links, though, so you could have 4 different transfers all hitting it at ~100MB/s at the same time.

The protocol being used is iSCSI though, so it's SAN-ish. Either way, LACP doesn't do what many people expect, and iSCSI specifically should not use LACP/bonding when you have the option to use MPIO.


4 hours ago, leadeater said:

MPIO is what you want; it is the only way that works correctly and will give you 4Gbps of bandwidth for both reads and writes. On the SAN, give each port its own IP. On the server, install and enable MPIO, then connect to the iSCSI target and enable multi-path. After this you need to add 3 more connections to the SAN, but you need to specify initiator IP and target IP pairs so there are 4 unique paths to the SAN, one-to-many from endpoint to destination (fan out). Make sure the multipath mode is Active-Active Round Robin.

 

You only need the 4 ports going to the switch on the SAN side; the server can use its single 10Gb port just fine.

I think I set it up right; I followed some more specific guides for MPIO. For the multiple connections I assumed I needed to use MCS. But I'm still getting unexpected results, namely a max of around 170MB/s. All the ports are blinking, so it's an improvement, but I think it could be better. Below is a screenshot of the settings. Anything I might have missed?

Or is 170MB/s just going to be the max for me?

 

[Screenshot: iSCSI initiator properties showing the session settings]
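One way to double-check the paths from the Windows side, using the standard iSCSI cmdlets and the built-in mpclaim tool (nothing here is specific to the Promise unit):

```powershell
# Each iSCSI connection should show a different initiator/target address pair
# (four connections for the four 1Gb ports on the SAN).
Get-IscsiConnection | Format-Table InitiatorAddress, TargetAddress, TargetPortNumber -AutoSize

# The sessions behind those connections, and whether they are up.
Get-IscsiSession | Format-Table TargetNodeAddress, IsConnected, IsPersistent -AutoSize

# MPIO's view: the LUN should list 4 paths with the Round Robin policy.
mpclaim -s -d
```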


Looks like you've got it set up correctly; 170MB/s might be the maximum you can get from that SAN with those disks. Seems a bit low though. Maybe, as a test, reconfigure the array to RAID 0.
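Before rebuilding the array, it may be worth measuring the raw sequential throughput of the iSCSI disk with something like Microsoft's diskspd (a separate download, not built into Windows), rather than relying on Explorer's copy dialog. The drive letter and file name below are placeholders for the iSCSI LUN:

```powershell
# Hypothetical target: X: is the iSCSI LUN. 10GB test file, 1MB sequential reads,
# 30 seconds, 4 threads, 8 outstanding I/Os, OS and hardware caching disabled.
.\diskspd.exe -c10G -b1M -d30 -t4 -o8 -w0 -Sh X:\mpio-test.dat

# Same pattern but 100% writes, to compare against the ~170MB/s copy speed.
.\diskspd.exe -c10G -b1M -d30 -t4 -o8 -w100 -Sh X:\mpio-test.dat
```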


15 hours ago, leadeater said:

Looks like you've got it set up correctly; 170MB/s might be the maximum you can get from that SAN with those disks. Seems a bit low though. Maybe, as a test, reconfigure the array to RAID 0.

I thought about that, but it took 3 days to initialize the RAID 6, so I guess I'll take the loss and keep it running like this. Thanks for the help.


19 hours ago, Helly said:

I thought about that, but it took 3 days to initialize the RAID 6, so I guess I'll take the loss and keep it running like this. Thanks for the help.

I had a thought: maybe there is a battery-backed cache unit in the SAN and the battery is faulty. The performance you're getting is typical of a system with a non-functioning write-back cache, which is what happens when the battery fails. It would be worth opening it up to see if there is a battery.


On 12/28/2019 at 11:52 AM, leadeater said:

I had a thought: maybe there is a battery-backed cache unit in the SAN and the battery is faulty. The performance you're getting is typical of a system with a non-functioning write-back cache, which is what happens when the battery fails. It would be worth opening it up to see if there is a battery.

Nope, there's no battery in it. The web interface says there's no battery detected, and I've been told there never was one, so unfortunately that's not the cause. Unless no battery gives the same result. I never had that problem with my other RAID cards though, and never had a battery for them either. Would it be different for each brand of card/system?


20 minutes ago, Helly said:

Unless no battery gives the same result.

If there is a connector for a battery and there isn't one fitted, then yes. Higher-end RAID cards have battery connectors and write-back cache, so when you're using RAID 5/6 you get around 800MB/s write performance; without the battery (write-back disabled) it drops to 80-200MB/s writes.

