How many drives can you run on a single HBA or SFF-8087 port using SAS expanders before individual hard drives cannot reach their maximum speeds?

DeS_2002

Idk if this belongs here but here goes.

 

Sorry if this question is silly. I'm kind of a networking and storage noob.
 
So I recently got interested in storage servers and learned about SAS expanders and stuff.
 
I didn't know that SAS expanders were a thing, and I thought that if you wanted to connect 24 drives, you needed an HBA with 6 SFF-8087 connectors (each one breaks out to 4 SATA cables).
 
But now I've been looking at 60-bay disk shelves like this, and they usually have QSFP+ or SFF-8087 ports on the back and use SAS expanders.
I've learned that SAS expanders can be thought of like network switches, distributing the available bandwidth between the drives.
 
My question is this: let's say I got one of those 60-bay disk shelves and ran one mini-SAS cable back to the HBA in my server. Would individual drives be bottlenecked?
 
When an HBA like this is 6Gbps, does it mean that all the SAS ports are 6Gbps each or is the entire card running at 6Gbps?
 
Is the total bandwidth of the card the total bandwidth of the x8 PCIe slot?
 
Seagate Exos enterprise 16TB drives advertise a max transfer rate of 260 MB/s. If each mini-SAS SFF-8087 cable has a max theoretical bandwidth of 6Gbps, can it run about 3 of these drives, and would adding more drives to one cable mean each drive can't perform at its max?
 
This 60-bay disk shelf, for example, has 8 ports (they look like QSFP+, idk, I've never seen one in person), so should I get two HBAs with 4 ports each so that I can connect to all 8 of those ports?
Would connecting to just 4, for example, starve the individual drives of bandwidth so that they can't perform at their max speeds?
 
How about connecting just one cable?
 
I'm trying to find out whether it's just simple math, i.e., drive max speeds versus SAS cable max bandwidth, and you simply don't run more drives off one cable than the point where the individual drives get bandwidth limited.
 

1 hour ago, DeS_2002 said:

When an HBA like this is 6Gbps, does it mean that all the SAS ports are 6Gbps each or is the entire card running at 6Gbps?

Each SAS port is 4 lanes wide, so for a SAS 6Gb generation card that is 4 x 6 = 24Gbps of bandwidth per port. So a 2 port 6Gb HBA has a total of 48Gbps, or 6GB/s raw (closer to 4.8GB/s usable once 8b/10b encoding is accounted for).
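If it helps to see the arithmetic laid out, here's a quick back-of-envelope sketch in Python; the lane count, line rate and 8b/10b overhead are just the standard SAS-2 figures, nothing specific to that particular card:

    # Back-of-envelope numbers for a SAS-2 (6 Gbps) wide port and a 2-port HBA.
    # Assumes 4 lanes per SFF-8087/8088 port and 8b/10b line encoding (80% efficient).
    LANE_GBPS = 6               # SAS-2 line rate per lane
    LANES_PER_PORT = 4          # a mini-SAS "wide" port bundles 4 lanes
    ENCODING_EFFICIENCY = 0.8   # 8b/10b puts 10 bits on the wire for every 8 data bits

    raw_port_gbps = LANE_GBPS * LANES_PER_PORT              # 24 Gbps per port
    usable_port_gbps = raw_port_gbps * ENCODING_EFFICIENCY  # ~19.2 Gbps per port

    print(f"per port  : {raw_port_gbps} Gbps raw, ~{usable_port_gbps / 8:.1f} GB/s usable")
    print(f"2-port HBA: {raw_port_gbps * 2} Gbps raw, ~{usable_port_gbps * 2 / 8:.1f} GB/s usable")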

 

Now this is where it gets more complicated: SAS (and SAS disks) supports dual paths to each disk, whereas SATA does not. So for a 60-disk shelf to work correctly you need dual SAS expander modules in the chassis (the shelf itself) and you have to use SAS disks. Those can be 7200 RPM NL-SAS (SATA class but with a SAS interface), but you cannot actually use SATA, otherwise one of two things happens:

  • You'll only have connectivity to half the disks
  • If one of the SAS cables gets unplugged then half the disks will be disconnected

SAS interface disks, however, will continue to operate over a single cable, and you can be sure all of them stay connected.

 

This is because each expander feeds half the shelf on the primary port and the other half on the secondary port. Since the SATA interface does not electrically connect to the second data port, there is no connection on that path. The reason for this design is shelf redundancy: if one of the expanders fails, the other still has a connection to every disk in the shelf and none of them go offline.
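To make the dual-path point concrete, here is a small illustrative sketch in Python; the 30/30 slot split and the expander names are assumptions for the example, not the wiring of any particular shelf:

    # Toy model of a dual-expander shelf (slot layout invented for illustration).
    # Expander "A" is the primary path for the first half of the slots, expander "B"
    # for the second half; dual-ported SAS drives also answer on the secondary path,
    # single-ported SATA drives do not.
    SLOTS = 60

    def reachable_slots(drive_type, failed_expander=None):
        """Slots still reachable when one expander (or its cable) is lost."""
        alive = []
        for slot in range(SLOTS):
            primary = "A" if slot < SLOTS // 2 else "B"
            secondary = "B" if primary == "A" else "A"
            paths = {primary} if drive_type == "SATA" else {primary, secondary}
            if failed_expander is None or paths - {failed_expander}:
                alive.append(slot)
        return alive

    print(len(reachable_slots("SAS", failed_expander="A")))   # 60 -> every drive stays online
    print(len(reachable_slots("SATA", failed_expander="A")))  # 30 -> half the shelf drops off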

 

The above applies to any shelf with expanders, regardless of the number of drive bays. Shelves with fewer bays will typically have only 1 SAS data port per expander rather than 2.

 

Also, the good news for you is that the drive trays for that IBM shelf have SATA to SAS converters, by the looks of it.

 

1 hour ago, DeS_2002 said:
This 60-bay disk shelf, for example, has 8 ports (they look like QSFP+, idk, I've never seen one in person), so should I get two HBAs with 4 ports each so that I can connect to all 8 of those ports?
Would connecting to just 4, for example, starve the individual drives of bandwidth so that they can't perform at their max speeds?

Each SAS expander module has 2 data ports and 2 expander ports (the expander ports connect on to additional shelves). So best practice would be two HBAs with 2 ports each.

 

HBA 1:

  • Port 1 to shelf expander 1 Data Port 1
  • Port 2 to shelf expander 2 Data Port 2

HBA 2:

  • Port 1 to shelf expander 2 Data Port 1
  • Port 2 to shelf expander 1 Data Port 2

This only applies to a single shelf; if you add more shelves, the cabling guide will change.
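For a single shelf, that cabling pattern can also be written out as data; here's a tiny Python check with generic placeholder port labels (not taken from IBM's cabling guide), just to show that each HBA ends up with a path to each expander:

    # The cross-connected cabling above, written out as data.
    CABLES = [
        ("HBA1 port 1", "expander 1 data port 1"),
        ("HBA1 port 2", "expander 2 data port 2"),
        ("HBA2 port 1", "expander 2 data port 1"),
        ("HBA2 port 2", "expander 1 data port 2"),
    ]

    # Every HBA reaches every expander, so losing any single HBA or any single
    # expander still leaves a path to the whole shelf.
    pairs = {(hba.split()[0], exp.split()[1]) for hba, exp in CABLES}
    assert pairs == {("HBA1", "1"), ("HBA1", "2"), ("HBA2", "1"), ("HBA2", "2")}
    print("each HBA has a path to each expander")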

 

1 hour ago, DeS_2002 said:

Seagate Exos enterprise 16TB drives advertise a max transfer rate of 260 MB/s. If each mini-SAS SFF-8087 cable has a max theoretical bandwidth of 6Gbps, can it run about 3 of these drives, and would adding more drives to one cable mean each drive can't perform at its max?

Those are best-case figures for pure sequential transfers. Generally it's best to account for 80-100 MB/s per HDD.

 

Either way, there is so much bandwidth, and so many other factors involved in performance, that there is no reason to worry about the SAS bandwidth to the server or to each drive. You'll be limited in other areas anyway, so it's not worth worrying about. You won't be getting 6GB/s of performance out of your server or HDDs no matter how hard you try, let alone the 12GB/s a 4 port connection to a shelf could provide.
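To put rough numbers on that, here's a quick sketch of how many drives it takes to saturate a single 4-lane 6Gbps port, using the 260 MB/s spec-sheet figure versus the 80-100 MB/s realistic figure from above (the ~2400 MB/s of usable port bandwidth assumes 8b/10b encoding):

    # How many HDDs can one 4-lane 6 Gbps port feed before the cable is the bottleneck?
    PORT_USABLE_MB_S = 4 * 6000 * 0.8 / 8    # 4 lanes x 6 Gbps x 8b/10b -> ~2400 MB/s

    for per_drive_mb_s in (260, 100, 80):    # spec-sheet sequential vs. realistic rates
        drives = PORT_USABLE_MB_S / per_drive_mb_s
        print(f"{per_drive_mb_s} MB/s per drive -> one port saturates at ~{drives:.0f} drives")

Even at full spec-sheet sequential speed a single port covers roughly 9 drives, and at realistic speeds it covers 24-30 drives before the cable is even the limit.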

Edited by leadeater

14 minutes ago, leadeater said:

Each SAS port is 4 lanes wide, so for a SAS 6Gb generation card that is 4 x 6 = 24Gbps of bandwidth per port. So a 2 port 6Gb HBA has a total of 48Gbps, or 6GB/s raw (closer to 4.8GB/s usable once 8b/10b encoding is accounted for).

 

Thanks so much for such a detailed response.

This cleared things up a lot.

 

 


41 minutes ago, leadeater said:

Each SAS port is 4 lanes wide, so for a SAS 6Gb generation card that is 4 x 6 = 24Gbps of bandwidth per port. So a 2 port 6Gb HBA has a total of 48Gbps, or 6GB/s raw (closer to 4.8GB/s usable once 8b/10b encoding is accounted for).

So I have another question.

If for SAS 6Gbps each lane runs at 6Gbps, and each SFF-8088 port can carry four lanes, then a quad port HBA would carry 16 lanes, or 96Gbps.

This quad port HBA is a PCIe 3.0 x8 card, and a PCIe 3.0 x8 slot has a bandwidth of 8GB/s, or 64Gbps, so how does that work?

The PCIe slot doesn't provide enough bandwidth for all the ports if they were to run at their max theoretical speeds?

 


41 minutes ago, DeS_2002 said:

So I have another question.

If for SAS 6Gbps each lane runs at 6Gbps, and each SFF-8088 port can carry four lanes, then a quad port HBA would carry 16 lanes, or 96Gbps.

This quad port HBA is a PCIe 3.0 x8 card, and a PCIe 3.0 x8 slot has a bandwidth of 8GB/s, or 64Gbps, so how does that work?

The PCIe slot doesn't provide enough bandwidth for all the ports if they were to run at their max theoretical speeds?

SAS HBAs and RAID cards are typically PCIe x8, so 7.88Gbps * 8 lanes = ~63Gbps per card. This is one of the reasons why you don't want a quad port HBA but rather two dual port HBAs.
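The same arithmetic as a quick sketch, using the standard PCIe 3.0 128b/130b and SAS-2 8b/10b encoding overheads:

    # Usable bandwidth of a PCIe 3.0 x8 slot vs. the SAS lanes hanging off the card.
    PCIE3_LANE_GBPS = 8 * 128 / 130      # 8 GT/s with 128b/130b encoding ~= 7.88 Gbps/lane
    SAS2_LANE_GBPS = 6 * 8 / 10          # 6 Gbps with 8b/10b encoding = 4.8 Gbps/lane

    pcie_x8 = PCIE3_LANE_GBPS * 8        # ~= 63 Gbps (~7.9 GB/s)
    quad_port_sas = SAS2_LANE_GBPS * 16  # 4 ports x 4 lanes = 76.8 Gbps (~9.6 GB/s)
    dual_port_sas = SAS2_LANE_GBPS * 8   # 2 ports x 4 lanes = 38.4 Gbps (~4.8 GB/s)

    print(f"PCIe 3.0 x8     : {pcie_x8:.1f} Gbps usable")
    print(f"quad-port SAS-2 : {quad_port_sas:.1f} Gbps usable (oversubscribes the slot)")
    print(f"dual-port SAS-2 : {dual_port_sas:.1f} Gbps usable (fits within the slot)")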

 

You're still way above the performance that 60 HDDs can give anyway; you'll get at most something like 5GB/s out of them, and with a single card you have about 7.88GB/s of connectivity back to the system through the PCIe slot.

 

But seriously, you won't ever be limited by your SAS cabling or PCIe bandwidth; you could have literally hundreds of HDDs connected through two PCIe 2.0/3.0 HBAs and not be performance limited by any of this.

 

It is highly unlikely that the server could deal with that kind of data throughput in any software storage or RAID implementation, and you certainly won't be able to serve it at these speeds across a network without expensive and impractical networking.

 

Edit:

Changed some details, brain lapse. There most certainly are PCIe 3.0 6Gb SAS generation cards, I have some lol.
