
Unraid Server: Maxed out PCI-E lanes on X470, or adapter card issue with M.2 SSD?

The problem: I can't get my third M.2 SSD to show as connected to my server. I've tried it in slots PCI_E5 and PCI_E3, and when my other expansion cards are connected it won't show up. I have my motherboard's PCI-E mode set to x4/x4/x4/x4, and the drive doesn't show up in Unraid as connected.

 

My server is running Unraid. I have 2 SATA expansion cards with 4 HDDs each; 3 HDDs go directly to the motherboard. My SATA expansion card would not show as connected in PCI_E5, but it would in PCI_E3. With the M.2 adapter, the drive would not show up in either E5 or E3.

 

Ryzen 2700X CPU

16GB (2x8GB) DDR4 3200

GTX 1060 6GB GPU

11x 6TB HDDs

3x 2TB M.2 SSDs

 

MSI X470 Gaming Pro Carbon Motherboard. 

 

Here is my motherboard layout:

[attached image: motherboard layout]

 

I've got the two M.2 slots on the motherboard filled. I've got the third drive in one of these PCI-E adapter cards: https://www.amazon.com/dp/B07JJTVGZM

 

This listing states "no drivers required". Do you think my problem is this adapter card, or is it that I'm maxing out my PCI-E lanes, so it's not showing the drive?


Gaming - Ryzen 5800X3D | 64GB 3200mhz  MSI 6900 XT Mini-ITX SFF Build

Home Server (Unraid OS) - Ryzen 2700x | 48GB 3200mhz |  EVGA 1060 6GB | 6TB SSD Cache [3x2TB] 66TB HDD [11x6TB]


25 minutes ago, suchamoneypit said:


What expander cards are they? Are both HBAs?

 

If so, maybe it would be worthwhile to get a SAS expander and plug that into the HBA, that way you don't take up additional PCIe lanes.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon E5 2660 V4 - - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TrueNAS + many other VM’s

 

iPhone 14 Pro - 2018 MacBook Air


58 minutes ago, LIGISTX said:


I'm using these: https://www.amazon.com/gp/product/B00AZ9T3OU . I am not familiar with a SAS expander or an HBA. I'm using a consumer motherboard.

 

And now that I'm thinking about it, what I'm using might actually be bottlenecking my drives?


Just now, suchamoneypit said:


Oh, that isn't what those cards look like... Dump that garbage.

 

https://www.ebay.com/itm/155421555013

 

This is an enterprise-grade RAID card, flashed to IT mode. IT mode turns it into an HBA (host bus adapter) and will pass the drives through to the OS cleanly (as in, it won't try to do any sort of hardware RAID, which you do NOT want with a software solution like ZFS or Unraid). Each SAS port can be broken out to 4x SATA.

 

From there, you can then use a SAS expander: https://www.ebay.com/itm/354791972163

 

Connect them together with this: https://www.ebay.com/itm/284963439854

 

And buy a few more SAS to SATA plugs: https://www.ebay.com/itm/133080894610

 

Now you have as many SATA ports as you need, and it only takes up a single PCIe slot. The SAS expander doesn't use any PCIe lanes for data; it only needs power from the slot.

 

This will free up PCIe slots and lanes if you end up needing that solution.


Aargh, edit those links to remove the extra parameters, at least from the text.

 

Anyway, the manual is here: https://download.msi.com/archive/mnu_exe/M7B78v1.1_EURO.zip

 

From the manual:

 

* 2x PCIe 3.0 x16 slots (PCI_E1, PCI_E3) - Ryzen™ Desktop processors support x16/x0, x8/x8 mode
* 1x PCIe 2.0 x16 slot (PCI_E5, supports x4 mode)*
* 2x PCIe 2.0 x1 slots
* PCI_E5 slot will be unavailable when installing M.2 PCIe SSD in M2_2 slot.

 

SATA:

 

* 8x SATA 6Gb/s ports* (from AMD® X470 Chipset)
* 2x M.2 slots (Key M)*
M2_1 slot (from AMD® processor) supports PCIe 3.0 x4
M2_2 slot (from AMD® X470 Chipset) supports PCIe 2.0 x4 and SATA 6Gb/s 2242/ 2260 /2280 storage devices**
* SATA3 port will be unavailable when installing SATA M.2 SSD in M2_2 slot.
** PCI_E5 slot will be unavailable when installing M.2 PCIe SSD in M2_2 slot.
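If you want to see what actually enumerated (whether the NVMe adapter shows up at all, and what link width each device negotiated), you can read the standard Linux PCI sysfs attributes from the Unraid console. A minimal sketch (the sysfs paths are standard, but some devices don't expose link info, so read defensively):

```python
# Sketch: list the negotiated PCIe link width of every enumerated PCI device.
# Works on any Linux host (e.g. the Unraid console); returns an empty dict
# if /sys/bus/pci isn't available.
from pathlib import Path

def pcie_link_widths() -> dict:
    """Map PCI address -> negotiated link width (as a string, e.g. '8')."""
    base = Path("/sys/bus/pci/devices")
    widths = {}
    if not base.is_dir():
        return widths
    for dev in base.iterdir():
        f = dev / "current_link_width"
        if f.is_file():
            try:
                widths[dev.name] = f.read_text().strip()
            except OSError:
                pass  # some devices don't expose link attributes
    return widths

if __name__ == "__main__":
    for addr, width in sorted(pcie_link_widths().items()):
        print(addr, "x" + width)
```

If the drive in the adapter card doesn't appear in this list at all, the slot is disabled (or the card is dead); if it appears but at an unexpected width, you're looking at a lane-sharing issue instead.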

 

Those IO Crest 4-port SATA cards are not worth it. You can get 8-port cards in IT mode (HBA) for around the same price.

 

Here's an example : https://www.ebay.com/itm/204338124956

 and here's another : https://www.ebay.com/itm/133481123885

 and here's another : https://www.ebay.com/itm/194910024856

 

It will give you 8 SATA ports.

 

You can put the card in PCI_E1 or PCI_E3, as those are PCI-E x8 slots (when both are used), or in PCI_E5, which is a PCI-E x4 slot, but that slot is disabled if you use an NVMe SSD in the second M.2 slot. So you could move that NVMe SSD from the M.2 slot into one of the PCI-E x1 slots using an adapter, for example: https://www.ebay.com/itm/254929823389 - yeah, it's only PCI-E 2.0 x1, so the maximum speed would be 500 MB/s, but then again, do you really need more than 500 MB/s on the second M.2 drive?
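The 500 MB/s figure falls straight out of PCIe arithmetic. A small sketch of the math (per-lane values are approximations after encoding overhead):

```python
# Approximate usable one-direction bandwidth per PCIe lane, in MB/s:
#   Gen 2: 5 GT/s with 8b/10b encoding   -> ~500 MB/s per lane
#   Gen 3: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
PER_LANE_MBPS = {2: 500, 3: 985}

def link_bandwidth(gen: int, lanes: int) -> int:
    """Approximate one-direction bandwidth of a PCIe link in MB/s."""
    return PER_LANE_MBPS[gen] * lanes

# A PCIe 2.0 x1 adapter caps an NVMe drive at ~500 MB/s,
# while a CPU-attached 3.0 x4 M.2 slot allows roughly 8x that.
print(link_bandwidth(2, 1))  # 500
print(link_bandwidth(3, 4))  # 3940
```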

 

But anyway, you have the two slots that run at PCI-E x8, so you could have 3 of these 8-port SATA controllers in total, if you have a Ryzen CPU without integrated graphics. If you have integrated graphics, you only have 8 PCI-E lanes, so only PCI_E1 will work.

 

If you have a video card, you could use a PCI-E x16 to PCI-E x1 riser cable to move it into a PCI-E x1 slot and free up the two PCI-E x16 slots, PCI_E1 and PCI_E3.

For example : https://www.ebay.com/itm/390837988714

 

This way you could have 3 HBA cards, each with 8 SATA ports, in the PCI-E x16 slots (they will run at x8, x8, x4), the 2nd M.2 SSD in a PCI-E x1 slot through an adapter card, and the video card in the second PCI-E x1 slot through a riser - for a server, that's perfectly fine.

 

 

---

 

Edit: Oh yeah, the actual question... can't get the 3rd M.2 working. Well, as I explained, the bottom PCI-E x16 slot, which has 4 PCI-E lanes, is disabled if you have a PCI-E (NVMe) M.2 SSD installed in the 2nd M.2 connector, so your options are to connect it to one of the first two PCI-E x16 slots (which become x8 when both are used at the same time) or to one of the x1 slots using a PCI-E x1 adapter.

 

The x4/x4/x4/x4 thing in the BIOS is probably about supporting bifurcation: splitting the 16 lanes of the first slot (when the second is not used) into 4 groups of 4 lanes, so you can use those cards that have 4 M.2 connectors on them.

So I don't think it helps you here at all - if you have both the E1 and E3 x16 slots populated, they're already each running at x8 maximum.

 


28 minutes ago, LIGISTX said:


Thanks for all the info and links. That seems way better. So with that first card I take up one slot, and using those 1-SAS-to-4-SATA cables I could run 8 hard drives off this one PCI-E slot, right?

 

I'm a little confused on the SAS expander part. Is it wired up like this? Both cards are just $40. Whether I buy 2 of the first card or 1 of each, this way I end up with the same 4 ports for drives regardless. 

[attached image: proposed HBA + expander wiring diagram]

 

At least using just the first card, this would at minimum let me drop down to 1 PCI-E slot in use instead of my current 2, because my motherboard has 6-8 SATA ports itself too. So I can do 8 HDDs on the card and 3 off my motherboard.


34 minutes ago, LIGISTX said:


10 minutes ago, mariushm said:

 

 


Thanks, I missed that in the manual; that definitely makes sense as to why E5 isn't working. As for the speeds, it is important they all run at max speed. All 3 will be together in a shared cache pool, and I can't have one of the three drives slowing down the entire pool, especially since I bought them all together explicitly to be a max-speed cache.

 

With what both of you said, it seems like at least an HBA with 2 SAS ports is definitely something I need. I am ordering one. If I get one, I save a PCI-E slot and probably get better speeds. 


You may want to consider getting one of those enterprise MLC/TLC SATA drives as cache, instead of nvme drives.

 

For example, the Samsung PM863a 960GB SATA drive has 1.3 PB of endurance and it's $70: https://www.ebay.com/itm/175686938823

 

or for example 1.2 TB P3600 U.2 drive using MLC, with 6.5 PB endurance for the 1.2TB model : https://www.ebay.com/itm/175687000783

 

There are PCI-E to U.2 and M.2 to U.2 adapters, e.g.:

pci-e x4 to u.2 (cable not included) https://www.ebay.com/itm/225414136905

m.2 to u.2 (cable not included) https://www.ebay.com/itm/224956090797

 


56 minutes ago, suchamoneypit said:


You only need to run a single SAS cable between the HBA and expander, but running 2 is also fine (I run 2). That gives you 4 free SAS ports on the expander, which can each be broken out to 4 SATA, so 16 SATA total. Or you could run only a single SAS cable between them, which frees up 2 more ports (one on each end) for 6x4 = 24 SATA total if you needed it.
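The fan-out arithmetic can be sketched as follows, assuming a 2-connector HBA and a 6-connector expander (these counts match the linked cards roughly, but they vary by model, so treat them as placeholders):

```python
# Sketch: how many SATA drives you can break out, given that each SAS
# uplink cable consumes one connector on the HBA AND one on the expander,
# and every remaining connector fans out to 4 SATA.
def total_sata(hba_conns: int, exp_conns: int, uplinks: int, fanout: int = 4) -> int:
    """SATA ports available across HBA + expander after wiring uplinks."""
    free = (hba_conns - uplinks) + (exp_conns - uplinks)
    return free * fanout

print(total_sata(2, 6, 2))  # 16  (both HBA connectors used as uplinks)
print(total_sata(2, 6, 1))  # 24  (one free connector on each end)
```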
 

But yes, you can do 8 off the single card, and 3 off the mobo. That’s a totally valid option as well.

 

What OS are you running for this NAS, and what is its use case? Do you actually need cache drives for anything? What speed is your networking?


46 minutes ago, mariushm said:


 

My goal was a total of 6TB of cache, to match a single HDD. I got 3 Kingston NV2 2TB drives; they seem to have an endurance of 640TBW. That 6.5PB of endurance is pretty impressive, but those are used drives, so who knows the lifespan left, and it's like quadruple the cost per TB. I don't expect to use the cache heavily at all, but I want it to be as fast as possible when it is used, which is why I went with triple NVMe drives for cache. I got the NV2 drives for $84 apiece new. I'm very unlikely to ever come close to the 640TBW, and the 3-6x speed improvement will be nice.

 

16 minutes ago, LIGISTX said:


Ok, that makes sense, thank you. I'll probably stick with a single card, as I think with the PCI-E slots saved I might be able to get a 2nd GPU in the system. My networking is 1 gigabit to everything, but I'm looking at 2.5Gb options currently. I'm using Unraid, and currently primarily use it as a media server with occasional game hosting. I want to be able to run VMs well. With my previous 500GB SSD cache, between VMs and multiple 4K movie downloads the cache would max out, then fail to empty fast enough, causing everything to come to a standstill. I know I'm splurging on certain things.


7 hours ago, suchamoneypit said:


I am not as familiar with Unraid, but a cache for media work is really pointless; what is not pointless is running VMs directly on SSDs. I assume Unraid has a way to use SSDs for installing your VMs and containers; assuming that's true, do that.
 

Random reads and writes destroy hard drive performance, so VMs living on them sucks. But Plex server duty is really not demanding on hard drives, especially since basically nothing you do ever saturates a drive's read or write speed. So putting the VMs and the Plex database itself (which lives within the Plex VM or Docker container) on an SSD will help, but a cache won't do much.


2 hours ago, LIGISTX said:


I do understand that for the media itself it's certainly pointless, but yes, VMs will be run. Perhaps even a high-performance gaming PC VM, so having all that data on high-speed SSDs is desired. Also, data like the Plex app itself and all other Dockers like game servers will be saved on the SSDs. I can also lazily do things like set aside a 200GB SSD chunk for a VM. For media downloads I'll have them hit the SSD cache first, and they later transfer to the HDD array. I can send 400GB-1TB of downloads at once, and sometimes it takes several days for this data to make its way over to the HDD array, so having a buffer large enough to not have to care about that is nice. If I have something like a large, slow torrent or download in general, a lot of space can get tied up for a few days while it finishes, especially if that space gets pre-allocated. Everything can hit the cache at first, then certain data like media gets moved to the HDD array and stays there, but my VMs and Dockers can all stay on high-speed SSD. I could likely get away with 2TB of SSD without problems, but the SSDs were on sale and this is something I've wanted to splurge on for a while, so I went all in and got 3 drives for 6TB of SSD to match my 6TB hard drives.

 

A short description of Unraid: you create shares, which are like folders of data, and you can choose how data is handled per share. You can have a share use only the array (typically hard drives), use only the cache (typically SSDs), or use the cache initially and then transfer to the array once the data is not in use. So for my media shares I can have data hit the cache then transfer to the array; for my VM and server data, it can be cache-only. Previously (when I was low on SSD space) I had media set to only use the array, as large downloads tended to fill my SSD to capacity and then cause my VMs and servers to slow down.
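To illustrate, the per-share modes described could look something like this (a hypothetical sketch; the labels loosely follow the Unraid web UI and vary by version):

```text
Share: media    -> Use cache: Yes   (writes land on SSD cache, mover migrates to array)
Share: vms      -> Use cache: Only  (stays on the SSD pool permanently)
Share: backups  -> Use cache: No    (written straight to the HDD array)
```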


2 hours ago, suchamoneypit said:


I wouldn’t have any of the media stuff hit the cache at all; it just doesn’t need to, and it takes up useful SSD space.

 

ZFS uses RAM as a cache, sort of, and I have that fully disabled for my entire media collection… hard drives are plenty fast enough for that sort of workload at the scale we are using them at. Use SSDs for the VMs and containers; there's no reason for the media data to even hit the SSDs at all.

