
Hello everyone!

I have a couple of servers running TrueNAS SCALE that I wanted to merge into the main one, leaving me one free to tinker with Kubernetes and Talos. This main server is built in a Fractal Design Define 7 XL, which can hold up to 20 disks in its storage configuration.

The thing is that I built this server during the pandemic, and the hardware choices weren't the best (mainly due to availability in my area, Spain). So I have an AMD 3400G with 32 GB of RAM on a B450 motherboard. I have a total of 6 disks installed, plus an SSD on a USB adapter to boot the OS from.
I was looking into an HBA in order to migrate the 4 disks from the other server, but I realized that my motherboard only has one operational x16 PCIe slot, which runs at x8 because of the iGPU, and it is currently populated with a P200 that I use for transcoding. So, what would be a good path for upgrading?

I am considering the following options:

  • Build the server around another platform with more PCIe lanes and slots
  • Upgrade the motherboard, if possible, to one that would allow me to add an HBA (I couldn't find one)

I would love to retain as much hardware as possible (the RAM, for example) and keep the TDP low (I'm at 65 W for the CPU right now).

I'd really appreciate your advice on this one! Thanks 😄


22 minutes ago, sendel said:

I am considering the following options:

  • Build the server around another platform with more PCIe lanes and slots
  • Upgrade the motherboard, if possible, to one that would allow me to add an HBA (I couldn't find one)

I would love to retain as much hardware as possible (the RAM, for example) and keep the TDP low (I'm at 65 W for the CPU right now).

I'd really appreciate your advice on this one! Thanks 😄

You can look into PCIe bifurcation cards, although you need to make sure your motherboard supports it.

Here's an example:

[image: example PCIe bifurcation riser card]



Hi! Thanks for the quick responses!

18 minutes ago, ki8aras said:

You can look into PCIe bifurcation cards, although you need to make sure your motherboard supports it.

Here's an example:

[image: example PCIe bifurcation riser card]

I think this would not be an option, because my PCIe x16 slot is already limited to x8 by the iGPU, and I need x8 for the HBA and x8 for the GPU (the P200).
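For context, a rough sketch of the lane budget involved (a minimal sketch; the per-platform numbers are typical for a 3400G APU on B450 and are assumptions, not board-specific specs):

```python
# Back-of-the-envelope lane budget for the current setup.
# Assumption: a 3400G (Picasso) APU exposes roughly 8 usable PCIe 3.0
# lanes to the x16 slot, plus x4 for an M.2 drive and x4 to the chipset.
slot_lanes = 8                        # the one x16 slot, electrically x8

cards = {
    "P200 (transcoding)": 8,          # happy at x8, also works narrower
    "SAS HBA (PCIe 3.0 x8 card)": 8,  # will negotiate down to x4 if needed
}

print(f"Lanes available on the slot: {slot_lanes}")
print(f"Lanes the cards would ideally use: {sum(cards.values())}")
# Even with an x8 -> x4/x4 bifurcation riser (if the BIOS supports it),
# each card only gets 4 lanes, hence the "is x4 enough?" question below.
```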

21 minutes ago, Bersella AI said:

Replacing the motherboard would be more cost-efficient, since all you need is an additional PCIe x16 slot. Grab a B550 board such as:

  • ASUS TUF B550M-PLUS
  • Gigabyte B550M-DS3H
  • ASRock B550M-Pro4

I've been looking into these mobos and I think the ASRock would be a good fit:

[image: ASRock B550M Pro4 specifications]


1 minute ago, sendel said:

Well, now that I look closely, wouldn't I hit the same wall? The second x16 slot would only work in x4 mode, right? @Bersella AI

I assume the P200 GPU occupies two slots, so it would not interfere with the HBA card. The latter would also work in x4 mode, since it does not need much bandwidth.


3 minutes ago, sendel said:

The HBA, as far as I can tell, needs at least x8 lanes, but maybe I'm wrong. This is the one I was looking into:

https://es.aliexpress.com/item/1005005030563910.html

The info that it needs 8 lanes I took from here:
[image: HBA listing specifications]

That would not be a problem, since PCIe cards can negotiate links with fewer lanes. For example, TechPowerUp's PCIe scaling reviews of cards like the RTX 4090 and RX 6600 XT show that even GPUs remain workable on narrower links.


37 minutes ago, sendel said:

The HBA, as far as I can tell, needs at least x8 lanes, but maybe I'm wrong. This is the one I was looking into:

https://es.aliexpress.com/item/1005005030563910.html

The info that it needs 8 lanes I took from here:
[image: HBA listing specifications]

That figure is referring to SAS lanes, not PCIe lanes.

SAS is a little bit like a network, where multiple drives can share SAS lanes through an expander (analogous to a network switch). If you connect a bunch of expanders through that HBA, they would all share those eight lanes' worth of bandwidth to the controller, which then has a PCIe Gen3 x8 connection to the rest of your PC. Even if that card is plugged into a PCIe Gen3 x4 slot, that's about 31 gigabits per second between the processor and the drives. You're not going to saturate that, especially not if you're only running a regular Gigabit Ethernet network.
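To put a rough number on that claim (a quick sketch; the 128b/130b factor is standard PCIe Gen3 line coding, the rest is plain arithmetic):

```python
# Back-of-the-envelope: PCIe Gen3 x4 throughput vs. Gigabit Ethernet.
GT_PER_LANE = 8.0      # PCIe Gen3 raw rate, GT/s per lane
ENCODING = 128 / 130   # Gen3 uses 128b/130b line coding
LANES = 4

pcie_gbps = GT_PER_LANE * ENCODING * LANES  # ~31.5 Gbit/s usable
gbe_gbps = 1.0                              # regular Gigabit Ethernet

print(f"PCIe Gen3 x4: ~{pcie_gbps:.1f} Gbit/s")
print(f"Headroom over GbE: ~{pcie_gbps / gbe_gbps:.0f}x")
# Even a stack of spinning SATA drives (~1.5-2 Gbit/s each, sequential)
# won't come close to saturating an x4 link through a single HBA.
```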



Question: are you actually using SAS drives or SATA drives? If SATA, then it's pointless going for a SAS 12G card. You could just get a SAS 6G card and have exactly the same capacity and bandwidth (SATA drives link at 6 Gb/s regardless), and that'll both be cheaper and only require PCIe x4.

On top of that...can you boot the server using a SATA SSD instead of NVMe (or do you have a spare M.2 slot)?

(just read the OP more closely)

You can get an adapter that turns your M.2 slot into a PCIe x4 slot (you can get them which are physically x16 if necessary, or just an open-ended x4). I'd advise getting the ones that use an OcuLink interconnect; they're much easier to deal with. I've done this a few times, works a treat.
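If you go the M.2-to-PCIe route, one way to confirm the HBA actually negotiated an x4 link on Linux is to read the standard PCIe sysfs attributes (a minimal sketch; the device address below is a placeholder you would replace with your card's address from lspci):

```python
# Read the negotiated PCIe link speed/width for a device via sysfs.
from pathlib import Path

DEVICE = "0000:01:00.0"  # hypothetical bus address - check lspci for yours
dev = Path("/sys/bus/pci/devices") / DEVICE

for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")
# Behind an M.2/OcuLink x4 adapter you'd expect current_link_width to
# report 4, even if max_link_width reports 8 for the card itself.
```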


2 hours ago, Needfuldoer said:

That figure is referring to SAS lanes, not PCIe lanes.

SAS is a little bit like a network, where multiple drives can share SAS lanes through an expander (analogous to a network switch). If you connect a bunch of expanders through that HBA, they would all share those eight lanes' worth of bandwidth to the controller, which then has a PCIe Gen3 x8 connection to the rest of your PC. Even if that card is plugged into a PCIe Gen3 x4 slot, that's about 31 gigabits per second between the processor and the drives. You're not going to saturate that, especially not if you're only running a regular Gigabit Ethernet network.

Oh, nice to know! I'll take that into account then.

1 hour ago, digitalscream said:

Question: are you actually using SAS drives or SATA drives? If SATA, then it's pointless going for a SAS 12G card. You could just get a SAS 6G card and have exactly the same capacity and bandwidth (SATA drives link at 6 Gb/s regardless), and that'll both be cheaper and only require PCIe x4.

On top of that...can you boot the server using a SATA SSD instead of NVMe (or do you have a spare M.2 slot)?

(just read the OP more closely)

You can get an adapter that turns your M.2 slot into a PCIe x4 slot (you can get them which are physically x16 if necessary, or just an open-ended x4). I'd advise getting the ones that use an OcuLink interconnect; they're much easier to deal with. I've done this a few times, works a treat.

I am using SATA drives, but the M.2 slot is deactivated if I use all of the onboard SATA ports 😕

