PCIe lanes calculator

Hello, been a while since posting.

Trying to figure out total PCI Express lanes for a triple-use PC I'm currently building.

 

Waiting for the release of the Threadripper 2920X, assuming 64 PCIe lanes and 12 cores.

 

My setup will have 2x Samsung 960 Pro NVMe 512GB M.2 OS drives = 8 lanes

 

4x WD Red 3TB HDDs in RAID 10 = 16 lanes

 

1x Samsung Evo 512GB SSD for RAID cache = 4 lanes

 

GTX 1080 Ti, but might change depending on GTX 2080 release specs = 16 lanes

 

GTX 1070 for the missus = 16 lanes

 

58 lanes total, plus 4 for the USB controller, sound and other things. Does that sound right? I'm confused and want to do it right. If the onboard controller can't handle it, I'd opt for a 4x PCIe RAID controller and run the RAID through the PCIe lanes. Any help would be much appreciated, as I'm not going ahead until I figure it out.
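For what it's worth, here is a quick sketch of how I'm tallying it, using the per-device lane counts I assumed above (the RAID drive count in particular is a guess, which is what I'm asking about):

```python
# Rough tally of CPU PCIe lanes, using the per-device counts assumed above.
devices = {
    "2x Samsung 960 Pro NVMe (x4 each)": 8,
    "4x WD Red 3TB HDDs in RAID 10": 16,  # my assumption; see the replies below
    "Samsung Evo SSD for RAID cache": 4,
    "GTX 1080 Ti": 16,
    "GTX 1070 (missus)": 16,
}
overhead = 4  # USB controller, sound and other onboard devices

device_total = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name}: {lanes} lanes")
print(f"Device total: {device_total} lanes")
print(f"With ~{overhead} lanes of overhead: {device_total + overhead} lanes")
```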

 

 


36 minutes ago, lynxeffect92 said:

4x WD Red 3TB HDDs in RAID 10 = 16 lanes

1. WHY?! What do you need that for?
2. Do you have a 12 TiB backup? Because that is a kamikaze array that is done for when something goes wrong, like a cable breaking; then all your data is gone!

3. Why do you think that's 16 lanes? Do you have a PCIe RAID controller with 16 lanes?

Because I don't know of any that exist. The ones I know of are x8 at most.

"Hell is full of good meanings, but Heaven is full of good works"


Because I've had 2 HDDs fail in the past and lost our wedding photos and baby photos. I wanted a RAID 10 for redundancy, so if a drive fails I can put a new one in and all is well. I don't know if it uses 16 lanes; that's what I'm trying to figure out. It could be 4x, being SATA; I just didn't know if it would require that many lanes.

 

The system will run 2 VMs.

My missus' side: 6 cores
10GB RAM
A 512GB Samsung 960 Evo M.2 for OS and programs
And a GTX 1070 GPU

 

My side will be: 6 cores
16GB RAM
A 512GB Samsung 960 Evo for OS
And a GTX 1080 Ti or new-gen GPU

 

Plus the RAID array for both sides: 4x WD Red NAS 3TB HDDs with a Samsung 960 SSD as cache, for storage of photos, movies and misc stuff.

6GB of RAM for unRAID itself, as unRAID utilizes RAM for the configuration of VMs and for itself. The memory is also ECC.
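As a rough sanity check, the split between the two VMs and unRAID itself adds up like this (a sketch; the 12-core count is assumed from the 2920X and the 32GB total is just the sum of the figures above):

```python
# Core and RAM allocation across the two VMs plus the unRAID host itself.
cpu_cores = 12  # assumed Threadripper 2920X core count
allocations = {
    "missus' VM": {"cores": 6, "ram_gb": 10},
    "my VM":      {"cores": 6, "ram_gb": 16},
    "unRAID":     {"cores": 0, "ram_gb": 6},  # RAM unRAID reserves for itself and VM config
}

total_cores = sum(a["cores"] for a in allocations.values())
total_ram = sum(a["ram_gb"] for a in allocations.values())
print(f"Cores allocated: {total_cores} of {cpu_cores}")
print(f"ECC RAM needed:  {total_ram} GB")
```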

 


Yes, I could build 2 PCs and an external RAID and call it a day, or be a bit unique and have it all set up in 1 case and system. Cost in total at this stage to achieve it is $6.5k with monitors.

So $3k a system plus $500 for the RAID array isn't that bad, if you ask me.


Your PCIe lane count is off. You listed 58; the actual count from your numbers is 60 (8 + 16 + 4 + 16 + 16).

 

The WD Reds use a SATA interface, not a PCIe interface. SATA 3 is roughly equal to one PCIe 2.0 lane in speed: about 600 MB/s vs 500 MB/s respectively, with a PCIe 3.0 lane being equivalent to about 1 GB/s. It's also somewhat irrelevant, since the peak and sustained speeds for those drives won't exceed about 250 MB/s. So what I'm saying is they will not have much impact on the PCIe lane count. If you connect them through a PCIe RAID or HBA card, then all you'd need is a card with a 4x PCIe 2.0 or 3.0 interface and you're good to go. Or alternatively you could connect them through the chipset on the motherboard, which I believe for Threadripper has a 4x PCIe 3.0 connection to the CPU.
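To put rough numbers on that (the per-lane figures are approximate, and the ~250 MB/s ceiling for a WD Red is my estimate):

```python
# Approximate link bandwidth in MB/s versus what four WD Red HDDs can actually push.
sata3_mb_s       = 600   # SATA 3 (6 Gb/s) is roughly 600 MB/s
pcie2_lane_mb_s  = 500   # one PCIe 2.0 lane is roughly 500 MB/s
pcie3_lane_mb_s  = 985   # one PCIe 3.0 lane is roughly 1 GB/s
wd_red_peak_mb_s = 250   # generous peak/sustained estimate for a single WD Red

hba_lanes = 4  # a x4 PCIe 3.0 HBA or RAID card
hba_bandwidth = hba_lanes * pcie3_lane_mb_s
drives_total = 4 * wd_red_peak_mb_s

print(f"SATA 3 link:        ~{sata3_mb_s} MB/s")
print(f"PCIe 2.0 lane:      ~{pcie2_lane_mb_s} MB/s")
print(f"x{hba_lanes} PCIe 3.0 link:   ~{hba_bandwidth} MB/s")
print(f"4x WD Red combined: ~{drives_total} MB/s")
print("Headroom to spare:", hba_bandwidth > drives_total)
```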

 

So if you do it this way:

2x Samsung 960 Pro NVMe 512GB M.2 OS drives = 8 lanes

 

4x WD Red 3TB HDDs in RAID 10 = 4 lanes (some kind of interface card)

 

1x Samsung Evo 512GB SSD for RAID cache = 4 lanes

 

GTX 1080 Ti, but might change depending on GTX 2080 release specs = 16 lanes

 

GTX 1070 for the missus = 16 lanes

 

That gives you 48 lanes, all direct to the CPU. Now, the interface card for the hard drives will probably be an 8x lane card; I believe that's more of the industry standard. But even in that case you still have 8 lanes left that are direct to the CPU, as Threadripper has 4 of its 64 lanes dedicated to the chipset. If you wanted more lanes for something else, you could free up 8 lanes per GPU by running them in 8x mode instead of 16x.
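Worked through as a quick sketch (I've assumed the x4 interface card from the list above; an x8 card leaves 8 lanes spare instead of 12):

```python
# Revised lane budget with the HDDs behind a small HBA instead of 16 direct lanes.
total_lanes   = 64
chipset_lanes = 4   # Threadripper dedicates 4 lanes to the chipset

used = {
    "2x 960 Pro NVMe": 8,
    "HDD interface card": 4,  # assume x4; a more common x8 card would use 8
    "Evo cache SSD": 4,
    "GTX 1080 Ti": 16,
    "GTX 1070": 16,
}

available = total_lanes - chipset_lanes
remaining = available - sum(used.values())
print(f"Used {sum(used.values())} of {available} CPU lanes, {remaining} left over")
# Dropping both GPUs to x8 would free another 16 lanes if they're ever needed.
```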


4 hours ago, Master Sonic Wisebeard said:

Snip

Remember, anything connected to the 16x/8x PCIe slots, the RAID card (I think) and the GPUs, will use PCIe lanes from the CPU. Anything using motherboard connectors, including NVMe drives and PCIe 4x slots, will use HSIO lanes from the chipset. Realistically he is using around 36-40 lanes from the CPU.
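Something like this is how I read the 36-40 figure, assuming the NVMe drives sit on chipset-fed M.2 slots and only the GPUs and the RAID/HBA card come off the CPU (the card's width is the unknown):

```python
# Rough estimate of CPU-attached lanes: both GPUs plus whatever the RAID/HBA card uses,
# with the NVMe drives and x4 slots fed by the chipset's HSIO lanes instead.
gpus = 16 + 16

low  = gpus + 4   # RAID/HBA card in a x4 slot
high = gpus + 8   # RAID/HBA card in a x8 slot
print(f"CPU lanes in use: roughly {low}-{high}")
```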


2 hours ago, Matsozetex said:

Remember, anything connected to the 16x/8x PCIe slots, the RAID card (I think) and the GPUs, will use PCIe lanes from the CPU. Anything using motherboard connectors, including NVMe drives and PCIe 4x slots, will use HSIO lanes from the chipset. Realistically he is using around 36-40 lanes from the CPU.

Why should they do that?
The CPU has 64 lanes (probably minus 4 for the chipset).

 

There is no reason not to connect it all to the CPU and leave the chipset lanes alone...

 

Because I've had 2 HDDs fail in the past and lost our wedding photos and baby photos. I wanted a RAID 10 for redundancy, so if a drive fails I can put a new one in and all is well. I don't know if it uses 16 lanes; that's what I'm trying to figure out. It could be 4x, being SATA; I just didn't know if it would require that many lanes.

1st: I think you are making a mistake here and should think about a backup solution first! Because a RAID array is NOT a replacement for a BACKUP!
It's something that lets you keep going long enough to make a backup when one drive fails, but you must never see it as anything more than that!

It's like duct tape, not a real solution for your problem.


The real solution would be a NAS with automatic backups to external storage that you keep in a different place, in case your house burns down.

 

2nd:

Because of the power saving mechanisms, I wouldn't recommend a file server and a workstation in one machine; strictly separate them!

So build a NAS or SAN or whatever you want to call the other machine, with the storage, hard drives and an operating system.

 

With the way it is right now, graphics cards and CPUs cause nasty transients which in the long run might negatively impact the lifespan of the HDDs. Thus I'd recommend going with two different machines.

 

And you should read up on RAID arrays!

Because RAID 0 is bullshit and RAID 0+1 is a small fix for the bullshit. And you don't need an external RAID adapter for that; either OS RAID (recommended) or the hardware-assisted RAID in the chipset is more than enough for such a thing...

 

If you want to buy a RAID HBA, why don't you use RAID 5 or even RAID 6?

 

With RAID 5 you can lose one drive and the array still works; the minimum array is 3 drives and you lose the capacity of one drive.

With RAID 6 you can lose two drives.
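For the 4x 3TB drives being discussed, the trade-offs work out roughly like this (a sketch; real usable capacity is a little lower after formatting):

```python
# Usable capacity and failure tolerance for 4x 3TB drives under different RAID levels.
drives, size_tb = 4, 3

layouts = {
    "RAID 10": {"usable_tb": drives * size_tb // 2, "survives": "1 drive (2 if in different mirrors)"},
    "RAID 5":  {"usable_tb": (drives - 1) * size_tb, "survives": "1 drive"},
    "RAID 6":  {"usable_tb": (drives - 2) * size_tb, "survives": "2 drives"},
}

for name, info in layouts.items():
    print(f"{name}: ~{info['usable_tb']} TB usable, survives losing {info['survives']}")
```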

 

And when a drive fails, do NOT rebuild the RAID; make a backup and replace the drives!

 

Because I've heard many times of other drives dying while rebuilding a RAID...

"Hell is full of good meanings, but Heaven is full of good works"


  • 2 weeks later...
On 13/08/2018 at 9:07 PM, Stefan Payne said:

1st: I think you are making a mistake here and should think about a backup solution first! Because a RAID array is NOT a replacement for a BACKUP!

Agreed!

 

I got hit with this myself. My NAS was using a RAID 5 array and I did not have a backup of the data. When I started having access problems I didn't think anything of it and ignored it, because after a while it seemed to resolve itself. Fast forward a couple of months and the motherboard died. The long and short of it was that I lost about 10TB of data. That hurt.

 

On 13/08/2018 at 9:07 PM, Stefan Payne said:

 

If you want to buy a RAID HBA, why don't you use RAID 5 or even RAID 6?

Yeah, if you don't have your heart set on RAID 10 then I would suggest using RAID 5. But if you're now paranoid about data loss because you've had failures, like I am, then throw in another drive and make a RAID 6 for the dual-disk redundancy. But again, it won't mean a lot unless you have a backup.

