
Looking for input on a 40GB NAS build.

3 minutes ago, 1045666 said:

None over the network. It's strictly for file transfers/storage for my business. The OS on my PCs is Windows 10 Pro, and the server will have TrueNAS, unless I learn of a reason to use something else. My understanding of protocols is nonexistent, so I don't know how to provide more information on that specific question at the moment. I don't have anything in place of the server right now, so I'd be building everything from the ground up.

 

Where are you copying files to and from? 

 

4 minutes ago, 1045666 said:

Wouldn't that require one for every drive? And a motherboard with enough PCIe slots to house them? It looks like the costs grow quickly with NVMe integration vs SSD. I can spring for SSDs over HDDs if the hardware is mostly the same anyway.

 

NVMe drives are SSDs. That's why I'd get one of those used servers with lots of U.2 bays for those drives.

 

4 minutes ago, 1045666 said:

RAID reliability. I heard W10 isn't the most stable with RAID, and I'd like something secure for storing the last 5 years of projects. I know it's not a backup, but it can hopefully be safer than W10 on HDDs. But what you suggest is essentially what I'm trying to do, just in a separate box so I can use TrueNAS with a balance between fast transfers and cost. I feel like SSDs on LSI cards would be comfortable, and a faster CPU will push that over a bit. You're saying that to make use of the SSD array's speed, it'd need a faster CPU?

 

Win10 does RAID fine if you set it up right.

 

 

I'd go HDDs here; I don't think SSDs will provide a speed difference for you, so it's wasted money. It also takes a lot of tweaking to take full advantage of those speeds (ZFS, the file system TrueNAS uses, isn't made for the best performance).

 

Since you don't seem to know much about protocols, I'd just get 10GbE and HDDs, as you're unlikely to see benefits from higher speeds, and it's hard to get full usage out of them.

 

How many TB usable do you need?


26 minutes ago, Electronics Wizardy said:

How many TB usable do you need?

The title says "40GB NAS build"...

OP, if you only need 40GB, get an old IDE hard drive.

Edited by ragnarok0273
tired brain forgot important words



28 minutes ago, Electronics Wizardy said:

Where are you copying files to and from? 

I use my C drive for active projects (3 x 1TB RAID 0 NVMe), then move it over to an HDD for storage, which is the first bottleneck since I'm not already using an SSD there. And I'm constantly sharing files between 2 computers, since I use both simultaneously for work and they do different tasks.

 

28 minutes ago, Electronics Wizardy said:

NVMe drives are SSDs. That's why I'd get one of those used servers with lots of U.2 bays for those drives.

Right, I was writing 'SSD' but meant 2.5" SATA SSDs (870 EVO, for example). There's still the noise factor with server cases, even if they're easier to work with and cheaper than an XL tower. I see the benefits of buying a server with 24 U.2 bays, but I don't have a dedicated machine room for the noise. It seems easier to do a ghetto NAS build in a PC tower. It's a compromise I have to deal with where I work.

 

With HDDs, I still don't want to deal with the noise of high quantities and the eventual failures. I'd prefer the speed of smaller SATA SSDs at lower capacities in RAID 6. It's also cheaper than larger drives, because the 2 extra drives I'm buying for parity are 2 smaller drives rather than 2 bigger ones.
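A rough example of the parity math I mean (drive prices here are placeholders for illustration, not real quotes):

```python
# Compare cost per usable TB for RAID 6 built from small vs. large drives.
# Prices are placeholders only -- swap in real ones before deciding.
def raid6(drive_count, drive_tb, drive_price):
    usable_tb = (drive_count - 2) * drive_tb       # RAID 6 spends two drives on parity
    total = drive_count * drive_price
    return usable_tb, total, total / usable_tb

layouts = {
    "8 x 4TB SATA SSD": (8, 4, 350),   # assumed ~$350 per 4TB SSD
    "5 x 8TB SATA SSD": (5, 8, 700),   # assumed ~$700 per 8TB SSD
}

for name, args in layouts.items():
    usable, total, per_tb = raid6(*args)
    print(f"{name}: {usable} TB usable, ${total} total, ${per_tb:.0f}/usable TB")
```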

 

20TB will last me the rest of the year, then I'll need to expand another 5-10TB for next year. It's just storage, but I want it to be faster than 1 gigabyte per second when I'm offloading and recalling project data. As for protocol knowledge, depending on the scale of the subject, I think I could learn what I need to with enough invested time and dedication.

 

Is there something better than TrueNAS for transfer speed between systems?

 

3 minutes ago, ragnarok0273 said:

The title says "40GB NAS build"...

I meant 40GbE NICs, sorry. Didn't have the terminology down yet. Learning as I go.


54 minutes ago, 1045666 said:

I use my C drive for active projects (3 x 1TB RAID 0 NVMe), then move it over to an HDD for storage, which is the first bottleneck since I'm not already using an SSD there. And I'm constantly sharing files between 2 computers, since I use both simultaneously for work and they do different tasks.

 

Yeah, I'd say an HDD NAS will be plenty. You can still copy at 1 GB/s to an HDD array, and that should be plenty here; much more than that will be a pain to set up.

 

55 minutes ago, 1045666 said:

20TB will last me the rest of the year, then I'll need to expand another 5-10TB for next year. It's just storage, but I want it to be faster than 1 gigabyte per second when I'm offloading and recalling project data. As for protocol knowledge, depending on the scale of the subject, I think I could learn what I need to with enough invested time and dedication.

I'd just go HDDs then; you can get like 60 TB, which should be plenty for a while, and you would still hit the 1 GB/s that you want.

 

 

 

I'd go HDDs here, as I don't think you will need the extra performance, and the gain won't be much since it's a pain to get performance that high.

 

I'd probably just get something like an ASRock Rack X470 board or an LGA 1155 Xeon board, get ECC RAM, and get like 8 x 12TB HDDs in RAID-Z2 (ZFS's RAID 6 equivalent). That should be plenty of performance here, at a reasonable cost.
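Rough math on that layout if you want to sanity-check it (the ~200 MB/s sequential per drive is just an assumed ballpark; real drives and ZFS overhead will vary):

```python
# Back-of-envelope numbers for 8 x 12TB HDDs in RAID-Z2 vs a 10GbE link.
# The per-drive sequential rate is an assumption, not a measured figure.
drives = 8
drive_tb = 12
parity = 2                  # RAID-Z2 gives two-drive redundancy
hdd_seq_mb_s = 200          # assumed sequential MB/s per HDD

usable_tb = (drives - parity) * drive_tb          # ~72 TB raw usable, less after formatting/overhead
array_seq_mb_s = (drives - parity) * hdd_seq_mb_s # streaming speed roughly scales with the data disks
ten_gbe_mb_s = 10_000 / 8                         # 10GbE line rate, ~1250 MB/s before protocol overhead

print(f"usable space  : ~{usable_tb} TB")
print(f"array streams : ~{array_seq_mb_s} MB/s sequential")
print(f"10GbE tops out: ~{ten_gbe_mb_s:.0f} MB/s")
```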


Okay, I've humbled my approach and will stick to HDDs. Maybe in the future I'll look into a more awesome NVMe-based rack server from eBay after changing workspaces, if I feel I even need that after some experience with 1 GB/s.

 

I tried cutting the expenses on the spec list a little more and started wondering if there are 10GbE requirements for the CPU and other components, and ended up learning about RDMA, SMB, LLDP and some other protocols. Can you let me know if this sounds rational, @Electronics Wizardy?

 

NICs/CPU/RAM:

It looks like the Intel X550-T2 doesn't support RDMA, so the CPU will have to work harder, requiring a somewhat decent CPU. And it's Intel DDIO that makes those NICs perform well by taking advantage of RAM, so a high-ish quantity of fast RAM is probably how the Intel NICs will work their best.

 

The Mellanox cards do have RDMA, which could allow the CPU and RAM selection to be more budget friendly? Even though I'm only targeting 1 GB/s speeds, their 40GbE NICs are more available, so I might just grab 3 of them for all the computers. Not expecting to saturate them.

 

The issue seems to be that TrueNAS doesn't support RDMA or RoCE, and RDMA seems to be an InfiniBand-only feature, which also doesn't work with TrueNAS. So using the Mellanox cards in Ethernet mode is probably the same as, or worse than, using the Intel X550-T2 with adequate CPU/RAM? Can the Mellanox cards even get 10GbE speeds with TrueNAS tunables in Ethernet mode? Is it a more reliable route to go with Intel NICs if I want the full 10GbE throughput?

 

Protocols/Setup:

SMB Direct seems like it has a faster connection than NFS, so I may start there for the protocol. But I'm open to changing that if the hardware runs better with something else. I'd also be open to another OS that does support RDMA, though I'm not experienced with command lines. Windows Server actually looks like it'd be great if it weren't so expensive.

 

Does RDMA matter for 10GbE speeds? I was thinking of downgrading the motherboard from the Supermicro X11SPI-TF to the X10SRL-F, and downgrading the CPU from a Xeon Silver to a Xeon E5-2620 V3 ($30 on eBay). I'm not really on a budget, but I'm curious to see how cheap I can be just for fun.

 

And for the array, I learned vdevs are best kept under 10 drives, since going wider starts to impact performance. So I'm going to do 10 x 8TB Exos X HDDs.

 

Update: Chelsio cards seem to be recommended more for TrueNAS than both Intel and Mellanox, so I could go with those.


8 hours ago, 1045666 said:

It looks like the Intel X550-T2 doesn't support RDMA, so the CPU will have to work harder, requiring a somewhat decent CPU. And it's Intel DDIO that makes those NICs perform well by taking advantage of RAM, so a high-ish quantity of fast RAM is probably how the Intel NICs will work their best.

With only 10GbE, those features don't really matter and the CPU can do all the work here, even on a chip that isn't super high end.

 

8 hours ago, 1045666 said:

Does RDMA matter for 10GbE speeds? I was thinking of downgrading the motherboard from the Supermicro X11SPI-TF to the X10SRL-F, and downgrading the CPU from a Xeon Silver to a Xeon E5-2620 V3 ($30 on eBay). I'm not really on a budget, but I'm curious to see how cheap I can be just for fun.

 

I'd say no. I have a system with 10GbE running Debian and E5-2680 v2 chips, and it easily fills 10GbE.
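If you want to check that for yourself once it's built, a crude single-stream test like this (just a stand-in for iperf3; the port number is arbitrary) will show whether the CPU can push the link without RDMA:

```python
# Minimal single-stream TCP throughput test, roughly what iperf3 does.
# Run "python netbench.py server" on the NAS and
# "python netbench.py client <nas-ip>" on a workstation.
import socket
import sys
import time

PORT = 5201              # arbitrary test port (assumption)
CHUNK = 1 << 20          # 1 MiB per send/recv
SECONDS = 10             # how long the client transmits

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        elapsed = time.time() - start
        print(f"{addr[0]}: {total / 1e9:.2f} GB in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.time() + SECONDS
        while time.time() < deadline:
            conn.sendall(payload)

if __name__ == "__main__":
    if len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()
```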

 

8 hours ago, 1045666 said:

SMB Direct seems like it has a faster connection than NFS, so I may start there for the protocol. But I'm open to changing that if the hardware runs better with something else.

What OS are your clients running? If you're running Windows I'd go SMB; go NFS for Linux.

 

SMB Direct also isn't supported in normal Windows 10 Home or Pro, so you need Server or Pro for Workstations.

 

 


18 hours ago, Electronics Wizardy said:

What OS are your clients running? If you're running Windows I'd go SMB; go NFS for Linux.

 

SMB Direct also isn't supported in normal Windows 10 Home or Pro, so you need Server or Pro for Workstations.

Both clients run W10 Pro. I'm a bit confused by your last statement about no support for SMB Direct: you mentioned it's not supported on Home or Pro, but supported on Server or Pro. Are there 2 Pro versions, one consumer and one server? Either way, I was able to find SMB Direct in my W10 under Turn Windows Features On/Off, so I guess it's supported?

 

Other than that, I'm ready to finally move forward on this. My last question is about the Chelsio NICs. There are 3 variants:

-CR

-SO-CR

-LP-CR

The LP-CRs have better prices, so I'll probably grab those. Do you know if any are more/less compatible with TrueNAS / SMB?

 

Thanks a lot for the input.


5 hours ago, 1045666 said:

Both clients run W10 Pro. I'm a bit confused by your last statement about no support for SMB Direct: you mentioned it's not supported on Home or Pro, but supported on Server or Pro. Are there 2 Pro versions, one consumer and one server? Either way, I was able to find SMB Direct in my W10 under Turn Windows Features On/Off, so I guess it's supported?

 

Oops, you need Pro for Workstations I think, but I wouldn't bother with it here; just go 10GbE.

 

5 hours ago, 1045666 said:

Other than that, I'm ready to finally move forward on this. My last question is about the Chelsio NICs. There are 3 variants:

-CR

-SO-CR

-LP-CR

What model numbers?

 

I'm kind of a sucker for Intel and Mellanox NICs: out-of-the-box support in Linux/Windows and I think FreeBSD. Since you're just going 10GbE, it shouldn't really matter, but ZFS isn't known for the best storage performance.


 

On 2/10/2021 at 11:42 AM, Electronics Wizardy said:

What model numbers?

 

I'm looking at these Chelsio cards:

T580-SO-CR

T580-LP-CR, or

T580-CR

 

I'd prefer the Mellanox ones, too. But an article that will no longer load suggested the Chelsio stuff is best supported in FreeNAS, Intel second, and Mellanox is hit or miss. But maybe it boils down to the right hardware and configuration within the OS and protocols.

 

The card itself is 40GbE, but since I'm using HDDs, I'm only expecting 10GbE from them. If I wanted to experiment with throughput later since it'll be there (maybe a second vdev of SATA SSDs), what's the first component that needs to be stronger? CPU? I just bought a pair of E5-2620 V3s with the intention of selling one, but I could grab a dual-socket motherboard if that'll help throughput.

 

What OS would be recommended for the best storage performance?

 

So far, I've just bought fans and CPUs, so everything else is open to change as long as it's within the 2011-3 socket range.


5 hours ago, 1045666 said:

What OS would be recommended for the best storage performance?

Since you're using HDDs it really won't matter.

 

And it also depends on OS config more than the OS itself. ZFS isn't known for the best performance, but it should be fine for an SSD array. And ZFS has nice features.

 

5 hours ago, 1045666 said:

The card itself is 40GbE, but since I'm using HDDs, I'm only expecting 10GbE from them. If I wanted to experiment with throughput later since it'll be there (maybe a second vdev of SATA SSDs), what's the first component that needs to be stronger? CPU? I just bought a pair of E5-2620 V3s with the intention of selling one, but I could grab a dual-socket motherboard if that'll help throughput.

 

I'd just get 10GbE for now. Save the money until you get SSDs.

 

5 hours ago, 1045666 said:

I'd prefer the Mellanox ones, too. But an article that will no longer load suggested the Chelsio stuff is best supported in FreeNAS, Intel second, and Mellanox is hit or miss. But maybe it boils down to the right hardware and configuration within the OS and protocols.

 

Are you stuck using FreeNAS? I'd go Linux here as it's more flexible storage-wise.


But is there an OS that is better for storage performance, generally speaking? Linux?

 

I'm not stuck on FreeNAS since the system isn't built. And the 40GbE NICs are more available than 10GbE, and prices aren't that different, so going 40GbE isn't much more. And it future-proofs my setup for when I'm ready to upgrade.

 

I also haven't bought hard drives yet, so I could still go with SATA SSDs to fill more than 10GbE worth of throughput if I can learn the hardware/OS requirements to do that. The only reason I settled on HDDs was because I wasn't finding enough information on how to make use of more than 10GbE with SATA SSDs. I was getting responses and seeing threads about going straight to NVMe or sticking to HDDs/10GbE, but nothing concrete about a middle ground.
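Here's my rough math on that middle ground, assuming ~500 MB/s sustained per SATA SSD (a ballpark for the interface, not a measured figure), and ignoring ZFS, protocol, and parity overhead:

```python
# How many SATA SSDs reading in parallel it would take to fill 10GbE and 40GbE,
# purely at the interface level -- real arrays will need more due to overhead.
import math

sata_ssd_mb_s = 500                                   # assumed sustained sequential rate per SSD
links = {"10GbE": 10_000 / 8, "40GbE": 40_000 / 8}    # line rate in MB/s

for name, link_mb_s in links.items():
    needed = math.ceil(link_mb_s / sata_ssd_mb_s)
    print(f"{name} (~{link_mb_s:.0f} MB/s): roughly {needed} SSDs streaming in parallel")
```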

 

I think I've got an understanding now from this thread of how to get 10GbE, and have started buying the components. I just want to learn what's required to take a step further than HDDs/10GbE without U.2 NVMe rack servers.

 

If I wanted to get really serious answers, what type of person would I have to hire for a consult?


1 hour ago, 1045666 said:

But is there an OS that is better for storage performance, generally speaking? Linux?

Generally I'd say Linux is better, but it depends on your exact usage.

 

1 hour ago, 1045666 said:

I'm not stuck on FreeNAS since the system isn't built. And the 40GbE NICs are more available than 10GbE, and prices aren't that different, so going 40GbE isn't much more. And it future-proofs my setup for when I'm ready to upgrade.

I'd go 10GbE, as switches are much easier to get, and that makes network setup easier. Also, 40GbE is EOL, so switches won't be a thing much longer, it seems.

 

1 hour ago, 1045666 said:

I also haven't bought hard drives yet, so I could still go with SATA SSDs to fill more than 10GbE worth of throughput if I can learn the hardware/OS requirements to do that. The only reason I settled on HDDs was because I wasn't finding enough information on how to make use of more than 10GbE with SATA SSDs. I was getting responses and seeing threads about going straight to NVMe or sticking to HDDs/10GbE, but nothing concrete about a middle ground.

 

I'd just go NVMe, as pricing is about the same, so there's no reason not to. Get a U.2 adapter or an M.2 to PCIe adapter. But SATA drives will work fine.

 

1 hour ago, 1045666 said:

If I wanted to get really serious answers, what type of person would I have to hire for a consult?

There are tons of IT consultants out there that specialize in storage. But most want to use premade appliances (like Dell EMC/HPE) and normally cost hundreds an hour. Probably not practical here, though.

