1PB SSD SAN

I think you may need to invest in dual E5-2699s to handle that data throughput, especially if you are using ZFS.


14 minutes ago, alex75871 said:

I think you may need to invest in dual E5-2699s to handle that data throughput, especially if you are using ZFS.

Dunno if I'd jump that high up without testing. Dual E5-2660 v3s might be able to handle it, depending on the workload profile etc.

 

The other interesting thing I'd like to test is whether you need higher clock speeds or more threads; I'm not exactly a big ZFS or Samba user myself. If it's a self-built file server, for me it's Windows, otherwise it's storage vendor equipment from NetApp, IBM, HP, etc.

 

@Kyle Manning Do you have any way of testing this before purchasing more equipment, just to baseline what you have against the performance you need and get some insight into what would actually be required?
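Even something crude helps: time big sequential reads off whatever pool you have now and see what the box actually does before spending money. Rough sketch in Python (the /tank path is just a placeholder, and this is no substitute for a proper fio run):

import time

CHUNK = 16 * 1024 * 1024      # read in 16 MiB chunks
PATH = "/tank/bigtestfile"    # placeholder: any large file on the existing pool

def baseline_read(path, chunk=CHUNK):
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:  # skip Python's own buffering
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    elapsed = time.monotonic() - start
    print(f"{total / elapsed / 1e9:.2f} GB/s over {total / 1e9:.1f} GB")

baseline_read(PATH)

Watch out for the ARC caching the file and inflating the number; use a file bigger than RAM or you're just benchmarking memory.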


11 minutes ago, Kyle Manning said:

Lmao, I don't need no RAID cards. The company is the one making the SSDs. We've got everything covered. What I was truly wondering is how much CPU power should be used for ZFS and Samba sharing.

So... you want a single server with 1TB of RAM, following the 1GB of RAM per TB of storage rule, right? That is very doable. But yeah, when your zpool needs to be scrubbed... say goodbye to it for what, a week? A month?
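Rough back-of-the-envelope, and the scrub rate here is purely a guess since it depends entirely on the pool layout and load:

POOL_TB = 1000           # ~1PB usable
RAM_PER_TB_GB = 1        # the old 1GB of RAM per TB of storage rule of thumb
SCRUB_RATE_GBPS = 1.0    # assumed scrub throughput, purely a guess

ram_gb = POOL_TB * RAM_PER_TB_GB
scrub_days = POOL_TB * 1000 / SCRUB_RATE_GBPS / 86400

print(f"RAM by the rule of thumb: {ram_gb} GB")
print(f"Scrub of a full pool at {SCRUB_RATE_GBPS} GB/s: ~{scrub_days:.0f} days")

So even at a steady 1 GB/s you're looking at nearly two weeks per scrub.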

 

This is why you use a SAN: all it needs to do is parity checks on the fly (with an ASIC, which will handle it fine, hence the RAID card suggestion) and serve packets over iSCSI (or however you want to connect). You'll be limited by the disks you can physically insert into the SAN, and normally you'd have two (or more) controllers internally mirrored, so if a controller dies it'll happily plod along on the other until you can swap the faulty one out.
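If anyone's wondering what "parity checks on the fly" boils down to, it's basically XOR across the data disks, which is exactly the kind of thing an ASIC chews through without breaking a sweat. Toy example in Python, single parity only, nothing like a real RAID engine:

from functools import reduce

def parity(blocks):
    """XOR the data blocks together, byte by byte, to produce the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, stripe) for stripe in zip(*blocks))

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
p = parity(data)

# Recover a lost block by XORing the parity with the surviving blocks
recovered = parity([p, data[1], data[2]])
assert recovered == data[0]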

 


On 3/17/2016 at 9:26 PM, ozziestig said:

Dude, does anyone really put their age on the internet if they wish to be anonymous? Hell, I could be a wizard with a cat.

The thing is, I NEVER SAID NAS. This will go into datacenters and be a SAN appliance. I suppose I could shed some light on the secret: the SSDs use the PCIe bus, 16 lanes to be exact. There is no need for a RAID card; everything has to be done in software, as hardware RAID for PCIe is non-existent. I couldn't find any estimator for how much processing power is needed, but I can see that it's going to take more than a single E5-1650 v3.
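For context, here's the raw math, assuming PCIe 3.0 since the generation was never stated:

LANES = 16
GT_PER_LANE = 8.0        # PCIe 3.0 line rate in GT/s per lane (assumption: gen 3)
ENCODING = 128 / 130     # 128b/130b encoding overhead

gbytes = LANES * GT_PER_LANE * ENCODING / 8
print(f"~{gbytes:.1f} GB/s usable per x16 device, before protocol overhead")

Pushing anything near that through ZFS checksumming and software parity is exactly the sort of load that outgrows a single E5-1650 v3.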

My native language is C++


I think I will go with a custom 1U case for the SSDs, then probably use a Dell R730 as the CPU part. It would have a QSFP28 NIC on it too! I suppose this is it for the topic. On a side note, I might be able to wrangle some spare 5TB SSDs for @LinusTech to test out! Just send me a PM.
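Quick sanity check on the link, assuming the QSFP28 port is run as a single 100GbE connection rather than broken out:

LINK_GBITS = 100   # QSFP28 as a single 100GbE port (assumption)
print(f"~{LINK_GBITS / 8:.1f} GB/s on the wire, before TCP/iSCSI/SMB overhead")

So one port is roughly in the same ballpark as a single PCIe 3.0 x16 device.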

My native language is C++


9 hours ago, Kyle Manning said:

I suppose I could shed some light on the secret: the SSDs use the PCIe bus, 16 lanes to be exact.

Using the PCIe bus for SSDs isn't anything new; it's been around for years. Fusion-io started doing this back in 2008-2009, and NVMe has made it ubiquitous to the point that laptops ship with PCIe SSDs.

 

I'd be more interested in the size, cost, and performance at low queue depths than in the interface it uses. Sure, PCIe is going to be tons faster, but only if the disk controller can actually make use of it (NVMe).
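Low queue depth is where latency rules, and Little's law makes the trade-off obvious; the 100 microseconds here is just an example figure, not a claim about any particular drive:

LATENCY_S = 100e-6   # example per-I/O latency of 100 microseconds

for qd in (1, 4, 32):
    iops = qd / LATENCY_S   # Little's law: throughput = concurrency / latency
    print(f"QD{qd}: ~{iops:,.0f} IOPS if latency stayed flat")

In practice latency won't stay flat as you pile on queue depth, but it shows why a drive that looks great at QD32 can feel ordinary at QD1.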

 

We also have some now-unused Fusion-io cards lying around at work. They're too small to use these days, but no one can bring themselves to throw them out because of what they originally cost lol.

