1PB SSD SAN

You heard correct. I am doing a semi-secret project that includes 1PB of SSD storage. I need to pick out a 1U server that has two PCIe x16 slots: one for a 40Gb dual-port QSFP+ adapter, and one for the secret sauce of the project. I was looking at using a Dell R430 with 64GB of ECC RAM and an E5-2620 v3. Any thoughts? It should be able to handle 80+ Gbps. Those are the only details I can share. (And no, this is not for the NSA.)
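For what it's worth, the raw-bandwidth side of the request checks out on paper. A quick sanity check (assuming a PCIe 3.0 x16 slot for the NIC and standard 128b/130b encoding, since the R430 is a PCIe 3.0 platform) shows a single x16 slot can just about feed both 40Gb ports:

```python
# Back-of-the-envelope bandwidth check for the proposed build.
# Assumed numbers: PCIe 3.0 x16 slot for the NIC, dual 40GbE ports at line rate.

PCIE3_GTPS_PER_LANE = 8.0          # PCIe 3.0: 8 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130    # 128b/130b line encoding
LANES = 16

pcie_gbps = PCIE3_GTPS_PER_LANE * ENCODING_EFFICIENCY * LANES
nic_gbps = 2 * 40  # dual-port 40GbE

print(f"PCIe 3.0 x16 usable bandwidth: ~{pcie_gbps:.0f} Gbps")
print(f"NIC line rate (both ports):     {nic_gbps} Gbps")
print("Slot can feed both ports:", pcie_gbps > nic_gbps)
```

That leaves little headroom once protocol overhead is counted, and it says nothing about whether the CPU can keep up, which is the real question below.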

My native language is C++

Are you sponsored by Bill Gates or something?

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project

Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 

1 minute ago, Enderman said:

Are you sponsored by Bill Gates or something?

Lol no. I might be able to reveal more details later on in the project.

1PB!?!??!
Are you sure that's not a typo?

Ryzen 5 3600 stock | 2x16GB C13 3200MHz (AFR) | GTX 760 (Sold the VII)| ASUS Prime X570-P | 6TB WD Gold (128MB Cache, 2017)

Samsung 850 EVO 240 GB 

138 is a good number.

 

Pick up the phone and get in contact with your local presales for HP, Dell, Lenovo, or IBM. This is quite clearly for work and an expensive project - no offense, but you'd be stupid to go picking around with consumer-grade parts that don't include on-site same-day or next-day replacement.

1 minute ago, Windspeed36 said:

Pick up the phone and get in contact with your local presales for HP, Dell, Lenovo, or IBM. This is quite clearly for work and an expensive project - no offense, but you'd be stupid to go picking around with consumer-grade parts that don't include on-site same-day or next-day replacement.

We plan to produce tons of devices. I just want advice from here to try to remove some of my personal biases.

Damn, that's gonna be a lot of drives.

Samsung's 15TB SSD is the largest one in the world right now, isn't it? If so, you're gonna need about 68 of them.
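The "about 68" estimate holds up. Assuming 15.36TB usable per drive (the marketed "15TB" class) and no redundancy overhead:

```python
import math

# Rough drive count for 1PB of flash, assuming 15.36TB usable per drive
# (the marketed "15TB" class) and ignoring RAID/parity/spare overhead.
target_tb = 1000      # 1PB = 1000TB, decimal, as drives are marketed
drive_tb = 15.36      # assumed usable capacity per drive

drives = math.ceil(target_tb / drive_tb)
print(f"Drives needed: {drives}")  # 66 raw; add parity and spares and ~68 fits
```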

Specs: CPU - Intel i7 8700K @ 5GHz | GPU - Gigabyte GTX 970 G1 Gaming | Motherboard - ASUS Strix Z370-G WIFI AC | RAM - XPG Gammix DDR4-3000MHz 32GB (2x16GB) | Main Drive - Samsung 850 Evo 500GB M.2 | Other Drives - 7TB/3 Drives | CPU Cooler - Corsair H100i Pro | Case - Fractal Design Define C Mini TG | Power Supply - EVGA G3 850W

1 minute ago, TheKDub said:

Damn, that's gonna be a lot of drives.

Samsung's 15TB SSD is the largest one in the world right now, isn't it? If so, you're gonna need about 68 of them.

The company I work with actually beat that record years ago. I just need to take care of the processing part of things.

17 minutes ago, Kyle Manning said:

You heard correct. I am doing a semi-secret project that includes 1PB of SSD storage. I need to pick out a 1U server that has two PCIe x16 slots: one for a 40Gb dual-port QSFP+ adapter, and one for the secret sauce of the project. I was looking at using a Dell R430 with 64GB of ECC RAM and an E5-2620 v3. Any thoughts? It should be able to handle 80+ Gbps. Those are the only details I can share. (And no, this is not for the NSA.)

Hope the networking side of things is going to have RDMA support; otherwise, E5-2620 v3 CPUs aren't going to be enough to push that amount of throughput or that many IOPS.

 

The Commvault servers at work are all 2x E5-2667 with 64GB RAM and two dual-port 10Gb NICs (4 ports total). We have 7 of these and push them very hard, and that's with client-side dedup turned on. Granted, Commvault's usage profile is not that close to what you are going to be doing, but hopefully it's useful information.

 

A better example would be our NetApp 8060 filers. Each node has dual E5-2658s with 64GB RAM, there are 4 nodes per C-Mode cluster, and we have 3 of these.

32 minutes ago, Windspeed36 said:

Pick up the phone and get in contact with your local presales for HP, Dell, Lenovo or IBM. This is quite clearly for work and an expensive project - no offense but you'd be stupid to go start picking around with consumer grade parts that don't include onsite same day or next day replacement.

Says he was born in 2001.

You're fucking 15...

I am at a loss for words.

2 minutes ago, FirstArmada said:

Says he was born in 2001.

You're fucking 15...

I am at a loss for words.

I don't sit around smoking weed...

1 minute ago, Kyle Manning said:

I don't sit around smoking weed...

I see my facade was great. I'm actually 14 and was never arrested. :)

 

Clearly the dox info you got wasn't very reliable.

2 minutes ago, FirstArmada said:

Says he was born in 2001.

You're fucking 15...

I am at a loss for words.

Dude, does anyone really put their age on the internet if they wish to be anonymous? Hell, I could be a wizard with a cat.

Steve Wozniak - "Never trust a computer you can't throw out a window."                                                                                                                                               Carl Sagan - "If you want to make an apple pie from scratch, you must first create the universe."

 

CPU: Core i5 6600K Cooling: NH-D14 Motherboard: GA-Z170XP-SLI RAM: 8GB Patriot Graphics: Sapphire Nitro R9 380 4G Case: Phanteks Enthoo Pro HDD: 2TB Seagate Barracuda PSU: Thermaltake Smart 750W

My computer runs on MSX. It's very hard to catch.

Just now, ozziestig said:

Dude, does anyone really put their age on the internet if they wish to be anonymous? Hell, I could be a wizard with a cat.

I know the kid; I worked with him last year.

1 minute ago, ozziestig said:

Dude, does anyone really put their age on the internet if they wish to be anonymous? Hell, I could be a wizard with a cat.

Maybe I am a wizard with a cat... Anyway, this is getting off topic.

I agree with the prior post that even if this is for R&D, picking out hardware based on a forum is silly. Definitely talk to some presales engineers.

 

EDIT: I mean picking out hardware with this type of system requirements.

 

EDIT2: I am very interested in seeing more details about this project as you are allowed to share them.

Looking to buy GTX690, other multi-GPU cards, or single-slot graphics cards: 

 

2 hours ago, Kyle Manning said:

You heard correct. I am doing a semi-secret project that includes 1PB of SSD storage. I need to pick out a 1U server that has two PCIe x16 slots: one for a 40Gb dual-port QSFP+ adapter, and one for the secret sauce of the project. I was looking at using a Dell R430 with 64GB of ECC RAM and an E5-2620 v3. Any thoughts? It should be able to handle 80+ Gbps. Those are the only details I can share. (And no, this is not for the NSA.)

 

I read that as 1PB of SSD storage in a 1U case.

As others have said, assuming 15TB SSDs, you are not going to fit ~67 of them into a 1U rack mount.

 

Also, you should be looking at SANs - not NAS, not DAS, not internal storage.

HPE 3PAR StoreServ - have used it, can confirm it's very good.

The Dell Storage PS6610 Series arrays look like they will also fit the bill.

 

Also keep in mind that if you're storing 1PB of data, you're going to need DR, as there is no way you're going to be able to back that system up using traditional means (tape).
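The backup-window point is easy to quantify. Even at the full 80Gbps the OP is targeting, moving 1PB takes more than a day (a rough estimate that ignores protocol overhead and assumes a sustained line rate):

```python
# How long moving 1PB takes at various sustained line rates.
# Rough estimate: ignores protocol overhead and assumes the link never idles.

PETABYTE_BITS = 1000**5 * 8  # 1PB (decimal) in bits

for gbps in (10, 40, 80):
    seconds = PETABYTE_BITS / (gbps * 1e9)
    print(f"{gbps:3d} Gbps: {seconds / 3600:6.1f} hours")
```

At 80Gbps that works out to roughly 28 hours per full copy, which is why replication-based DR beats traditional full backups at this scale.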

The SAN will be kept busy calculating checksums and splitting the data across the huge drive array. Do you have RAID cards for the heavy lifting? Otherwise you need a lot of CPU power. As the project is already super expensive, I would go with the E5-2687W v3 for the prototype. Then you can measure the CPU load and get a cheaper one for mass production.

Mineral oil and 40 kg aluminium heat sinks are a perfect combination: 73 cores and a Titan X, Twenty Thousand Leagues Under the Oil

6 minutes ago, Blake said:

 

I read that as 1PB of SSD storage in a 1U case.

As others have said, assuming 15TB SSDs, you are not going to fit ~67 of them into a 1U rack mount.

 

Also, you should be looking at SANs - not NAS, not DAS, not internal storage.

HPE 3PAR StoreServ - have used it, can confirm it's very good.

The Dell Storage PS6610 Series arrays look like they will also fit the bill.

 

Also keep in mind that if you're storing 1PB of data, you're going to need DR, as there is no way you're going to be able to back that system up using traditional means (tape).

I read the first post as meaning this system is just the entry point (one of several, ideally) to the SAN, seeing as he has a PCIe x16 card as the "secret sauce" in his requirements. But the rest of your points are valid in either case.

Wrong forum, servethehome would be better, but even then it's still not quite right.

Comb it with a brick

I agree with some of the others who already said to contact HP/Dell/IBM/Cisco (I recommend Cisco) to get a quote for a server that meets your needs. No offense to the forums, but for such a high-level project I would rather get suggestions and answers from somebody who can be held accountable if the hardware doesn't work out.

-KuJoe

20 hours ago, Blake said:

 

I read that as 1PB of SSD storage in a 1U case.

As others have said, assuming 15TB SSDs, you are not going to fit ~67 of them into a 1U rack mount.

 

Also, you should be looking at SANs - not NAS, not DAS, not internal storage.

HPE 3PAR StoreServ - have used it, can confirm it's very good.

The Dell Storage PS6610 Series arrays look like they will also fit the bill.

 

Also keep in mind that if you're storing 1PB of data, you're going to need DR, as there is no way you're going to be able to back that system up using traditional means (tape).

Lmao, I don't need RAID cards. The company is the one making the SSDs. We've got everything covered. What I was truly wondering is how much CPU power is needed for ZFS and Samba sharing.
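On the ZFS sizing question, CPU is only half the story: RAM is the bigger constraint if dedup is on. A commonly cited rule of thumb (an assumption, not a guarantee - roughly 320 bytes of dedup-table entry per block) makes 64GB look very tight for 1PB:

```python
# Rough ZFS dedup-table (DDT) RAM estimate using the common rule of thumb of
# ~320 bytes of DDT per block. This is an assumption; real usage depends on
# recordsize, dedup ratio, and pool layout.

DDT_BYTES_PER_BLOCK = 320
RECORDSIZE = 128 * 1024   # ZFS default 128K records (assumed)
DATA_BYTES = 1000**5      # 1PB of unique data

blocks = DATA_BYTES / RECORDSIZE
ddt_gb = blocks * DDT_BYTES_PER_BLOCK / 1e9
print(f"Estimated dedup table size: ~{ddt_gb / 1000:.1f} TB of RAM")
```

In other words, with dedup on you would need terabytes of RAM just for the dedup table, so at 64GB you would want dedup off - and even then the old "1GB of RAM per TB of storage" ARC guideline suggests most reads will be going straight to the drives.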

5 minutes ago, Kyle Manning said:

Lmao, I don't need RAID cards. The company is the one making the SSDs. We've got everything covered. What I was truly wondering is how much CPU power is needed for ZFS and Samba sharing.

Whoa, whoa, whoa - ZFS with 1PB? I'm not sure that's within the bounds of a single-CPU system.

15 minutes ago, Kyle Manning said:

Lmao, I don't need RAID cards. The company is the one making the SSDs. We've got everything covered. What I was truly wondering is how much CPU power is needed for ZFS and Samba sharing.

See my first post about CPU resources etc. I would very much doubt dual E5-2620 v3 CPUs would be enough to handle the load you want to put on them, unless there is a very significant amount of offloading to the NICs or that special-sauce PCIe device you mentioned. Even then, Samba isn't exactly efficient in its CPU usage, and ZFS with dedup would monster the CPUs.

 

During backups, our NetApp filers, with a total of 8x E5-2658 in each C-Mode cluster, sit around 60-80% CPU usage with a total throughput of around 20Gbps+. These filers serve iSCSI LUNs to SQL/Exchange servers, NFS mounts to VMware, and SMB shares.
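Scaling those filer numbers down to a single E5-2620 v3 makes the same point numerically. This is a naive linear extrapolation, purely illustrative - real per-socket throughput depends heavily on workload and offloads:

```python
# Crude scaling estimate from the NetApp numbers above. Assumes linear scaling
# with socket count, which is optimistic; real-world scaling is worse.

cluster_gbps = 20    # observed cluster throughput during backups
sockets = 8          # E5-2658 sockets per C-Mode cluster
utilization = 0.7    # midpoint of the observed 60-80% CPU usage

gbps_per_cpu_at_full_load = cluster_gbps / sockets / utilization
print(f"~{gbps_per_cpu_at_full_load:.1f} Gbps per socket at full load")
print(f"Sockets needed for 80 Gbps: ~{80 / gbps_per_cpu_at_full_load:.0f}")
```

Even granting the estimate a wide margin of error, a single mid-range socket is an order of magnitude short of 80Gbps without serious offloading.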
