
1.5 Petabyte NAS on a Mining Mainboard possible?

So to start, I just want to say that this is of course a silly and impractical idea, but the point is that I think it could be done.

 

I was browsing eBay recently and I found an ASRock H110 BTC+ mining mainboard with 12 PCIe x1 ports (for only 30€!). That gave me an idea, so I looked up whether there are PCIe x1 to SATA adapters, and sure enough there are some on Amazon (like this one: Amazon-Link). If you were to buy 12 of them and plug them into the mainboard, you would have (together with the ports on the board) a total of 100 SATA ports. Each equipped with a 16TB IronWolf Pro (or similar), that would make a raw capacity of 1.6 Petabytes. Throw in an old Skylake i5 and a cheap GPU for the single PCIe x16 slot and you would have a functional NAS.
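As a quick sanity check, here is the arithmetic as a small Python sketch (assuming 8-port adapter cards and 4 onboard SATA ports, which is what gets you to 100; note that 1.6 PB in decimal units works out to roughly 1.4 PiB in binary units):

ADAPTERS = 12
PORTS_PER_ADAPTER = 8    # assumption: the linked adapter has 8 SATA ports
ONBOARD_SATA = 4         # assumption: 4 SATA ports on the board itself
DRIVE_TB = 16            # 16TB IronWolf Pro, decimal terabytes

total_ports = ADAPTERS * PORTS_PER_ADAPTER + ONBOARD_SATA    # 100
raw_pb = total_ports * DRIVE_TB / 1000                       # 1.6 PB (decimal)
raw_pib = total_ports * DRIVE_TB * 1e12 / 2**50              # ~1.42 PiB (binary)

print(total_ports, "ports,", raw_pb, "PB raw, about", round(raw_pib, 2), "PiB")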

 

There would still be massive problems, like how to power 100 drives, but it's only a thought experiment anyway.

Now I can't wait for you guys to tell me why this would never work.

 

 

 

(Please don't be too hard on my grammar, English is not my first language.)


Work? Yes, probably. Give the performance you'd expect? Not really.

These cards are PCIe 2.0 x1, so about 500MB/s per card (that's roughly the speed you can get out of just 2 IronWolfs, so the other 6 drives on each 8-port card will be bottlenecked).

 

All the PCIe x1 slots connect to the chipset, so you'll be limited to about 4GB/s to the CPU at most (i.e. the same as a single PCIe 3.0 x4 NVMe drive), not counting any overhead for parity, slowdowns from the array layout, or the CPU becoming a bottleneck.

 

That sounds fast, but reading the full 1.6PB at 4GB/s would still take almost 5 days... also, 100 16TB IronWolfs will cost about $40,000. When you're putting that much into the drives, you'd rather spend another $2-3k on being able to access them efficiently.
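For a rough sense of scale, here is that math as a quick Python sketch (assumed figures: ~250 MB/s per IronWolf, ~500 MB/s usable per PCIe 2.0 x1 card, a ~4 GB/s chipset link, and roughly $400 per 16TB drive):

DRIVES = 100
DRIVE_MBPS = 250        # assumed sequential throughput per IronWolf
CARD_MBPS = 500         # assumed usable bandwidth of a PCIe 2.0 x1 8-port card
CHIPSET_GBPS = 4        # chipset-to-CPU link, roughly one PCIe 3.0 x4 NVMe drive
CAPACITY_GB = 1.6e6     # 1.6 PB expressed in GB

per_drive_on_card = CARD_MBPS / 8                            # ~62 MB/s if all 8 drives stream at once
array_gbps = min(DRIVES * DRIVE_MBPS / 1000, CHIPSET_GBPS)   # capped at 4 GB/s by the chipset
full_read_days = CAPACITY_GB / array_gbps / 86400            # ~4.6 days for one full pass
drive_cost_usd = DRIVES * 400                                # ~$40,000 in drives alone

print(per_drive_on_card, array_gbps, round(full_read_days, 1), drive_cost_usd)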

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


I can see there being massive I/O bottlenecks, assuming the system is even stable. I'm not even sure how you'd RAID all of that together. Then there's the power required to spin the drives up: during operation the current draw isn't bad, but initial spin-up is really demanding with that many disks, and without staggered spin-up most normal PSUs would probably hit their over-current protection limits and shut down.
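To put rough numbers on the spin-up problem (a sketch, assuming ~2 A at 12 V per drive during spin-up and ~0.6 A once spinning; the real figures depend on the drive model, so check the datasheet):

DRIVES = 100
SPINUP_A_12V = 2.0       # assumed peak 12 V draw per drive while spinning up
RUNNING_A_12V = 0.6      # assumed 12 V draw per drive once spinning

surge_a = DRIVES * SPINUP_A_12V        # 200 A at 12 V if everything starts at once
running_a = DRIVES * RUNNING_A_12V     # 60 A at 12 V in steady state

print("spin-up:", surge_a, "A (~", surge_a * 12, "W)  running:", running_a, "A (~", running_a * 12, "W)")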


The two above me know 100x more than I do, and the first thing I thought was... there are only so many PCIe lanes available at a time!

Workstation Laptop: Dell Precision 7540, Xeon E-2276M, 32gb DDR4, Quadro T2000 GPU, 4k display

Wife's Rig: ASRock B550m Riptide, Ryzen 5 5600X, Sapphire Nitro+ RX 6700 XT, 16gb (2x8) 3600mhz V-Color Skywalker RAM, ARESGAME AGS 850w PSU, 1tb WD Black SN750, 500gb Crucial m.2, DIYPC MA01-G case

My Rig: ASRock B450m Pro4, Ryzen 5 3600, ARESGAME River 5 CPU cooler, EVGA RTX 2060 KO, 16gb (2x8) 3600mhz TeamGroup T-Force RAM, ARESGAME AGV750w PSU, 1tb WD Black SN750 NVMe Win 10 boot drive, 3tb Hitachi 7200 RPM HDD, Fractal Design Focus G Mini custom painted.  

NVIDIA GeForce RTX 2060 video card benchmark result - AMD Ryzen 5 3600,ASRock B450M Pro4 (3dmark.com)

Daughter 1 Rig: ASrock B450 Pro4, Ryzen 7 1700 @ 4.2ghz all core 1.4vCore, AMD R9 Fury X w/ Swiftech KOMODO waterblock, Custom Loop 2x240mm + 1x120mm radiators in push/pull 16gb (2x8) Patriot Viper CL14 2666mhz RAM, Corsair HX850 PSU, 250gb Samsung 960 EVO NVMe Win 10 boot drive, 500gb Samsung 840 EVO SSD, 512GB TeamGroup MP30 M.2 SATA III SSD, SuperTalent 512gb SATA III SSD, CoolerMaster HAF XM Case.

https://www.3dmark.com/3dm/37004594?

Daughter 2 Rig: ASUS B350-PRIME ATX, Ryzen 7 1700, Sapphire Nitro+ R9 Fury Tri-X, 16gb (2x8) 3200mhz V-Color Skywalker, ANTEC Earthwatts 750w PSU, MasterLiquid Lite 120 AIO cooler in Push/Pull config as rear exhaust, 250gb Samsung 850 Evo SSD, Patriot Burst 240gb SSD, Cougar MX330-X Case

 


You can buy HBA cards for as little as $10 each ... e.g. see these SATA II (300MB/s) Gigabyte cards: link

Most are 8-port, but there are also 16-port or even 32-port ones ... here's an example with 16 ports and an example with 32 ports

A lot of controller cards support port multipliers, which take one SATA port and turn it into 5 SATA ports ... the 5 devices then "share" the original bandwidth of 300 / 600 MB/s (depending on the SATA version) ... here's a SATA II multiplier as an example, for under $15: link
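As a quick illustration of how that sharing works out per drive (a sketch, assuming all 5 drives behind the multiplier are active at once):

for host_mbps in (300, 600):    # SATA II vs SATA III host port
    print(host_mbps, "MB/s port ->", host_mbps / 5, "MB/s per drive when all 5 are busy")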

 

You can also use PCIe risers and get these cards working in PCIe x1 slots; naturally you'd get less bandwidth, but they'd work. Though you don't really have to ... nowadays you can get a Threadripper board with 7 or 8 slots, and if you want to, you can further split the x16 slots into 2 x8 or 4 x4 slots (the boards support bifurcation).

 

So there are solutions that don't involve installing 10+ controllers, which matters because once you go beyond 7-8 of them you often start running into driver problems.

 

Some modern drives can be held powered down (reusing a pin on the 3.3V part of the SATA power connector, the Power Disable feature), and some SATA controllers support staggered spin-up, so starting up 100 drives is not really an issue.

You'd just need multiple power supplies, or a dedicated power supply for 5V ... as a drive draws around 0.7A on the 5V rail on average, you'd need at least around 70-75A of 5V, while most modern power supplies only deliver about 20A on 5V.
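A quick 5V budget sketch (assuming ~0.7 A at 5 V per drive and ~20 A of 5 V from a typical ATX supply, as above):

import math

DRIVES = 100
DRIVE_5V_A = 0.7       # assumed average 5 V draw per drive
PSU_5V_A = 20          # assumed 5 V capacity of a typical modern ATX supply

needed_a = DRIVES * DRIVE_5V_A                  # 70 A of 5 V in total
psus_needed = math.ceil(needed_a / PSU_5V_A)    # 4 ATX supplies just for the 5 V rail

print(needed_a, "A on 5 V ->", psus_needed, "ATX PSUs, or one dedicated 5 V supply")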


Just now, Tristerin said:

The two above me know 100x more than I do, and the first thing I thought was... there are only so many PCIe lanes available at a time!

It's not actually more lanes than a normal board; the special thing about this board is that just about every lane the platform has available is split out into an individual x1 slot.

 

The thing here is that this many drives pose complex issues, beyond just the headache of powering them.

 

Also, with this many high-capacity drives, the cost of a proper server to shove them into is relatively small in comparison...


1 minute ago, manikyath said:

It's not actually more lanes than a normal board; the special thing about this board is that just about every lane the platform has available is split out into an individual x1 slot.

 

The thing here is that this many drives pose complex issues, beyond just the headache of powering them.

 

Also, with this many high-capacity drives, the cost of a proper server to shove them into is relatively small in comparison...

That's what I meant - the CPU and the chipset only have so many lanes; what he was describing ran out of PCIe lanes very quickly lol (his Amazon link is for a breakout board for even more SATA ports)

 

I use similar boards in my mining farm :)



Technically you can do this with any board that supports bifurcation, using breakout cards.

 

I'd just get an enterprise disk shelf at that point: more backplane capacity, more redundancy. After a certain point you need to plan for larger failure domains instead of something like 'lol, 100-drive RAID6'.

PC : 3600 · Crosshair VI WiFi · 2x16GB RGB 3200 · 1080Ti SC2 · 1TB WD SN750 · EVGA 1600G2 · Define C 

