About the 900p/905p:
You could either get the model that comes with the M.2 cable, or use the standard U.2 cable with something like this:
https://www.amazon.com/U-2-M-2-Adapter-Interface-Drive/dp/B073WGN61Y/ref=sr_1_4?keywords=m.2+u.2+adapter&qid=1563804098&s=gateway&sr=8-4
You're assuming too much... even in RAID 0, it wouldn't be that simple to hit 8 GB/s with 40 drives.
And even if you got enough spinning drives to get close to 8 GB/s, that would be the peak for pure sequential reads or pure sequential writes only.
Any kind of mixed workload on those spinning drives would be terrible for your planned usage as swap.
Paging space is usually allocated in 4 KiB pages, so swap traffic is mostly small random I/O, which is exactly what spinning drives are worst at.
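A quick back-of-envelope sketch (with assumed per-drive numbers: ~200 MB/s sequential and ~150 random IOPS for a typical 7200 RPM drive) shows why the same 40-drive array that looks like 8 GB/s on paper collapses under 4 KiB random swap traffic:

```python
# Sequential vs. 4 KiB random throughput for a 40-drive spinning array.
# Per-drive figures are rough assumptions, not measured values.
DRIVES = 40
SEQ_MB_PER_DRIVE = 200        # ~200 MB/s sequential (assumed)
RANDOM_IOPS_PER_DRIVE = 150   # ~150 random IOPS (assumed)
PAGE_BYTES = 4096             # 4 KiB page

seq_total_mb = DRIVES * SEQ_MB_PER_DRIVE
rand_total_mb = DRIVES * RANDOM_IOPS_PER_DRIVE * PAGE_BYTES / 1e6

print(f"Peak sequential: {seq_total_mb / 1000:.1f} GB/s")  # ~8.0 GB/s
print(f"4 KiB random:    {rand_total_mb:.1f} MB/s")        # ~24.6 MB/s
```

Roughly 8 GB/s sequential versus ~25 MB/s of 4 KiB random I/O, a gap of over 300x.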
--
What would you be using?
--
Your best bet would be getting a handful of the M.2 adapter cards that hold 4 drives each:
https://www.asus.com/us/Motherboard-Accessories/HYPER-M-2-X16-CARD/
To use that many of them, you'd be best off with an X399 motherboard to get as many PCIe lanes as possible.
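Here's a rough sketch of the lane budget (assuming ~60 usable CPU lanes on an X399/Threadripper platform and that each card sits in a x16 slot bifurcated to x4/x4/x4/x4; check the board manual for the actual slot layout):

```python
# Rough PCIe lane budget for stacking Hyper M.2 x16 cards on X399.
# Lane counts are assumptions for illustration only.
USABLE_LANES = 60     # ~60 usable CPU lanes on Threadripper (assumed)
LANES_PER_CARD = 16   # each card wants a full bifurcated x16 slot
DRIVES_PER_CARD = 4

cards = USABLE_LANES // LANES_PER_CARD
print(f"{cards} cards -> {cards * DRIVES_PER_CARD} NVMe drives, "
      f"{cards * LANES_PER_CARD}/{USABLE_LANES} lanes used")
# 3 cards -> 12 NVMe drives, 48/60 lanes used
```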
The Intel 660p 2TB drives are regularly on sale (at least in the US).
If you want something to use as swap space, then you want high I/O, i.e. IOPS, not just sequential throughput.
8 GB/s sequential isn't easy with spinning rust.
Remember Linus's petabyte project? Over 100 hard drives, and sequential read/write was only about 4 GB/s.
You'll want SSDs, but it won't be cheap.
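To put rough numbers on "won't be cheap", here's a hedged estimate of how many drives 8 GB/s sequential takes for different media (per-drive figures are assumptions; the HDD number is backed out of the ~4 GB/s across ~100 drives from the petabyte project, i.e. ~40 MB/s effective per drive):

```python
# Drive counts needed to hit 8 GB/s sequential, by media type.
# Per-drive throughput figures are loose assumptions for illustration.
TARGET_MBPS = 8000  # 8 GB/s

per_drive_mbps = {
    "HDD (effective, in a large array)": 40,
    "SATA SSD (~550 MB/s)": 550,
    "NVMe SSD (~1800 MB/s)": 1800,
}

for media, mbps in per_drive_mbps.items():
    drives = -(-TARGET_MBPS // mbps)  # ceiling division
    print(f"{media}: ~{drives} drives")
# HDD: ~200 drives, SATA SSD: ~15 drives, NVMe SSD: ~5 drives
```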
I just hope that Linus plans it well and has Wendell there (since he'll be in CA soon for LTX) to do some performance tweaks. That way nobody will default to blaming Linus for doing it wrong.
@LinusTech
First: LMAO
Second: the point of this build is IOPS, not peak throughput.
Depending on how they configure the vdevs, it could easily affect where the peak performance lands (IOPS vs. throughput), but either way, 29 SATA SSDs could easily saturate 64 Gbps.
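As a rule of thumb, a raidz vdev delivers roughly the random IOPS of a single drive, so the vdev layout drives the IOPS side; the raw bandwidth side is easy to sanity-check (assuming ~550 MB/s per SATA SSD, the usual SATA III ceiling):

```python
# Can 29 SATA SSDs saturate a 64 Gbps link? (bandwidth only)
# Assumes ~550 MB/s sequential per SSD; mixed workloads will land lower.
SSDS = 29
MB_PER_SSD = 550   # MB/s, assumed
LINK_GBPS = 64     # gigabits per second

aggregate_gbps = SSDS * MB_PER_SSD * 8 / 1000  # MB/s -> Gbps
print(f"Aggregate: ~{aggregate_gbps:.0f} Gbps vs a {LINK_GBPS} Gbps link")
# Aggregate: ~128 Gbps vs a 64 Gbps link -> roughly 2x headroom
```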
The Jellyfish server they're competing against uses one 10GbE connection per editor.
I think the max number of editors they're shooting for on a non-NVMe server is 4, so figure roughly 4 × 10 Gbps = 40 Gbps of client traffic.