
From a company called Onda (never heard of it): 32 SATA ports on a single motherboard. Onda, model number B250 D32-D3.

[Photos of the Onda B250 D32-D3 board]

  1. flibberdipper

    Just saw that in r/DataHoarder and I'm curious what the speeds are like when you're slamming them all. If you can full-send each port and still get 3Gbps per port I'd be goddamn impressed.

  2. Tech_Dreamer

    It's priced close to $470 retail AFAIK, so it should come with quality components that can handle that load.

  3. flibberdipper

    @WikiForce No, just combined power and data. Probably meant for some semi-special enclosure where the board functions as the backplane.

  4. flibberdipper

    Yep. If you look at the part of the board where you'd normally have audio junk on most boards, it has a slew of PCIe power connectors.

  5. Windows7ge

    Is that even E-ATX?

     

    My next biggest question is how you're supposed to power the SATA ports with those 6-pins if the rear I/O is going to have them butted up against the back of the chassis. The case will have to be fully non-standard.

  6. Windows7ge

    Wow. They could have just used 6-pins that point up and eliminated the whole manufacturing cost of that window.

     

    Although it looks almost as if the intent is to remove the HDD power from the enclosure entirely. Perhaps a type of proprietary 1U or 2U PSU?

  7. Windows7ge

    I think this would make for a great storage box. The only thing I don't like is that if you wear out one of the connectors, it will be virtually impossible to replace.

  8. Windows7ge

    I mean, if you're willing to ghetto-rig it, PCIe x1 Gen 3.0 apparently has a theoretical max of ~985 MB/s (8 Gbit/s), so you could bond it with the two 1Gig ports to have what would probably be the worst 10Gig experience imaginable. It would, however, be 10Gig.
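    Back-of-the-envelope math for that idea (a sketch using the thread's theoretical numbers; a real NIC plus bonding overhead would land lower):

```python
# Rough "worst 10Gig" arithmetic: a PCIe 3.0 x1 NIC bonded with two 1GigE ports.
# Figures are theoretical maxima from the thread, not measured throughput.
PCIE3_X1_MB_S = 985                   # PCIe 3.0 x1 max, MB/s (~8 Gbit/s)

nic_gbit = PCIE3_X1_MB_S * 8 / 1000   # convert MB/s to Gbit/s (decimal units)
bonded_gbit = nic_gbit + 2 * 1.0      # add the two onboard 1GigE links

print(f"x1 NIC alone: {nic_gbit:.2f} Gbit/s")
print(f"bonded total: {bonded_gbit:.2f} Gbit/s")   # just shy of 10Gig
```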

  9. mariushm

    Bandwidth-wise, it sucks.

    It's using a B250 chipset, which has a maximum of 12 PCIe 3.0 lanes.

    It uses at least 7 four-port Marvell SATA controllers, for a total of 28 ports; the other 4 ports come from the chipset. So if each controller gets one PCIe lane, then 4 drives at a time share ~970 MB/s of bandwidth. Meh.

    Also, only two SO-DIMM slots means 32 GB of memory max, which would suck for caching writes and all that.
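    The lane math above as a quick sketch (assuming the guessed layout of one PCIe 3.0 x1 lane per four-port controller; the actual topology isn't confirmed):

```python
# Per-drive bandwidth if each 4-port Marvell controller sits on one
# PCIe 3.0 x1 lane (guessed layout from the thread, not confirmed).
LANE_MB_S = 970            # usable PCIe 3.0 x1 bandwidth, MB/s
PORTS_PER_CONTROLLER = 4

# Worst case: all four drives on a controller transfer at once.
per_drive = LANE_MB_S / PORTS_PER_CONTROLLER
print(f"{per_drive:.1f} MB/s per drive")   # ~242.5 MB/s, plenty for HDDs
```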

     

    My guess is some company out there wanted to be a Backblaze competitor and placed a big order with Onda for a board like this, and now Onda sells a derivative as a retail version since they already did the research and BOM work and have the parts ordered.

     

  10. Windows7ge

    Quote: "share 970 MB/s of bandwidth."

    That's still just a little under 250 MB/s per drive, more than enough for high-RPM HDDs. I just definitely wouldn't be plugging any SSDs into this thing. With that many Marvell controllers you'd definitely be forced to use software RAID. If each controller is individually programmable, then maybe you could do a combination of the two.

  11. mariushm

    You get the full 560 MB/s per port if the other ports aren't used.

    I don't know... I don't like this system.

    I'd rather have an ITX-style board with the SATA controllers closer together and 7-8 special connectors, plus separate breakout boards each with 4 SATA ports. You could use something like a micro/mini HDMI cable, for example... there are enough wires for 12V power and at least one PCIe lane in such a cable.

    If one port gets damaged, you can just replace the small board... and you also get some flexibility, like mounting the small boards on rubber feet to minimize vibration.

    And 6 PCIe connectors? That's enough for 1000 watts of power... ridiculous.

  12. Windows7ge

    Oh trust me, I'd be fully populating it. I still don't like that you basically can't service the ports if they die, though. And yeah, those PCIe power connectors... unless the design intends for them to be powered externally, it's a terrible design IMO.

     

    Just give me an AIC with a heap of SFF-8087 or SFF-8643 ports and I'll connect all the drives my own way, with breakout cables if I have to. Much more maintainable that way.
