Server Pet Peeves

Hi LTT Community!

I work for a company that is interested in trying to innovate in the server space, and we were looking to see what some of the biggest pet peeves with physical server management are and what quality-of-life improvements could be made.

 

For example, would having quieter fans, more power-efficient power supplies, or more efficient cooling technology be helpful? These are just some ideas we had, but we recognize that there are probably people who have more server experience than our imaginations, so let us know what innovations you'd like to see out there!


Include the rails with the server. Why do they not come included? Where else is a rack server going?

 

Trayless HDD/SSD bays

 

Tool-less M.2 slots.

 

High-quality air baffles. I have seen way too many that suck to take in and out.

 

More OCP NIC slots, so I'm not stuck with the NIC I select with the server and can save a PCIe slot.

 

What type of servers are you planning on making? Rack? Tower? Other types?


40 minutes ago, Electronics Wizardy said:

Include the rails with the server. Why do they not come included? Where else is a rack server going?

True. However, *some* cabinets do come with rails.

Don't listen to me, I don't know nearly enough about rails.  Pls listen to leadeater.

 

40 minutes ago, Electronics Wizardy said:

Trayless HDD/SSD bays

Idk, I kinda like the bays. They're helpful and quick to access.

  

40 minutes ago, Electronics Wizardy said:

What type of servers are you planning on making? Rack? Tower? Other types?

 

If I'm making a *server* server, then it's gonna be a rack-mount, even if it stays on the floor and the PC sits on top of it.


Just now, Cela1 said:

True. However, *some* cabinets do come with rails.

Basically every rack-mount server I have used has custom rails that go into holes on the side. Every server is a bit different, so there is no standard. And it's 100 bucks extra.

 

1 minute ago, Cela1 said:

Idk, I kinda like the bays. They're helpful and quick to access.

  

I still want the bays, just no trays. Just slide the drives in. I've seen this before, and it would save a good amount of time putting HDDs in.

 

2 minutes ago, Cela1 said:

If I'm making a *server* server, then it's gonna be a rack-mount, even if it stays on the floor and the PC sits on top of it.

Well, there are also blade servers, and tower servers are a reasonable market too.


  1. Anchor points for cable management.
  2. A universal fan guide system.
    1. A means to force air across PCIe devices when internal cables block the ability to mount a fan.
  3. Drive caddies that support some newer mounting holes (like on the Seagate Exos).
  4. Better support for replacing broken/damaged backplanes (spares).
  5. Sufficient space between the motherboard and backplane for routing cables (a longer chassis, basically).
  6. A means of supporting cables in a rack so hot maintenance doesn't accidentally unplug a cable.
  7. Front USB ports that are decent quality. (Lost a keyboard because of that.)
  8. I'd like to see more servers that are short-depth with top-load HDD/SSD bays.
  9. Rails that actually extend the server fully outside the rack so you can remove the lid.

Crappy rails, and mounting into the rails. HPE has two options, the default meh ones and the better Ball Bearing Rail Kit; always buy the better ones. In fact, don't offer crap ones at all.

 

Cable management arms: they block too much airflow by being too large with too-wide surfaces, so we don't use them, but that means needing to unplug everything to slide the server out of the rack to service it.

 

Tool-less everything. HPE/Dell etc. already do a fairly decent job at this, but there is the odd thing that still has screws that are not also thumbscrews or some kind of clip retention instead.

 

Really bad IPMI implementations with crap remote console software; Supermicro is super guilty of this. HPE/Dell/Lenovo are all excellent.
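
The one saving grace is that any compliant BMC still answers standard IPMI over the network, so you can script around a bad web console. A minimal sketch of a remote health poll via ipmitool (host and credentials are placeholders; it assumes ipmitool is installed and the lanplus interface is enabled on the BMC):

```python
#!/usr/bin/env python3
"""Poll basic BMC health over the network via ipmitool."""
import subprocess

BMC_HOST = "10.0.0.50"  # placeholder BMC address
BMC_USER = "admin"      # placeholder credentials
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the remote BMC and return stdout."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
    print(ipmi("sdr", "type", "Fan"))          # fan sensor readings
    print(ipmi("sel", "elist"))                # system event log entries
```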

 

Include all cables for every possible upgrade option with the server from the start. It's actually very annoying if you, say, want to add a GPU later, maybe one not purchased for this server but reused from another or whatever, but you don't have the GPU power cables and they are specific to that server model or vendor.

 

21 minutes ago, Electronics Wizardy said:

More OCP NIC slots, so I'm not stuck with the NIC I select with the server and can save a PCIe slot.

Yes, agree with this. We always have two NICs for redundancy, so if both could be OCP that would be great.

 

19 minutes ago, Harrison Mayotte said:

more power-efficient power supplies

To be honest, server PSUs are already at the top end of efficiency, but you can also get ones that are not, either as the default you need to upgrade from or as what comes standard. Almost all HPE PSUs are now 94% efficient, with 96% options starting to appear.
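
For scale, a rough sketch of what those two efficiency points are worth at the wall (the 500 W load and the power price are assumed figures, not measurements):

```python
# Rough wall-draw comparison for a server at an assumed 500 W DC load.
load_w = 500.0
for eff in (0.94, 0.96):
    wall_w = load_w / eff      # AC draw at the wall
    waste_w = wall_w - load_w  # dissipated in the PSU as heat
    # ~8766 hours/year at an assumed $0.15/kWh
    cost = wall_w / 1000 * 8766 * 0.15
    print(f"{eff:.0%}: wall {wall_w:.1f} W, waste {waste_w:.1f} W, ~${cost:.0f}/yr")
```

That's only around 11 W per box between the two, which really only matters multiplied across a whole rack.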

 

 


25 minutes ago, Cela1 said:

True. However, *some* cabinets do come with rails.

Cabinet with rails? Rails tend to be vendor-specific, so you can't use Supermicro rails with an HPE, or a Gigabyte, or whatever, unless Supermicro or a similar company is actually the ODM for the downstream vendor.

 

Universal rails aren't actually rails; they are just sleds/trays that devices sit on. Things like UPSes and disk trays use these, and you can use them for servers too, but they aren't "rails".


28 minutes ago, Electronics Wizardy said:

What type of servers are you planning on making? Rack? Tower? Other types?

We're looking into rack-mount servers, and potentially making racks themselves.

27 minutes ago, Cela1 said:

As a server enthusiast, quieter fans would be better. However, most enthusiast fans are second-hand enterprise server fans, which in turn need to be loud to deliver the rated airflow performance.

Would quiet(er) fans that give the same performance be something that would be useful for enthusiast and/or enterprise use cases?
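
(For anyone wondering why that's a hard ask, my rough understanding is the fan affinity laws: airflow scales about linearly with RPM while power and noise climb much faster, so "same CFM, quieter" tends to mean larger or additional fans at lower RPM rather than the same fan made quiet. A sketch of the rule-of-thumb scaling, where the roughly 15 dB-per-doubling noise figure is a common approximation, not a measurement:)

```python
import math

# Fan affinity laws, rule-of-thumb form: flow ~ RPM, power ~ RPM^3,
# sound power roughly +15 dB per doubling of speed (a common approximation).
def scaled(rpm_ratio: float) -> tuple[float, float, float]:
    flow = rpm_ratio                       # airflow multiplier
    power = rpm_ratio ** 3                 # power draw multiplier
    noise_db = 50 * math.log10(rpm_ratio)  # change in sound power level, dB
    return flow, power, noise_db

# Two fans at half speed deliver the same total airflow,
# with each fan roughly 15 dB quieter than one fan at full speed.
flow, power, db = scaled(0.5)
print(f"per fan at 50% RPM: {flow:.0%} flow, {power:.1%} power, {db:+.1f} dB")
```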

18 minutes ago, Windows7ge said:
  • Anchor points for cable management

Would that be part of a server or a rack itself (or is there potential for both)?

19 minutes ago, Windows7ge said:

I'd like to see more servers that are short-depth with top-load HDD/SSD bays.

That seems like a pretty interesting idea. Would that just be like a traditional server (e.g. HP ProLiant) but top-load?

4 minutes ago, leadeater said:

Cable management arms: they block too much airflow by being too large with too-wide surfaces, so we don't use them, but that means needing to unplug everything to slide the server out of the rack to service it.

What would that look like? Is there anything out there (that you're aware of) that is similar?

 

Thanks for the feedback everyone!


1 minute ago, Harrison Mayotte said:

What would that look like? Is there anything out there (that you're aware of) that is similar?

I can really only show you what not to do and that is this.

 

[image: HP 2U cable management arm for ProLiant]

 

How does the air get through that?!?!


1 minute ago, leadeater said:

I can really only show you what not to do and that is this.

 

[image: HP 2U cable management arm for ProLiant]

 

How does the air get through that?!?!

How would that attach to a server? Does it attach to something on the back of it or does it attach to the rack?


5 minutes ago, Harrison Mayotte said:

How would that attach to a server? Does it attach to something on the back of it or does it attach to the rack?

It attaches to the server on one side, well, actually to the inner rail that moves with the server or is attached to the server, depending on rail design. On HPE 2U servers you don't attach a rail slide to the server; they have anchor points that drop into slots in the rails.

 

[image: cma_002.png]

 

[image: 137973.png]

 


1 minute ago, leadeater said:

It attaches to the server on one side, well, actually to the inner rail that moves with the server or is attached to the server, depending on rail design. On HPE 2U servers you don't attach a rail slide to the server; they have anchor points that drop into slots in the rails.

Oh dear god that looks horrific.


1 minute ago, Cela1 said:

Oh dear god that looks horrific.

They would be really nice if they didn't block almost all the airflow; not having to unplug all the cables just to slide out the server is much appreciated.


2 minutes ago, leadeater said:

It attaches to the server on one side, well, actually to the inner rail that moves with the server or is attached to the server, depending on rail design. On HPE 2U servers you don't attach a rail slide to the server; they have anchor points that drop into slots in the rails.

 

[image: cma_002.png]

 

[image: 137973.png]

I get what you mean now, and I can clearly see how that can obstruct airflow. 

 

16 minutes ago, leadeater said:

Cable management arms: they block too much airflow by being too large with too-wide surfaces, so we don't use them, but that means needing to unplug everything to slide the server out of the rack to service it.

Regarding it being too wide/blocking too much, what would be a good approach to making it obstruct less airflow? Assuming the rack is full of servers, what space is there to work with?


5 minutes ago, Harrison Mayotte said:

Would that be part of a server or a rack itself (or is there potential for both)?

My intention was inside the chassis, but since you brought it up: my own server rack has plastic anchors you screw to the side for cable management, but there aren't enough anchor points. Cables end up too short to reach the next anchor up but too long to sit on the next anchor down. Just having the holes there so I can choose where to put them would be nice. I'd say have an anchor hole every 4U or so.

 

10 minutes ago, Harrison Mayotte said:

That seems like a pretty interesting idea. Would that just be like a traditional server (e.g. HP ProLiant) but top-load?

More of an open-standard form factor. 4U; let it hold up to an ATX or E-ATX motherboard, but then line the front with 14 or 15 top-load bays. Cables could be SFF-8087 or SFF-8643 breakouts to SAS/SATA ports that'd sit in the bottom, then power could be routed similarly over to the PSU. I'd use Molex here personally; it has more meat to carry more current as opposed to SATA power.
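
To put rough numbers on the "more meat" point (per-pin figures are commonly quoted nominals, assumed here for illustration rather than taken from any specific datasheet):

```python
# Back-of-envelope 12 V ampacity per power connector.
molex_12v_a = 1 * 11.0  # 4-pin Molex: one 12 V pin, ~11 A commonly quoted
sata_12v_a = 3 * 1.5    # SATA power: three 12 V pins at ~1.5 A each

spinup_a = 2.0          # assumed 12 V spin-up draw of one 3.5" HDD

for name, amps in (("Molex", molex_12v_a), ("SATA", sata_12v_a)):
    drives = int(amps // spinup_a)
    print(f"{name}: {amps:.1f} A on 12 V -> ~{drives} drives spinning up at once")
```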


Just now, Harrison Mayotte said:

I get what you mean now, and I can clearly see how that can obstruct airflow. 

 

Regarding it being too wide/blocking too much, what would be a good approach to making it obstruct less airflow? Assuming the rack is full of servers, what space is there to work with?

Just way less surface area; the flat pressed-steel plate with only a few cutouts just isn't good.

 

This, for example, is going to block far less; it's a Dell one.

[image: Dell 1U rack server cable management arm]


4 minutes ago, Windows7ge said:

My intention was inside the chassis, but since you brought it up: my own server rack has plastic anchors you screw to the side for cable management, but there aren't enough anchor points. Cables end up too short to reach the next anchor up but too long to sit on the next anchor down. Just having the holes there so I can choose where to put them would be nice. I'd say have an anchor hole every 4U or so.

 

I gotcha. Any chance you could get some images for how those anchor points work? That seems like something that could be done for sure.

5 minutes ago, Windows7ge said:

More of an open-standard form factor. 4U; let it hold up to an ATX or E-ATX motherboard, but then line the front with 14 or 15 top-load bays. Cables could be SFF-8087 or SFF-8643 breakouts to SAS/SATA ports that'd sit in the bottom, then power could be routed similarly over to the PSU. I'd use Molex here personally; it has more meat to carry more current as opposed to SATA power.

This seems like a good idea and I think there is absolutely room in the market for solutions like this. When I'm imagining this, I'm assuming 14 2.5in SAS drive slots? Does that sound like something that would make sense for this use case or would it make sense for something else to be used there?

5 minutes ago, leadeater said:

This for example is going to block far less, it's a Dell one.

When you say "far less," I'm assuming that you mean there is still room for improvement and less air-blocking?


1 minute ago, Harrison Mayotte said:

When you say "far less," I'm assuming that you mean there is still room for improvement and less air-blocking?

Potentially; there are other designs that have even less material in the airflow path, it just becomes a question of rigidity and strength. Not enough and it'll be too weak and break, but the more open the better. The goal for something like this should be the least amount of obstruction while still being able to secure the cables in a way that lets them extend and retract cleanly in the rack. It could be an entirely different design to these; not sure how, though.

 

Of all the things, cabling is the biggest pain point; even just thinking about where best to place the ports that cables plug into, to ease cable management, would help. It's not great when the minimum required cables end up spread across the entire back of the server, rather than power on one side and network and IPMI/iLO/iDRAC on the other.


6 minutes ago, Harrison Mayotte said:

I gotcha. Any chance you could get some images for how those anchor points work? That seems like something that could be done for sure.

I'm not joking when I say my phone camera broke today, and I'm not exactly a photographer with spare cameras. Tomorrow I can get you something, but picture quality will leave something to be desired.

 

8 minutes ago, Harrison Mayotte said:

This seems like a good idea and I think there is absolutely room in the market for solutions like this. When I'm imagining this, I'm assuming 14 2.5in SAS drive slots? Does that sound like something that would make sense for this use case or would it make sense for something else to be used there?

I forgot you probably have the power to spin custom PCBs. You could eliminate the rat's nest my suggestion would cause by just manufacturing a backplane that fits the drives. The backplane could accept four SFF-8643 or SFF-8087 connectors, and people could route those to their HBA, SAS expander, or RAID controller. An LED indicator array on the front of the chassis that showed the HDD activity for each drive would also be very nice.

 

From here I'd probably just place two PCIe 8-pin connectors on this PCB to power all the drives. +5V could be generated off the incoming +12V rail for the drives and the LED indicator circuitry.

 

With a standard 19"-wide rack it should be possible to fit fifteen 3.5" drives in a single column. A spring-loaded baffle of sorts that would allow the insertion of either a 3.5" or a 2.5" drive would be very nice.
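
As a sanity check on two 8-pin inputs feeding fifteen drives (all figures below are assumed round numbers, not measurements):

```python
# Can two PCIe 8-pin inputs (commonly rated 150 W each) feed 15 drives?
budget_w = 2 * 150.0
drives = 15
spinup_w = 30.0  # assumed per-drive spin-up peak (~2 A @ 12 V plus 5 V load)
idle_w = 8.0     # assumed per-drive steady-state draw

print(f"budget:   {budget_w:.0f} W")
print(f"spin-up:  {drives * spinup_w:.0f} W (all drives at once, over budget)")
print(f"steady:   {drives * idle_w:.0f} W (comfortably under)")

# 450 W > 300 W, so the backplane would want staggered spin-up
# (e.g. PUIS or SAS power controls) to stay inside the connector rating.
```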

 

Three 120mm, or if they'll fit, three 140mm, front intake fan mount points would be perfect.

 

I guess all I'm doing at this point is sharing my dream archival storage server. 😅


5 minutes ago, Windows7ge said:

I guess all I'm doing at this point is sharing my dream archival storage server. 😅

I'm sure you're not the only one with a dream like that.

5 minutes ago, Windows7ge said:

I forgot you probably have the power to spin custom PCBs

Unfortunately we don't at the current time, but I'll check in with my company's board members and investors and see if we have any contacts in the custom-PCB arena who could help us bring this idea to life.

 

We'll probably do a Kickstarter campaign at some point to crowdfund the development of the project (or projects, if we look into multiple of the suggestions here).

 

We could probably go for a modern approach with the server and try to fold multiple innovations into the archival server (e.g. testing out different fans that could be quieter, rails that extend fully outside the rack, drive caddies that support newer mounting holes, front USB ports that aren't rubbish, etc.), then add more as time goes on and tune it to make it better.


7 minutes ago, Harrison Mayotte said:

front USB ports that aren't rubbish

Would any other front ports be good for this as well? Front VGA maybe?


47 minutes ago, leadeater said:

haha yep :old-smile:

What would yours be?

 

43 minutes ago, Harrison Mayotte said:

Would any other front ports be good for this as well? Front VGA maybe?

Front VGA would be nice.

 

A completely different kind of server I would love to see is one where the motherboard's rear I/O faces the front of the rack. The PSU could stick out the back as normal.

 

The reason for this would be custom pfSense or VyOS firewalls/routers. Being able to route the cables for these network appliances entirely from the front of the rack would be wonderful. I've seen a couple of chassis like this, but not many.


28 minutes ago, Windows7ge said:

What would yours be?

Free 🤣

 

Most of what I want already exists; it's just often really expensive, so these things becoming cheaper would be nice. Cheap server chassis do exist, but those are almost always crap, and even what I would call mid-range options like Supermicro chassis aren't all that good either and cost far too much. Not every Supermicro chassis is that "bad"; it's more the ones that take standard motherboards.

 

It would be nice to have low-cost chassis options that aren't made from the cheapest and thinnest metals possible and that are primarily designed for proper server usage, not a DIY gamer PC in a rackmount chassis. Proper hot-swap/simple-swap backplanes with SFF connectors, expander backplane options, PCIe riser options for vertical and horizontal add-in card placement. Dual-PSU and standard single-PSU mounting holes. Included custom cabling for use with a modular PSU when running a single PSU. Airflow baffles/guides.

 

The problem is that the demand for something like this is very low, so the cost is very high. Some really good combo base-configuration options would maybe allow it to be more of a thing, with partner agreements between companies to make it happen: somewhere I can go to buy an actually decent server chassis, pick a PSU option and one of a few motherboard options, and then have that shipped to the customer. The ultimate nicety would be if one of those partner companies was a computer e-waste/disposal company, so you could choose from decommissioned server equipment like used Xeons and server RAM, SAS HDDs, and RAID/HBA cards.

