
Server Form Factors

I just need a little clarification here to make sure I'm understanding this. 

When talking about rack mount servers, I see all the sizes from 1U to 40U and higher. It's my understanding that this form factor refers to the thickness or height of the server, but no mention is made of racks that are wider than others. Does this mean all racks have a standardized width but the servers have different heights, similar to GPUs that take up varying numbers of slots but keep the same width (looking at the I/O panel/bracket) regardless of one slot or two?

Also, I've heard of blade servers coming onto the scene, and from what I'm understanding, they don't slide into a rack but rather into an enclosure. But I'm unclear on why this would be better than rackmount, and whether the enclosure then slides into the rack or sits separately.

Also, as a side note, what is the best way to go about finding cheap used racks? Should I just look at eBay/Craigslist, or is there a better way?

Insanity is not the absence of sanity, but the willingness to ignore it for a purpose. Chaos is the result of this choice. I relish in both.


From my experience, server racks are generally a standard width.

 

The U is how many "units" in the rack it takes up, i.e. the height of the unit.

 

You can, however, get different depths of rack, so only shorter servers will fit in some of them. Most rack mount kits that servers come with will let you adjust the length of the mount so it fits the depth of your specific rack.

 

As for blade servers, those are slightly different. Usually you will rack mount a blade chassis, and that gives you a central management interface for all the blade servers you slot into it. This also brings some additional benefits: blade servers can save physical space, they increase your ability to scale up ("slot in a new blade server", etc.), they reduce complexity from a networking perspective, and so on. There are lots of additional benefits to blade-style infrastructure.

 

It all depends on your use case and requirements.


2 hours ago, factorialandha said:

From my experience, server racks are generally a standard width.

The U is how many "units" in the rack it takes up, i.e. the height of the unit.

You can, however, get different depths of rack, so only shorter servers will fit in some of them. Most rack mount kits that servers come with will let you adjust the length of the mount so it fits the depth of your specific rack.

OK, that's what I was thinking, though I didn't even think about depths. I assume you're referring to how far the server slides in, right?

Quote

As for blade servers, those are slightly different. Usually you will rack mount a blade chassis, and that gives you a central management interface for all the blade servers you slot into it. This also brings some additional benefits: blade servers can save physical space, they increase your ability to scale up ("slot in a new blade server", etc.), they reduce complexity from a networking perspective, and so on. There are lots of additional benefits to blade-style infrastructure.

OK, so I need some clarification then. If you take a server and slide it into a chassis that is in a rack, I don't understand how the blade is saving space or reducing complexity. I can see the central interface being useful, but I don't see how the other benefits are provided.

Insanity is not the absence of sanity, but the willingness to ignore it for a purpose. Chaos is the result of this choice. I relish in both.


36 minutes ago, Jtalk4456 said:

 

OK, so I need some clarification then. If you take a server and slide it into a chassis that is in a rack, I don't understand how the blade is saving space or reducing complexity. I can see the central interface being useful, but I don't see how the other benefits are provided.

Usually blade chassis include some shared components, e.g. power supplies and a network switch stack. With that in mind, the per-server hardware can be much smaller, as long as the cooling, storage (in some cases), and networking sit in the primary chassis itself.

 

So you can have a 2U chassis that houses 4 blades, for example,

 

whereas with individual servers you would need at least 4U for 4 servers.

 

Depending on the hardware you go for, you can generally save space.
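To put that space saving into rough numbers, here's a back-of-the-envelope sketch in Python. The 2U-chassis / 4-blades ratio is just the illustrative example above, not any specific product's spec:

# Rough rack-space comparison: standalone 1U servers vs. blades in a shared
# chassis. The 2U chassis / 4 blades ratio is the illustrative example above,
# not a real product's figures.
import math

def units_standalone(num_servers: int, u_per_server: int = 1) -> int:
    """Rack units needed when every server is its own rackmount box."""
    return num_servers * u_per_server

def units_bladed(num_servers: int, chassis_u: int = 2, blades_per_chassis: int = 4) -> int:
    """Rack units needed when the servers are blades in shared chassis."""
    chassis_needed = math.ceil(num_servers / blades_per_chassis)
    return chassis_needed * chassis_u

for n in (4, 8, 16):
    print(f"{n} servers: {units_standalone(n)}U standalone vs {units_bladed(n)}U bladed")
# 4 servers: 4U standalone vs 2U bladed
# 8 servers: 8U standalone vs 4U bladed
# 16 servers: 16U standalone vs 8U bladed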

 

For example, a blade chassis will generally have integrated cooling and power supplies, plus pass-through networking or full network switch stack functionality, and it can more often than not be configured with shared storage for the blade servers themselves.

Going off items like that, it generally reduces complexity and downtime; for example, if you have broken hardware you can pretty much just hot-swap the blade node, depending on your configuration and use. You should google the pros and cons and see. They have their uses, but it's all based on the use case.

 

Integrated storage can also make it good for standalone deployments where you only have an uplink (think datacentre).

 

Most datacentre set-ups only REALLY require an uplink to the internet. If you plan on using a virtualized firewall (within the blade config), you can get away without any separate network switches or networking devices at all: just plug the uplink into the blade chassis, configure it on that side, and you have saved yourself at least 1U. Couple that with 2 or more servers per U and you are saving money by not having to rent additional rack space. Not to mention it's a one-stop shop of an interface to configure and manage the whole infrastructure.

 

Again, it all depends on your specific use case.

 


5 minutes ago, factorialandha said:

Usually blade chassis include some shared components, e.g. power supplies and a network switch stack. With that in mind, the per-server hardware can be much smaller, as long as the cooling, storage (in some cases), and networking sit in the primary chassis itself.

 

So you can have a 2U chassis that houses 4 blades, for example,

 

whereas with individual servers you would need at least 4U for 4 servers.

 

Depending on the hardware you go for, you can generally save space.

 

For example, a blade chassis will generally have integrated cooling and power supplies, plus pass-through networking or full network switch stack functionality, and it can more often than not be configured with shared storage for the blade servers themselves.

Going off items like that, it generally reduces complexity and downtime; for example, if you have broken hardware you can pretty much just hot-swap the blade node, depending on your configuration and use. You should google the pros and cons and see. They have their uses, but it's all based on the use case.

 

Integrated storage can also make it good for standalone deployments where you only have an uplink (think datacentre).

 

Most datacentre set-ups only REALLY require an uplink to the internet. If you plan on using a virtualized firewall (within the blade config), you can get away without any separate network switches or networking devices at all: just plug the uplink into the blade chassis, configure it on that side, and you have saved yourself at least 1U. Couple that with 2 or more servers per U and you are saving money by not having to rent additional rack space. Not to mention it's a one-stop shop of an interface to configure and manage the whole infrastructure.

 

Again, it all depends on your specific use case.

 

Very interesting, thanks for the clarification! To ask one more question, though: it sounds like the main physical difference is that the hardware in a blade is smaller to allow for more integration, which of course costs more money for smaller parts. At the same time, though, could you not take smaller parts and fit several servers into 1U while keeping the rackmount form factor for simplicity, making rackmount servers far more space efficient so that the only difference would be the central console and hot swapping? Or, even with blades being of a smaller dimension, could we not just make a new rack width, slide them in as rackmounts, and have the top U spot be a central console? Am I overthinking this?

Insanity is not the absence of sanity, but the willingness to ignore it for a purpose. Chaos is the result of this choice. I relish in both.


24 minutes ago, Jtalk4456 said:

Very interesting, thanks for the clarification! To ask one more question, though: it sounds like the main physical difference is that the hardware in a blade is smaller to allow for more integration, which of course costs more money for smaller parts. At the same time, though, could you not take smaller parts and fit several servers into 1U while keeping the rackmount form factor for simplicity, making rackmount servers far more space efficient so that the only difference would be the central console and hot swapping? Or, even with blades being of a smaller dimension, could we not just make a new rack width, slide them in as rackmounts, and have the top U spot be a central console? Am I overthinking this?

In theory, all of your options are valid.

 

A friend of mine fit 2 sets of hardware into a 1U rack chassis and it worked fine.

 

As for the other option, yes, that is possible, but you have to remember that blade servers are functionally far different from standalone physical machines: they have no power supply or cooling of their own, as that's all handled by the chassis itself. So bringing them OUT of the chassis would mean additional cabling and possibly additional space, depending on what's "shared" and what's not.

 

If you just wanted to save space and were going to build your own servers, you could quite happily fit 2 small form factor machines into a 1U chassis.

 

As for blade infrastructure, that's much better kept enclosed, as it simplifies implementation and keeps complexity down. Adding a "management unit" above and then smaller individual server chassis underneath, each holding 2 blade servers, increases complexity, replaces the shared backplane with cabling, and is much less useful when it comes to, let's say, getting smart hands at a data center (quite often not very smart) to "replace" a node.

 

Blade configurations are definitely better for space-conscious locations and remote locations; they are also much better for certain workloads as well, I'm sure.


52 minutes ago, factorialandha said:

In theory, all of your options are valid.

 

A friend of mine fit 2 sets of hardware into a 1U rack chassis and it worked fine.

 

As for the other option, yes, that is possible, but you have to remember that blade servers are functionally far different from standalone physical machines: they have no power supply or cooling of their own, as that's all handled by the chassis itself. So bringing them OUT of the chassis would mean additional cabling and possibly additional space, depending on what's "shared" and what's not.

 

If you just wanted to save space and were going to build your own servers, you could quite happily fit 2 small form factor machines into a 1U chassis.

 

As for blade infrastructure, that's much better kept enclosed, as it simplifies implementation and keeps complexity down. Adding a "management unit" above and then smaller individual server chassis underneath, each holding 2 blade servers, increases complexity, replaces the shared backplane with cabling, and is much less useful when it comes to, let's say, getting smart hands at a data center (quite often not very smart) to "replace" a node.

 

Blade configurations are definitely better for space-conscious locations and remote locations; they are also much better for certain workloads as well, I'm sure.

Fair enough, I see how that could benefit certain workloads. But how much more expensive are we talking for blades vs. rackmount?

Insanity is not the absence of sanity, but the willingness to ignore it for a purpose. Chaos is the result of this choice. I relish in both.


10 hours ago, Jtalk4456 said:

I just need a little clarification here to make sure I'm understanding this. 

When talking about rack mount servers, I see all the sizes from 1U to 40U and higher. It's my understanding that this form factor refers to the thickness or height of the server, but no mention is made of racks that are wider than others. Does this mean all racks have a standardized width but the servers have different heights, similar to GPUs that take up varying numbers of slots but keep the same width (looking at the I/O panel/bracket) regardless of one slot or two?

Also, I've heard of blade servers coming onto the scene, and from what I'm understanding, they don't slide into a rack but rather into an enclosure. But I'm unclear on why this would be better than rackmount, and whether the enclosure then slides into the rack or sits separately.

Also, as a side note, what is the best way to go about finding cheap used racks? Should I just look at eBay/Craigslist, or is there a better way?

1U = 1.75" (in height)

 

A standard server rack is 19" wide. Basically, any rackmount server measured in U's will be 19" wide and will fit in a standard rack.
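To put numbers on the "U" measurement, here's a quick sketch (assuming the standard 1U = 1.75 in, which is 44.45 mm, and a typical 42U full-height rack):

# Rack unit (U) to physical height. 1U = 1.75 in = 44.45 mm.
U_INCHES = 1.75
U_MM = 44.45

def height_in(units: int) -> float:
    """Vertical mounting space in inches for a given number of rack units."""
    return units * U_INCHES

# A typical full-height rack offers 42U of mounting space:
print(height_in(42))        # 73.5 inches (~1.87 m) of usable height
print(42 * U_MM)            # 1866.9 mm

# How many 2U servers fit in 42U (ignoring switches, PDUs, cable management)?
print(42 // 2)              # 21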

 

There are also half-depth racks - these are usually only used for networking equipment like switches.


5 hours ago, Jtalk4456 said:

Fair enough, I see how that could benefit certain workloads. But how much more expensive are we talking for blades vs. rackmount?

Well, racks can be cheap depending on how big you want to go. The equipment to go inside is another thing.

 

A blade enclosure will generally go inside a rack, although there are standalone blade systems that are their own rack, or that come in a tower form factor.

 

Prices can vary, and blades can work out cheaper depending on your use cases, but a rack with individual servers is most likely to be cheaper for the majority of workloads, I'd say.


6 hours ago, Jtalk4456 said:

Fair enough, I see how that could benefit certain workloads. But how much more expensive are we talking for blades vs. rackmount?

Blade servers still mount in a rack. The only function of a blade server is to pack as much compute power into as small a package as possible. The blade system "shell" is the thing that contains the power supplies, backplane, and cooling units. Modules that attach on the rear are for other I/O, like networking devices or other controllers. Each blade is its own server with its own CPU, RAM, HBA, fibre channel, network controller, and any other I/O the manufacturer puts in. The blades attach into the front of the enclosure.

 

To give an example of how a blade system packs more servers into a smaller space, the older HP C7000 you may find on eBay takes up 10U, but you can pack 16 blades in there. That is essentially a saving of 6U per blade system, and it means that in a full-height 42U rack you can pack 64 servers!
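A quick sanity check on that density, using the figures above (10U enclosure, 16 blades, 42U rack):

# Density check using the figures quoted above: a 10U enclosure holding
# 16 blades, placed in a full-height 42U rack, vs. one server per 1U.
RACK_U = 42
CHASSIS_U = 10
BLADES_PER_CHASSIS = 16

chassis_per_rack = RACK_U // CHASSIS_U                 # 4 enclosures (40U used)
print(chassis_per_rack * BLADES_PER_CHASSIS)           # 64 blade servers

print(RACK_U)                                          # at most 42 servers as 1U boxes

# Per enclosure: 16 blades as 1U boxes would need 16U; the enclosure uses 10U.
print(BLADES_PER_CHASSIS - CHASSIS_U)                  # 6U saved per enclosure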

 

There are downsides to a blade system. For one, what you might give up is space for integrated server hard drives, as many blades only have room for maybe a single 2.5" drive. Another issue might be the power draw: a lot of the blade enclosures have 4 or more power supplies, and as far as I know you MUST supply power to at least two of them... and they may require 20A (with 115V house voltage). It may not be a problem when they're idle, but when they're all under load you don't want to pop a breaker!
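For a rough feel of what that means on an ordinary circuit, here's a small sketch (assuming a 20 A breaker at 115 V and the usual 80% continuous-load derating; the 3500 W enclosure draw is purely an illustrative number, not a spec-sheet value):

# Rough power-budget sketch for a loaded blade enclosure on household circuits.
# Assumptions, not spec-sheet values: 115 V, 20 A breaker, 80% continuous derate.
import math

VOLTS = 115
BREAKER_AMPS = 20
CONTINUOUS_DERATE = 0.8

circuit_watts = VOLTS * BREAKER_AMPS                 # 2300 W absolute ceiling
usable_watts = circuit_watts * CONTINUOUS_DERATE     # 1840 W sustained

# Purely illustrative full-load draw for a populated enclosure:
enclosure_watts = 3500
print(math.ceil(enclosure_watts / usable_watts))     # 2 circuits needed, which
                                                     # fits with feeding at least
                                                     # two PSUs from separate feeds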

There's no place like ~


This is a "U" - it is the vertical measurement. All cabinets are a standard width and most are a standard depth too, unless you have something like a small network wall cabinet.

[Attached photo: IMG_20180522_112248.jpg]

Don't forget to @me / quote me for a reply =]

 

 


Thanks for all the help guys!

Insanity is not the absence of sanity, but the willingness to ignore it for a purpose. Chaos is the result of this choice. I relish in both.


I'm a heavy user of HP blades (C7000) and they definitely have their use cases, but when they came out 10 years ago, in a pre-virtualisation world, they were definitely far more relevant. They were invented at a time when VMs were rare and it was common to have one server per service, so blades offered a very flexible platform for management. Nowadays the only reason we keep using them is that we already own the kit and it makes more sense to stick with the platform, but if we were buying all new from scratch then we'd be looking at either standard boxes like the DL360/380 or switching to hyperconverged, which is the new cool kid on the block. That is very similar to how blades work in concept: you take a single box, pack it full of compute and storage modules, then scale out with more boxes and more nodes.

 

The biggest gotcha with racks I've had is the depth. You always seem to end up in a situation where your rack is either not deep enough or too deep to fit the rails of some obscure bit of kit you end up buying, and it can be a royal PITA dealing with that. The best thing I ever got on some new racks is rails that can be adjusted in situ, even with equipment loaded. It needs some big burly lads to move the buggers, but it's saved me huge outages!
