HPE DL380p Gen8 Build

Today I'm embarking on a build with the first of what will hopefully be many servers I get to play with: the HPE DL380p Gen8, an early-2010s system built around the LGA2011-0 server platform and the C602 chipset.

 

P1000694.JPG

 

Taking a quick look around the system, starting at the front and going left to right we have:

  • VGA
  • a slim optical drive
  • Eight 2.5" drive bays. Unfortunately this server did not come with any usable sleds.
  • LED indicator panel for debugging
  • USB
  • System information tab
  • Power button
  • USB
  • Front UID which pairs with the rear UID

 

P1000695.thumb.JPG.b6dd23529ee877d4764db6169cebcba5.JPG

 

Right behind that we have a wall of six hot-swappable fans.

 

P1000696.thumb.JPG.0a50c6dfa4adccf01037a5a54299aec2.JPG

 

These things are no slouches: each fan can draw up to 39.6W, for a total of 237.6 watts of just fans.
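A quick back-of-the-envelope check on that figure (keeping in mind 39.6W is each fan's rated maximum, not its typical draw):

```python
# Worst-case power budget for the fan wall.
# 39.6 W is each fan's rated maximum, not its typical draw.
fans = 6
max_watts_per_fan = 39.6
total_max = fans * max_watts_per_fan
print(f"Worst-case fan power: {total_max:.1f} W")  # 237.6 W
```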

 

P1000697.thumb.JPG.4f35b21d5d98ca68fb8105431e15ff3e.JPG

 

Unreal. Moving farther back gives us a view of the CPU/RAM arrangement.

 

P1000698.thumb.JPG.140878f3f5a1476acda0fc191c225b62.JPG

 

Two sockets, 12 DIMM slots per socket, 3 DIMMs per channel. The plastic guard acts as an airflow baffle, directing the air more precisely to where it needs to go.

 

In the back we have two three-slot PCIe risers, one per socket.

 

P1000699.thumb.JPG.1d51f41a171ee83f2a404805caf80fde.JPG

 

We'll take a closer look at the slotted cards on the right in a little while.

 

In the very rear we can see all of our I/O

 

P1000700.thumb.JPG.7ab1480d04d5f066b68c569e3da22097.JPG

 

  • A small allen key for system servicing
  • Quad Gigabit NIC
  • Serial port
  • Our iLO management port. This will become very important later.
  • VGA
  • Quad USB
  • Rear UID which pairs with the front UID
  • Two redundant 750W PSUs. I'll take a closer look at those in a bit.

Right now only the CPUs have arrived, so I could start by installing those, but since we're still waiting on the RAM I'd rather do the two at once. The retention mechanisms that hold everything together are a marvel in themselves, so I can't wait to dig into this further. :old-grin:


The memory has arrived.

 

P1000701.thumb.JPG.0efac6d47940670a2ca25c3f8a746daa.JPG

 

8x 8GB (64GB total) 2Rx4 Micron ECC RDIMMs at 1600MHz.

The CPUs of choice: Intel Xeon E5-2690 (v1).

 

I really like HP's design for CPU and memory installation. First, remove the plastic air baffle by squeezing the blue spring-loaded clips:

 

P1000702.thumb.JPG.ae9380cb09e7583a7e9d245906b7f32e.JPG

 

Exposing the CPU socket is as easy as lifting the blue handle retaining the CPU heatsink. The heatsink itself uses a ZIF design, so it just lifts off of its alignment pegs.

 

P1000703.thumb.JPG.4bfbc42e9ef7a6d02896bc1985493e81.JPG

 

Then, like on any other computer that uses an LGA socket, install the CPU:

 

P1000704.thumb.JPG.d936f73545a27ccffd692e2f308d6c9b.JPG

 

Apply your TIM:

 

P1000705.thumb.JPG.c76e23d29154d562e8c6afd6b26c5819.JPG

 

Rest the heatsink on top, lining it up with the alignment pins and making sure it's in the right orientation:

 

P1000706.thumb.JPG.727fb0f82df484575457fb873f0e4c4c.JPG

 

Lower the retention arm and push down on the blue handle:

 

P1000707.thumb.JPG.756e522446cbb90a6b6bcad481381b46.JPG

 

Congratulations, you just installed a CPU in an HPE DL380p Gen8 🥳

The second socket is the exact same process, so I'm not going to go over it.

 

As far as memory placement goes, they make it pretty hard to screw up. To make the memory a little easier to access you can remove the fan wall as one assembly using its blue handle.

 

P1000711.thumb.JPG.08319c96305b93913d44b7005c3ae8f3.JPG

 

With no memory installed you can see that the airflow baffle is actually marked with which slots to populate first. It tells you what the channels are and everything.

 

P1000708.thumb.JPG.0e90a301c88bf7ad094bcd51db1518e7.JPG

 

So if we rest it on top we can see exactly where each module should go.

 

P1000710.thumb.JPG.5fd56c863e9aca020f1bcfc14d34bbf6.JPG

 

As it just so happens, HPE idiot-proofed the process and color-coded the first slot of each channel, so it's a no-brainer. With eight sticks all we have to do is populate every beige slot.

 

P1000712.thumb.JPG.d0758432c4356927f0275e04335ee75f.JPG

 

Next up I want to look over some of the tech this has in the rear before I turn it on. I also need to install something special so we can test boot.


So, moving past the CPU sockets and memory I think I've identified what some of the controllers are on the motherboard.

 

One is just outright labeled though so that one's a no-brainer.

 

P1000699.thumb.JPG.bf2712f4da0b7bd4ee305395b3450876.JPG

 

It looks to me like the C602 chipset isn't in charge of much. There's an unpopulated SFF-8087 connector next to it, and what looks to be a USB controller behind the left riser card for an SD card, but not a whole heck of a lot else.

 

There's also the iLO itself, along with what I believe is a single RAM chip, so probably 256MB or 512MB of RAM allocated to it.

 

Then, unfortunately, it looks like the P420i is integrated into the motherboard itself; I was hoping to see it in a slot or socket. Based on the server's documentation the P420i comes with every model of this server, but with varying amounts of RAM cache, which is the card slotted to the right, so the cache itself is upgradable.

 

P1000713.thumb.JPG.7e0a5810d42c935a63031722eacbf71e.JPG

 

The highest-capacity cache available is 2GB, and that's not what we have here. This is the second best, which is 1GB. The wires run off to what looks to be a pair of capacitors, which act as a battery to preserve the cached data in the event of a power failure. This prevents data corruption on the drive array.

 

Taking a closer look at one of the PSUs.

 

P1000717.thumb.JPG.7ad8ea9aa2d5278e16e0bc026525decb.JPG

 

750W. It appears to claim 94% efficiency, but I wouldn't be surprised if that's only when it's running on 240V at a 50% or higher load.
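To put that efficiency figure in perspective, here's a minimal sketch of wall draw and waste heat at an assumed 50% load of 375 W (the 94% number is taken from the label, not measured):

```python
# Sketch: wall draw and waste heat for a 750 W PSU,
# assuming the label's 94% efficiency holds at 50% load.
def wall_draw(output_w: float, efficiency: float) -> float:
    """Input power drawn from the wall for a given DC output."""
    return output_w / efficiency

load_w = 375.0  # 50% of the 750 W rating (assumed operating point)
eff = 0.94      # best-case label figure
inp = wall_draw(load_w, eff)
print(f"input: {inp:.1f} W, waste heat: {inp - load_w:.1f} W")
```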

 

The quad NIC in the rear turns out to be an x8 expansion card.

 

P1000714.thumb.JPG.00d83356400743a1777869ce6845cb27.JPG

 

Besides the slot, this has "proprietary form factor" written all over it. Based on the documentation you can have either a quad Gigabit or a 10Gig NIC installed here, possibly fiber-optic too.

 

Now, I don't have any sleds for this server and I don't want to test it running over USB. Instead I'm going to boot off the network using iPXE, which is what this NIC will be for.

 

P1000715.thumb.JPG.e81d13974679d237e53d382877fe8651.JPG

 

The Mellanox ConnectX-3 CX311A.

 

Cool to see that the riser just uses two standard x16 slots.

 

P1000718.thumb.JPG.4609adffa60168d5c46d2493e8a4577e.JPG

 

I assume one is bifurcated to x8/x8 to give us three usable slots.

 

Installing the riser is pretty easy starting with twisting the little blue tabs.

 

P1000719.thumb.JPG.9e829d0873034222c8cf52b183300890.JPG

 

Like it says on the riser: push down, twist, fold the tab over, done.

 

From here we're ready to shut the lid, plug everything into the rear, and start testing: see if the server even POSTs, and look into software like the iLO itself, which I may get into later today. 😛


We have POST!

 

image.png.739aafec0f2b65026e78334829444459.png

 

Both CPUs and all RAM detected.

 

image.thumb.png.8dfba5ceed09193187c340b4599cdc4e.png

 

Now, unfortunately, the previous owners of this server did me dirty and didn't reset the iLO. It had a static IP well into the public range, outside the A/B/C/D classes. I have no idea what they were doing, so I had to [F8] into the thing, figure out what is apparently iLO 4, and set up DHCP.

 

image.png.a13e928457da625902ae182741fb59bd.png

 

I also had to reset the user, as the existing user and administrator accounts were messed up. An absolute PITA.
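For anyone hitting the same mess who still has OS-level access, HPE's `hponcfg` utility can script these fixes without rebooting into [F8]. This is only a sketch of the RIBCL XML it accepts; the login and password values are hypothetical placeholders, not anything from my setup:

```xml
<!-- reset-ilo.xml: enable DHCP and reset the Administrator password.
     Apply with: hponcfg -f reset-ilo.xml
     (credential values below are hypothetical placeholders) -->
<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="Administrator" PASSWORD="unused">
    <RIB_INFO MODE="write">
      <MOD_NETWORK_SETTINGS>
        <DHCP_ENABLE VALUE="Yes"/>
      </MOD_NETWORK_SETTINGS>
    </RIB_INFO>
    <USER_INFO MODE="write">
      <MOD_USER USER_LOGIN="Administrator">
        <PASSWORD value="NewStrongPassword"/>
      </MOD_USER>
    </USER_INFO>
  </LOGIN>
</RIBCL>
```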

 

Finally connecting to the iLO. Very fancy login page.

 

449153543_Screenshotfrom2022-06-1213-57-41.thumb.png.23d1c2f7c44088bf89ec5b29c1bb8e2f.png

 

After logging in.

 

1337550270_Screenshotfrom2022-06-1213-59-25.thumb.png.a03f67b1f9d2e0c1d6ab5d13611e4088.png

 

Now, the iLO is far too complex for me to go over in full here, but in the interest of curiosity let's look at our CPUs and memory:

 

1609511200_Screenshotfrom2022-06-1214-00-58.png.d455715a2f444b4dc0ddc5fd35397646.png

 

605571069_Screenshotfrom2022-06-1214-01-19.thumb.png.7bb82a4c6109743c306730ec86c36d99.png

 

Everything is reporting exactly how it should; I'm happy. I might use the iLO to do some temperature testing later, who knows, but the level of system control it gives you is amazing. For example, the Remote Console, which is how I'm getting all of these screenshots.

 

Continuing with testing whether she'll boot, let's get started with FlexBoot.

 

image.png.c56c680e13c953b596b682431c472d45.png

 

The whole process of setting up a server that other servers can boot off of is well outside the scope of this build log; if you want more information, check out the tutorial I wrote on the process here. It is a VERY involved process. Above, iPXE (or FlexBoot) just got an IP from DHCP, discovered the TFTP server, and downloaded an updated version of iPXE which has a custom embedded script.
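For reference, the DHCP/TFTP half of that chainload can be handled by something like dnsmasq. A minimal sketch (the subnet, paths, and filenames here are assumptions, not my actual setup):

```
# dnsmasq.conf sketch: hand out leases, serve TFTP, chainload iPXE.
# Plain PXE ROMs get undionly.kpxe; once iPXE itself is running it
# sends DHCP option 175, so it gets the boot script instead.
dhcp-range=192.168.1.100,192.168.1.200,12h
enable-tftp
tftp-root=/srv/tftp
dhcp-match=set:ipxe,175
dhcp-boot=tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,boot.ipxe
```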

 

From here the updated version of iPXE loads the system.

 

image.png.3a74da1302c473ee7d489498e30f6054.png

 

And runs the custom embedded script:

 

image.png.55aa83cf0d82970d1278eda9854a67ec.png

 

In this process the script:

  1. Queried DHCP
  2. Set the iSCSI initiator IQN
  3. Requested to boot from the iSCSI target
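Those three steps can be sketched as an iPXE embedded script; the address and IQNs below are made-up placeholders, not my actual values:

```
#!ipxe
# Sketch of an embedded script that boots from an iSCSI target.
# The server address and IQNs are hypothetical placeholders.
dhcp                                      # 1. query DHCP
set initiator-iqn iqn.2022-06.lab:dl380p  # 2. set the iSCSI initiator IQN
sanboot iscsi:192.168.1.10::::iqn.2022-06.lab:target0  # 3. boot from the target
```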

From here GRUB takes over, which was also custom-modified to support booting from a SAN device.

GRUB then loads Linux, which was likewise modified to boot from a SAN device.

 

But after it's all said and done. We're in! :old-grin:

 

image.thumb.png.8116e7b28aa0bea9076d6c441ccf843a.png

 

Booting locally is overrated anyways. 😆

 

I feel it's only appropriate to end the build with a traditional neofetch.

 

261436948_Screenshotfrom2022-06-1113-20-10.thumb.png.7d7b9902f8e141baae04039626c7f3de.png

 

Memory looks a little wonky... not sure what's going on there. It should be 65536MiB, but I guess something is reserving part of it. No matter.

 

I wish I could have made this longer, but part of the deal with corporate-level rack-mount servers is that you're not building them from scratch. Hopefully we'll be able to do this again with both Dell's and IBM's close equivalents, the Dell PowerEdge R720 and IBM x3650 M4, as I'd like to play with those as well. I'll look forward to it and hope to see you all there! 😃

 

