
Holiday Hypervisor Server Upgrade

Over this holiday season my primary hypervisor server is going to be going through some changes that I've been planning for a while. It's going to be an upgrade & downgrade at the same time.

 

P1000257.JPG

 

Details about the CPUs & Coolers


A pair of Intel Xeon Gold 6152's. Socket LGA3647 with a LGA1156 CPU for scale.

 

P1000254.JPG

  • 22 Cores, 44 Threads
  • 3.7GHz boost, 2.1GHz base
  • 30.25MB L3 cache
  • Max RAM Capacity/Speed 768GB/2666MHz
  • Hex Channel memory support.
  • 48 PCI_e lanes.

 

The Noctua NH-D9 DX-3647 4U

 

nh_d9_dx_1_1.jpg

 

  • Built for LGA3647 specifically
  • Dual tower
  • Dual 92mm fans (will be swapping them out for Delta's)
  • 4U compatible

 

Details about the Motherboard


The ASRock Rack EP2C621D12 WS

 

13-140-022-V01.jpg

  • Dual LGA3647
  • 12 DIMM slots, Hex-Channel, supports up to 768GB @ 2666MHz
  • Seven PCI_e 3.0 x16 slots
  • Two M.2 NVMe slots
  • 14 SATA3.0 ports
  • Quad Gigabit LAN + IPMI

 

Details about the RAM


NEMIX 16x16GB DDR4 1Rx4 RDIMM ECC @ 2666MHz

 

P1000247.JPG

 

Details about the NVMe Storage


2x XPG SX6000 Lite

 

P1000205.JPG

 

Not the most appropriate for this system but they're what I have on hand. In the future though I will keep an eye out for upgrade candidates. For now they should get the job done.

 

P1000208.JPG

 

 

More to come later! The most exciting thing being I don't actually know if everything is compatible with one another. 😅 

I guess we'll just have to wait and find out! :old-grin:


7 hours ago, RollinLower said:

16 DIMMs in a 12 DIMM motherboard? 🤔

Yeah, Newegg didn't sell 12 DIMM or 6 DIMM kits of this type so I had to get four extra.

 

It's fine. I have other potential build-log plans I can use them for.

 

7 hours ago, RollinLower said:

excited to see this thing come to life though. Seems like quite a beast!

I'll let myself get excited when I see it POST. If it POSTs. For some reason, the higher up you go (desktop -> enthusiast -> workstation -> server), the worse hardware/firmware compatibility gets.


Even though this isn't the first time I've worked with SSI EEB motherboards I can't get over how gigantic they are:

 

P1000259.JPG

 

Getting started here installing the M.2's was quick and easy enough:

 

P1000260.JPG

 

Now, according to the motherboard manual, one of these slots goes through CPU2 and the other goes through the C621 PCH. The good news is the RAID1 I plan to configure here really won't care what the drives are connected to.

 

Additionally, although I do have future plans to utilize all eight SATA III ports (to the right) with enterprise-grade SSDs, I expect the chipset has no less than a PCI_e 3.0 x8 connection to the CPU, so we should be all set on bandwidth.

 

Got the RAM installed. Something about it though seems...off...

 

P1000273.JPG

 

On both sides of the CPU sockets the RAM all has the same orientation. Even on LGA2011-v3 the RAM always faced the opposite direction once you crossed the socket, but that's not the case here. Weird...but regardless, that's 192GB of DDR4 ECC RDIMM at 2666MHz, the highest speed both the motherboard & CPUs support.

 

Taking the coolers out of their boxes they're beautiful specimens:

 

P1000265.JPG

 

They even come with pre-applied Noctua NT-H1.

 

P1000266.JPG

 

But guess what. We're not using it.

 

download(3).jpeg

 

P1000267.JPG

 

Instead we're giving this stuff a whirl:

 

P1000269.JPG

 

This is a sheet of graphite. It has incredible thermal transfer properties and is effectively infinitely reusable. The downside is it's about as fragile as a single sheet of newspaper. Maybe even wet newspaper. It's also highly electrically conductive, so if it happens to flake and any of it finds its way under the CPU, say goodbye to your hardware. Fantastic otherwise. We'll see how it handles these CPUs.

 

Let's take a look at some of this cooler mounting hardware aaaannnd...

 

P1000270.JPG

 

...this is going to take me a minute.

 

Apparently LGA3647 does everything 100% backwards. As in you don't install the CPU in the socket. You install it...onto the CPU cooler? Wait what!?

 

P1000272.JPG

 

Well, installing those was rather nerve-wracking, but they're in and I think...I did it correctly...

 

P1000276.JPG

 

Words cannot describe how tight of a fit that is. Basically 100% of the hot air from CPU2 is going straight into CPU1. Hopefully we'll have the airflow to compensate for it.

 

I think that's going to be it for tonight. I have to steal a PSU from another station in order to test the system, and that's going to take a while, so it's a tomorrow project. We will test for POST though, run it through its paces, do some thermal testing, and generally check the stability and functionality of the system. See you all then!


that RAM is super wacky. Never seen anything like it, even on LGA3647. We have several Supermicro boards at work that have the normal RAM orientation.

What benefit could this have? or is it just because of topology constraints or something?

 


6 hours ago, RollinLower said:

that RAM is super wacky. Never seen anything like it, even on LGA3647. We have several Supermicro boards at work that have the normal RAM orientation.

What benefit could this have? or is it just because of topology constraints or something?

 

I have no idea. The silk-screen physically showing the RAM channels indicates all six slots go to the socket the CPU is next to, so the board isn't doing some weird criss-cross. I can only imagine what the traces inside the PCB look like just so they could pull this trick off.

 

I'd thank ASRock Rack for the feature if it wasn't already heavily ingrained in my memory to flip the DIMMs as every prior motherboard I've ever worked with requires.


12 hours ago, Windows7ge said:

Words cannot describe how tight of a fit that is. Basically 100% of the hot air from CPU2 is going straight into CPU1. Hopefully we'll have the airflow to compensate for it.

Those fans remind me of human centipede. This is homelab stuff presumably? You would never do that in a production server.

Intel i9-13900K - Gigabyte Aorus Z790 Elite DDR4 - Corsair Vengeance LPX 128GB DDR4 3200 C16 - Gigabyte Aorus Master RTX 4090 24GB - Corsair 4000D Airflow - 2x Samsung 980 Pro 1TB  - Corsair AX1600i 80 PLUS Titanium 1600W - Aorus FI27Q - Noctua NH-D15 running 3 fans (CPU) - 6 x NF-A12x25 (3 intake, 3 exhaust) - Aorus K1 - Aorus M5 - Aorus AMP500 - Aorus H5 - Corsair TC70 - Win 11 Pro


9 minutes ago, A1200 said:

Those fans remind me of human centipede. This is homelab stuff presumably? You would never do that in a production server.

This is a home-lab yes.

 

And of course not. In a production environment you'd usually have a 1U/2U-compliant heatsink/chassis and baffles to better channel the airflow in the direction you want it to go. Unfortunately, even in a production environment, if this motherboard were being used, the two sockets are so in-line that I don't think they'd manage to get any fresh air to CPU1; everything would have to be channeled through CPU2's heatsink. Their only options would be to not use this motherboard and go with a Dell/HPE/IBM-Lenovo server with a proprietary motherboard, or use a standard one with a CPU arrangement like this:

 

EP2C622D16FM-1(L).jpg

 

And, unfortunately, to contradict you a little: although they will use different heatsinks, in production environments blade servers (among other similar form-factor systems) will channel all of the hot air from the first CPU into the second one on the board. They just compensate for it with an over-abundance of high airflow. We want to keep the dB as low as possible here though, so the bigger a heatsink I can reasonably fit in a 4U the better.

 

We will be replacing the fans with some higher-RPM ones though, so I have the option to go over 1500RPM should the occasion ever arise.


9 minutes ago, Windows7ge said:

This is a home-lab yes. In a production environment you'd usually have a 1U/2U-compliant heatsink/chassis and baffles to better channel the airflow in the direction you want it to go.

I would have thought (and I am not familiar with this board or the case they are going in) that you could flip the coolers around the other way, but there isn't the clearance over the RAM, no? I use Noctua NH-D15s in my "normal" machines and the heatsink is high enough to clear standard or Vengeance LPX-type memory. I even wonder if a one-fan config might give you better thermals than directing hot air into the other cooler like that?



13 minutes ago, A1200 said:

I would have thought, and I am not familiar with this board or the case they are going in, that if you flipped the fans the other way, but there isn't the clearance on the RAM no? I use Noctua NH-D15s in my "normal" machines and the heatsink is high enough to clear standard or Vengeance LPX type memory. I even wonder if a 1 fan config may give you better thermals than directing hot air into the other cooler like that?

Not an option, for two reasons:

  1. The CPU cooler contact plate isn't square. It's a rectangle to conform to the shape of the CPU, so I can't rotate it 90°.
  2. Although I wasn't aware of it until now, LGA3647 apparently comes in both Square-ILM and Narrow-ILM mounting variants. This board uses Narrow-ILM, which means the cooler cannot be rotated 90°.

My only choice here would be to buy 120mm AIOs and mount them to the NORCO RPC-4224's fan wall, but one thing at a time. Today let's get it powered on and see if it even POSTs, then worry about temperature testing.


3 minutes ago, Windows7ge said:

My only choice here would be to buy 120mm AIOs and mount them to the NORCO RPC-4224's fan wall, but one thing at a time.

Good luck! Be glad to see the finished build.



Good news! We have POST! :old-grin:

 

P1000277.JPG

 

Monitoring idle power consumption from the BIOS I can see we're pulling about 180W.

 

P1000278.JPG

 

It says 238W because the UPS is being shared by a small server.

 

Both CPUs report in, along with all 192GB of RAM. This is very good to see.

 

Screenshot from 2021-12-22 13-41-30.png

 

I went ahead and installed my OS of choice, PROXMOX, to the pair of SSDs, and everything is reporting as it should.

 

Screenshot from 2021-12-22 13-58-16.png
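Since the plan is for those two NVMe drives to be a RAID1 boot volume, here's a quick way to sanity-check the mirror from the Proxmox shell. This is just a sketch, assuming ZFS RAID1 was picked in the installer (rpool being the installer's default pool name); an LVM/mdadm install would use different commands.

# Confirm both SX6000 Lites are present and ONLINE in the mirror
zpool status rpool

# Show the vdev layout and per-disk capacity/usage
zpool list -v rpool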

 

Idling in the OS CPU temps are looking very good and the fans aren't that loud.

 

Screenshot from 2021-12-22 14-03-27.png

 

Now, keep in mind this is sitting in open air on top of the motherboard box, so temps will go up a margin when it's actually installed in the chassis. Let's see what happens when we crank up the system load. I'll be back with the results later today.


That's awesome! Looking forward to seeing CPU temps with that wind tunnel you've got going on.

Would taking out one fan from in between the coolers be any help, you think? Maybe get some more fresh air over the second tower?


2 hours ago, RollinLower said:

Would taking out 1 fan from in between the coolers be any help you think?

I really don't know. My thoughts are it wouldn't make things better because moving hot air is better than moving no air. Suddenly the one fan closest to the Rear I/O would be under I think a ~25% increased load but I guess we can test your theory for shits'n'giggles. I want to give this new platform at least a solid week of testing before I go pulling the guts from my primary hypervisor server. I can't have this system going offline on me.

 

After about 2 hours I think we're safely at equilibrium, and the results are in:

 

Screenshot from 2021-12-22 16-32-08.png

 

This is a BOINC load across 84/88T. We're only turboing up to 2.7GHz but this isn't the most high-end model so meh...we're also pulling right around 400W which is surprising. I don't know why I expected higher.
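For anyone who wants to keep an eye on a burn-in like this from the shell, something along these lines works. A sketch only; it assumes the lm-sensors package, which isn't part of a stock Proxmox install.

# One-time setup: install lm-sensors and let it load coretemp for the Xeon package sensors
apt install lm-sensors
sensors-detect --auto

# Refresh both CPU package temperatures every 5 seconds while BOINC runs
watch -n 5 sensors

# Spot-check how far the cores are actually boosting under the all-core load
grep 'cpu MHz' /proc/cpuinfo | sort -t: -k2 -n | tail -n 3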

 

From here we can try RollinLower's idea and see if it makes things better or worse for CPU2.


I've initiated our little experiment. Now it's just a matter of waiting for it to hit equilibrium. I can say the room temperature has dropped one or two degrees, so the results won't be perfectly comparable.

 

P1000287.JPG

 

While we wait I have another thing for us to look at. Something I found strange is that this motherboard only has 7 fan headers but the IPMI & BIOS report that there are a lot more.

 

Screenshot from 2021-12-22 20-43-48.png

 

_2, _2, _2, _2. What are these? Where are the headers?

 

In case anybody noticed, this motherboard (and another ASRock Rack board I've worked with before) uses these strange 6-pin power connectors for all of the fans.

 

13-140-022-V02.jpg

 

They're standard 3/4-pin compatible but have those extra two pins. Well, I wanted to know what they did, so as any other sane person would do in this situation I broke out my multimeter and started poking the board with little metal sticks :old-tongue:. What I deduced is that these pins act as an additional GND & TACH for a redundant fan, meaning they're designed for some non-standard fan module, probably only found in ASRock Rack servers.

 

It sounds like a good idea to be able to monitor both fan TACHs, but there's no readily available fan splitter adapter made for these, so I guess we'll just have to make a few ourselves. These fan connectors use standard 2.54mm-pitch pins, the same as what you'd use for your front panel headers. Meaning if I spliced a front-panel header cable into a 4-pin female connector head, I should be able to read the TACH on the second fan while it and the other fan get GND/12V/PWM off the other leads.

 

It'll all make more sense once I actually build it and have a picture for you.

 

Now that a bunch of time has passed it looks like our little fan experiment made things...marginally worse. 😆

 

Screenshot from 2021-12-22 21-05-20.png

 

So we had a 2~3°C increase. Looks like we're going with the Noctua Centipede.


Heh Heh...

 

Screenshot from 2021-12-23 13-53-02.png

 

So, as mentioned before, there is no second physical fan header for two fans per CPU, but the IPMI reports one. Here's how I accessed it.

 

If you've ever paid attention to something like a Noctua Y-fan splitter cable, you know how it only provides 4 wires to one fan and 3 to the other? The missing leg on the second fan is TACH, otherwise known as what reports the fan RPM. So I modified three Y-splitters to give the second fan a TACH pin.

 

P1000288.JPG

 

From here I just had to identify the physical pins that CPU1_FAN1_2 & CPU2_FAN1_2 were reporting from.

 

P1000290.JPG

 

And voilà, redundant CPU fan RPM reporting working in the BIOS & IPMI. I made three because the chassis also has two rear fans, but again, only one physical connector for controlling both.
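If you'd rather read those fan tachs from the OS instead of the WebUI, ipmitool can query the BMC in-band. A sketch; the exact sensor name (CPU1_FAN1_2) is taken from the IPMI screenshot above and may differ on other boards.

# ipmitool isn't installed by default on Proxmox; the two modules enable in-band BMC access
apt install ipmitool
modprobe ipmi_devintf
modprobe ipmi_si

# Dump every fan sensor the BMC exposes, RPM included
ipmitool sdr type Fan

# Or query one sensor directly
ipmitool sensor get CPU1_FAN1_2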

 

The next thing I think we're going to investigate (though due to the holidays I may not get around to it today) is whether or not I can use my SFF-8087 breakout cables with the onboard SATA ports in order to drive the 3.5" bays on the front of the chassis. 7 PCI_e slots are great and all, but believe me when I say you can burn through them very quickly when hosting a hypervisor.


Oh, finally tracked this down. I didn't spot anything making it obvious which PCI_e slots went to which CPU but the manual has a full Block Diagram.

 

Screenshot from 2021-12-23 14-25-54.png

 

This is important in high-bandwidth & low-latency applications because the UPI links between the CPUs only have so much bandwidth to go around and introduce additional delay. If I have something that is latency-sensitive or requires very high data transfer rates, this lets me know to favor PCI_e slots 2, 5, & 7.

 

Slots 1, 3, 4, & 6 will still see plenty of use but I won't use them for the components that need the fastest speeds & access times like NVMe storage and SAN devices.
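The block diagram can also be cross-checked from the OS once it's up, since Linux exposes which socket (NUMA node) each PCI_e device hangs off. A small sketch; the PCI address is a placeholder.

# 0 or 1 = attached to that CPU's root ports; -1 usually means no locality
# info was reported (typically PCH-attached devices)
cat /sys/bus/pci/devices/0000:3b:00.0/numa_node

# Or view the whole CPU + PCI topology at once (hwloc package)
apt install hwloc
lstopo-no-graphics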


3 hours ago, RollinLower said:

I never realized BMCs were running DDR4 as well! That's pretty sick lol.

...huh...I didn't even notice that detail. I was distracted by the fact that, with a combined 96 available PCI_e lanes, it says only a PCI_e 3.0 x1 link goes to the PCH. The fuck? I had to research what DMI3 is, and I hope that's what's actually in play, because that would provide the equivalent of PCI_e 3.0 x4. Future expansion plans include a SATA SSD array and I'll need 2GB/s of bandwidth to the system on top of whatever else goes through the PCH, like one of the two boot SSDs.

 

A little ramble on my take of the IPMI overall design:


On a good note, the WebUI for the BMC is quite friendly and rather intuitive. The menus are pretty easy to navigate. I will say though, there looks to be a distinct lack of live sensor monitoring options. At least as far as I can see so far, there's no option to watch values change over time without just refreshing the page.

 

Additionally, although online documentation shows a menu for controlling fan profiles from the IPMI WebUI, at the exact location where that menu is supposed to exist, it does not. So either it's not a feature of this motherboard, or something went wrong and ASRock Rack opted to disable it rather than fix it. This means you figure out your fan profiles in the BIOS and hope you never want/need to change them.

 

One nice thing I can say is they have an option for Console Redirection that doesn't rely on Java. It just runs in the browser which I like. Honestly I'd like to see more IPMI go this direction. Having to deal with Java updates bricking Console Redirection compatibility is seriously bullshit.

 

4 hours ago, RollinLower said:

also those modded fan cables look pretty slick, nice job on those.

I've been doing this for enough years to where I'm not afraid to cob-mod adapters of my own creation to meet an objective. Would have been nice if there was some way ASRock Rack could have designed this connector to accept two standard 4-pins instead of a weird 6-pin so people like us have these options. Or at the very least include adapters with the motherboard to convert it to dual 4-pin. A lot of people like me will want redundant fans on the heatsinks and as these connectors are you can't monitor the 2nd fan RPM without cobbing your own cable. 😕


Late Saturday night update. I wanted to test if the SFF-8087 breakout cables I had on hand could be used to drive disks from the chassis's front backplane.

 

P1000293.JPG

 

It had been brought to my attention quite some time ago that there are two different kinds of breakout cables: Forward & Reverse. Apparently the pin-out changes based on which direction the cable is connected, and for this reason the Reverse cable exists for when you want to connect an SFF-8087/8643 backplane to individual SAS/SATA ports on a motherboard. Normally you'd have it set up in the opposite manner.

 

What I didn't know was whether this only applied to backplanes with SAS expanders built in, or if direct 1:1 backplanes were affected the same way. As it turns out, they are. Connecting a drive the way I did in the photo above, nothing showed up on the console.

 

Using a normal SATA cable though:

 

P1000294.JPG

 

Fine'n'dandy. So I got a pair of 0.5 meter SFF-8087 Reverse Breakout cables put on order. This will save me a precious PCI_e slot.

 

So far all sensors are continuing to report back as expected. Both CPUs are staying under 60°C, which is substantial headroom for when it actually goes in the chassis. So tomorrow, when I find the time, I think we'll test another critical feature I want to see working: virtualization hardware pass-through. This is the process wherein the host kernel lets go of a hardware device and allows the guest full control/access over it, treating it as if it were directly connected. Very cool/fun/important VM feature. Hopefully I can get it working, because things like BIOS updates have been known to break this feature. The system is already on the latest, so hopefully it works.

 

Til then! :old-grin:


Frustrating. Frustrating, frustrating, frustrating.

 

Screenshot from 2021-12-27 20-15-18.png

 

I ran into the exact same problem on the current hypervisor server. I'm going to have to dig into this because I don't have this issue on one of my AMD EPYC servers running the same OS.

 

VT-d is enabled and supported by the processor.

GRUB has been edited to enable IOMMU.

The PROXMOX equivalent of update-grub has been run.
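For reference, the usual checklist on an Intel host looks something like this. A sketch only; whether it's /etc/default/grub plus update-grub or proxmox-boot-tool depends on how the host boots, and the exact flags used here may differ from mine.

# /etc/default/grub -- add the IOMMU flags to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# Apply the change and reboot
update-grub                    # or: proxmox-boot-tool refresh
reboot

# After the reboot, confirm the IOMMU actually came up and groups were created
dmesg | grep -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/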

 

The issue persists... their documentation is useless. I guess I have to hound the forums until I find someone having similar problems.


Alright. It's working. That was a PITA.

 

So I'll be passing through three/four devices for testing/demonstration purposes, starting with a GPU, an HBA, a 10Gig network adapter, and a Western Digital 2TB HDD.

 

P1000295.JPG

 

Unfortunately I could not get this GPU working so I had to use an old AMD Radeon HD 7970 I had laying around.

 

First and foremost, I have to block the host from loading drivers onto these PCI_e devices. There are multiple ways of doing this. I opted to block the device driver based on device address by writing a short script that basically overrides the default driver for the device in a given slot at startup:

#!/bin/sh
PREREQS=""
# PCI addresses of the devices to hand over to VMs (GPU, HBA, 10Gig NIC)
DEVS="0000:18:00.0 0000:3b:00.0 0000:3b:00.1 0000:5e:00.0"
# Tell the kernel to bind each device to vfio-pci instead of its normal driver
for DEV in $DEVS; do
  echo "vfio-pci" > "/sys/bus/pci/devices/$DEV/driver_override"
done

# Load the stub driver so it can claim the overridden devices
modprobe -i vfio-pci

vfio-pci is the virtualization driver used by QEMU/KVM for hardware device pass-through.
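For completeness, here's how a script like that would typically be wired in, assuming it's meant to run as an initramfs-tools hook (the PREREQS variable suggests that style; the filename is just a placeholder and this may not match exactly how I set it up):

# Hypothetical install location for the override script
cp vfio-override.sh /etc/initramfs-tools/scripts/init-top/vfio-override
chmod +x /etc/initramfs-tools/scripts/init-top/vfio-override

# Rebuild the initramfs so the hook runs on the next boot
update-initramfs -u -k all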

 

With this done we just need to add the devices to our virtual machine:

 

Screenshot from 2021-12-28 21-28-07.png
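That screenshot is the WebUI route; the same mapping can also be done from the Proxmox shell with qm. A sketch, where the VM ID (100) is a placeholder, the addresses come from the script above, and which address belongs to which device is my guess.

# Attach the device at 3b:00 (all functions, e.g. a GPU plus its audio function),
# the HBA and the 10Gig NIC to VM 100. pcie=1 assumes a q35 machine type.
qm set 100 --hostpci0 0000:3b:00,pcie=1
qm set 100 --hostpci1 0000:18:00,pcie=1
qm set 100 --hostpci2 0000:5e:00,pcie=1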

 

Now, in the virtual machine, we can see from Device Manager that Windows sees all of these devices as if they were directly connected, and this offers a lot of expanded features & functionality over leaving everything up to the hypervisor.

 

device-manager.PNG

 

In addition to this the physical GPU can be used as an active display for the VM if desired.

 

As a demonstration here is the motherboard display output of the PROXMOX server:

 

P1000298.JPG

 

Again, display output on the motherboard.

 

Now, if we check the output of the GPU that you can just see in the background, we get:

 

P1000299.JPG

 

:old-grin: I didn't pass through any USB controller so we have no real keyboard/mouse interface, but you could treat this like a desktop if you wanted.

 

From here we're aware now that we can:

  1. Provide a VM with GPU Acceleration.
  2. Provide a VM an HBA so you can directly connect & hot plug HDD's & SSD's to the active VM.
  3. Provide a VM high-speed/low latency networking. This could even be taken one step further by using a NIC that supports RDMA.

I think we'll do some network and system performance testing then call the hardware good & ready for swap-out.
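For the network half of that testing, a quick throughput sanity check of the passed-through 10Gig NIC might look like this. A sketch; it assumes iperf3 on both ends, and the hostname is a placeholder.

# On another 10Gig machine on the LAN
iperf3 -s

# Inside the VM with the passed-through NIC; add -R to test the other direction
iperf3 -c 10gig-host.lan -t 30 -P 4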


Finally some server builds 🤌🏼🤌🏼 😍😍

CPU: AMD Ryzen 5 5600X | CPU Cooler: Stock AMD Cooler | Motherboard: Asus ROG STRIX B550-F GAMING (WI-FI) | RAM: Corsair Vengeance LPX 16 GB (2 x 8 GB) DDR4-3000 CL16 | GPU: Nvidia GTX 1060 6GB Zotac Mini | Case: K280 Case | PSU: Cooler Master B600 Power supply | SSD: 1TB  | HDDs: 1x 250GB & 1x 1TB WD Blue | Monitors: 24" Acer S240HLBID + 24" Samsung  | OS: Win 10 Pro

 

Audio: Behringer Q802USB Xenyx 8 Input Mixer |  U-PHORIA UMC204HD | Behringer XM8500 Dynamic Cardioid Vocal Microphone | Sound Blaster Audigy Fx PCI-E card.

 

Home Lab:  Lenovo ThinkCenter M82 ESXi 6.7 | Lenovo M93 Tiny Exchange 2019 | TP-LINK TL-SG1024D 24-Port Gigabit | Cisco ASA 5506 firewall  | Cisco Catalyst 3750 Gigabit Switch | Cisco 2960C-LL | HP MicroServer G8 NAS | Custom built SCCM Server.

 

 


10 hours ago, Sir Asvald said:

Finally some server builds 🤌🏼🤌🏼 😍😍

I wish the forum had a larger demographic for them but understandably it's not the target audience of the company.

 

I have a couple unique server build logs that will be coming down the road. When they're ready I'll let you know about them if you want.


19 minutes ago, Windows7ge said:

I wish the forum had a larger demographic for them but understandably it's not the target audience of the company.

 

I have a couple unique server build logs that will be coming down the road. When they're ready I'll let you know about them if you want.

Yes, I'd like to know about your other servers. 

 

 


 

 

