VMware HPE Build with MSA 2062 Need help
5 hours ago, Bear_99 said: "I will most likely be buying new switches as well. I saw these two in my build template that was given to me. So I figured since I need iLO I can get one 10Gb switch and one larger 1Gb switch to support iLO plus some redundancy between two NICs."
- 1x Broadcom BCM57412 Ethernet 10Gb 2-port SFP+ OCP3 Adapter for HPE
- 1x Broadcom BCM5719 Ethernet 1Gb 4-port BASE-T OCP3 Adapter for HPE
Something to be aware of with NIC teaming and multiple paths for general network connectivity: end to end you can only utilize a single path (there are caveats and exceptions, but it is best to think of it this way). That means if you add 2x or 4x 1Gb connections into the mix, you must make sure in your vSwitch configuration that only the single 10/25Gb connection is set as active and the 1Gb connections as standby, otherwise you'll end up fighting random performance problems when traffic flows across the 1Gb paths when you don't actually want it to.
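The active/standby split above can be set on a standard vSwitch with esxcli. A minimal sketch, assuming vSwitch0 carries the traffic, vmnic0 is the 10/25Gb port, and vmnic2/vmnic3 are the 1Gb ports (all names here are hypothetical, adjust to your hosts):

```shell
# Pin the fast uplink as active and relegate the 1Gb ports to standby
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0 \
    --standby-uplinks=vmnic2,vmnic3

# Verify the resulting failover order
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```

The same active/standby ordering can also be set per port group in the vSphere Client under Teaming and Failover, which overrides the vSwitch default if configured.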
Think of your 1Gb connections only as a worst-case redundancy option to avoid outages; while they are actively carrying traffic because the 10/25Gb link is down, you will have degraded performance. The main benefit here is being able to do maintenance and firmware/software upgrades on the switches without outages, which is not an insignificant benefit.
Overall, be cautious about how useful you think the 1Gb connections are going to be. Even if teamed, they will still realistically only offer 1Gb of actual usable bandwidth per traffic flow rather than a multiple of the number connected.
Pre-comments/Recommendations
Have a think about future cluster options and how each choice might come into play. This cluster could, for example, be used as a staging ground to replace your existing main cluster, and that could affect whether you go with SAS or iSCSI/NFS, as well as a single-socket or dual-socket server platform. If it makes sense to use this cluster to replace the existing one and it will be expanded out, you could go with dual-socket servers populated with a single CPU, then later add the second CPU, RAM, etc. The SAS option without a SAS switch (yes, these did exist but are old hat: https://docs.broadcom.com/doc/12348581) will limit you to 4 hosts.
I don't know if expanding this cluster at all makes sense for your environment, but if it does, that may better justify the expenditure of dual 25Gb switches to give a better long-term foundation.
While you are looking at this for your current use case, it might be worth considering how it could factor into a wider, longer-term picture. It's also good to draw a block diagram of each option on a whiteboard to visualize it; whichever one makes the most sense is typically the best solution, and it's also easier to see the differences between each.
P.S. All part options below are valid possible options for each server and are listed in their spec sheets. Caution is advised, however, in that I have not checked parts compatibility and usage requirements as closely as I would if I were purchasing myself. Check with HPE or your MSP that these selections are valid configurations; I did not check closely for riser requirements, cabling requirements, and the like.
Below are SAS-connected options to your desired SAN.
Configuration 1
HPE DL320 Gen11 8 SFF
- 1x Intel Xeon 6530 (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x Smart Array E208e-p SR Gen10 Controller (supported, see link below)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
https://www.hpe.com/psnow/doc/a50004315enw.pdf?jumpid=in_pdp-psnow-qs
Benefits:
- Less rack/cabinet RU used
- Single socket with more cores best satisfies VM 3 requirements
- No inter-socket data communication
Cons:
- No NIC or HBA redundancy (minor)
With this server configuration you would connect to the MSA SAN using 12Gb SAS dual path with 1 port of the HBA going to 1 port on each of the MSA controller cards.
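After cabling each HBA port to one port on each MSA controller, you can sanity-check that both paths are visible from the ESXi host. A quick diagnostic sketch (these are standard esxcli storage commands, run on the host itself):

```shell
# List every storage path; each MSA LUN should show two paths,
# one through each MSA controller
esxcli storage core path list

# Show the NMP multipath policy and path count per device
esxcli storage nmp device list
```

If a LUN only shows a single path, check the SAS cabling and that both MSA controllers present the volume before putting workloads on the datastore.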
Configuration 2
HPE DL325 Gen 11 8 SFF
- 1x EPYC 9354P (Select High Perf Heatsink/Fan Kit)
- 12x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x Smart Array E208e-p SR Gen10 Controller (supported, see link below)
- 1x MR416i-p PCIe or MR416i-o OCP (different cable kit required for each option)
- 1x or 2x MCX631102AS OCP or BCM57414 OCP
- 1x Potential Half Height, Half Length PCIe slot free (dependent on MR416i-o option being used)
https://www.hpe.com/psnow/doc/a50004297enw
Benefits:
- Less rack/cabinet RU used
- Single socket with more cores best satisfies VM 3 requirements
- No inter-socket data communication
- Platform supports an additional OCP compared to Intel
Cons:
- No HBA redundancy (minor)
- Second PCIe slot when using NS204i-u restricted to Half Height
For the above you can go with two OCP network cards for card-failure redundancy; however, doing that requires using the PCIe MR416i-p variant (unimportant other than configuration matrix requirements). Alternatively, you could choose one OCP NIC and the OCP MR416i-o, and then choose two E208e-p SAS HBA cards for redundancy there, utilizing a single port per SAS card.
I will not list DL360/365 Gen 11 options as there is no effective difference from the DL320/325 other than the DL360 supporting two OCP slots (2nd CPU required), with the same restrictions around those as the DL325 Gen11 above. The only benefit with either of these platforms is dual socket, going with two 16-core or 24-core CPUs; 24-core CPUs are more ideal for VM 3 but consume more VMware licenses. For 24-core choices, either Intel Xeon 6542Y or AMD EPYC 9254 (maybe 9274F, but $$).
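On the licensing point: VMware's per-core licensing has a 16-core minimum per CPU, so a 24-core part consumes 24 licenses per socket while anything at or under 16 cores consumes 16. A tiny sketch of the arithmetic (the function name is just for illustration):

```shell
# Licenses for one host = sockets * max(cores_per_socket, 16)
core_licenses() {
    local sockets=$1 cores=$2 minimum=16
    if [ "$cores" -lt "$minimum" ]; then cores=$minimum; fi
    echo $(( sockets * cores ))
}

core_licenses 1 32   # single 32-core CPU -> 32
core_licenses 2 24   # dual 24-core -> 48
core_licenses 2 8    # dual 8-core still pays the 16-core minimum -> 32
```

This is why the dual 24-core options below cost more in licensing than the single-socket 32-core configs, even at similar total core counts.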
Configuration 3
HPE DL380 Gen 11 8SFF
- 1x Intel Xeon 6530 (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x Smart Array E208e-p SR Gen10 Controller (supported, see link below)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x PCIe slot usable and free
https://www.hpe.com/psnow/doc/a50004307enw.pdf?jumpid=in_pdp-psnow-qs
Benefits:
- Can be expanded to an additional second CPU with supporting riser and OCP slot expansions
- More PCIe slots and expansion capabilities, same as point 1 but more about total PCIe slots given 2U chassis
- Quieter operation, 2U
Cons:
- Requires more rack/cabinet RU space
- Can be more expensive platform being 2U and dual socket, HPE discount heavily dependent
Configuration 4
HPE DL385 Gen 11 8SFF
- 1x AMD EPYC 9354P (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x Smart Array E208e-p SR Gen10 Controller (supported, see link below)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x PCIe slot usable and free
https://www.hpe.com/psnow/doc/a50004300enw.pdf
Benefits:
- Can be expanded to an additional second CPU with supporting riser and OCP slot expansions
- More PCIe slots and expansion capabilities, same as point 1 but more about total PCIe slots given 2U chassis
- Quieter operation, 2U
Cons:
- Requires more rack/cabinet RU space
- Can be more expensive platform being 2U and dual socket, HPE discount heavily dependent
Configuration 5
HPE DL380 Gen 11 8SFF
- 2x Intel Xeon 6526Y or 6542Y (Select High Perf Heatsink/Fan Kit)
- 16x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x Smart Array E208e-p SR Gen10 Controller (supported, see link below)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x PCIe slot usable and free
https://www.hpe.com/psnow/doc/a50004307enw.pdf?jumpid=in_pdp-psnow-qs
Benefits:
- Can be expanded to an additional second CPU with supporting riser and OCP slot expansions
- More PCIe slots and expansion capabilities, same as point 1 but more about total PCIe slots given 2U chassis
- Quieter operation, 2U
Cons:
- Requires more rack/cabinet RU space
- Can be more expensive platform being 2U and dual socket, HPE discount heavily dependent
Configuration 6
HPE DL385 Gen 11 8SFF
- 2x AMD EPYC 9124 (or 9174F) or 9254 (or 9274F) (Select High Perf Heatsink/Fan Kit)
- 24x 16GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x Smart Array E208e-p SR Gen10 Controller (supported, see link below)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x PCIe slot usable and free
https://www.hpe.com/psnow/doc/a50004300enw.pdf
Benefits:
- Can be expanded to an additional second CPU with supporting riser and OCP slot expansions
- More PCIe slots and expansion capabilities, same as point 1 but more about total PCIe slots given 2U chassis
- Quieter operation, 2U
Cons:
- Requires more rack/cabinet RU space
- Can be more expensive platform being 2U and dual socket, HPE discount heavily dependent
I decided against showing you Gen 10 Plus v2 options since the CPUs themselves are past the 3-year mark and, while totally fine performance-wise, may not be the best choice. Everything for Gen 10 Plus v2 is basically identical to the above, with the exception of the NS204 boot device being a PCIe card that consumes a slot. If you want to consider CPU options for these: Intel Xeon 6346 (16 cores) or 6342 (24 cores), AMD EPYC 7343/73F3 (16 cores) or 7443/74F3 (24 cores), with all comments about 24 cores and the VM 3 configuration requirement applicable.
Below are iSCSI- or NFS-connected options to your desired SAN (I don't think the MSA supports NFS, but this is an option for a different storage solution; we use NFS with NetApp).
Benefits/cons for each will be isolated to iSCSI/NFS vs SAS; see above for more general platform comments. Overall, going with iSCSI/NFS may allow you more cluster expandability and flexibility, and the reduction in PCIe hardware for some options may bring 25Gb switches within budget. The difference here is the possibility to grow past 4 servers more easily, to better justify dual 25Gb switches, and potentially to share the general network and storage network across the same physical connections (or keep them separated). I'm offering it as an alternative, not necessarily any better or worse than going SAS.
If you stick with 10Gb switches you could still safely achieve that with 4 connections per server: 2 for the general network and 2 for storage.
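If you go the iSCSI route, the storage connections would typically be bound to the software iSCSI adapter on each host. A minimal esxcli sketch, assuming vmk1/vmk2 are your storage VMkernel ports, vmhba64 is the software iSCSI adapter, and 192.168.50.10 is a placeholder MSA portal address (all hypothetical names, check your own hosts):

```shell
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Bind both storage VMkernel ports to the adapter for multipathing
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Point dynamic discovery at the MSA's iSCSI portal, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.50.10
esxcli storage core adapter rescan --adapter=vmhba64
```

Port binding like this gives you two independent storage paths per host, which pairs with keeping the storage VMkernel ports on separate physical uplinks.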
Configuration 1
HPE DL320 Gen11 8 SFF
- 1x Intel Xeon 6530 (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
https://www.hpe.com/psnow/doc/a50004315enw.pdf?jumpid=in_pdp-psnow-qs
Benefits:
- Minor cost reduction removing SAS card
- Reduced cabling and potentially switching requirements
Cons:
- No NIC redundancy (minor)
- Sharing storage and general network bandwidth across same physical connections
Configuration 2
HPE DL320 Gen11 8 SFF
- 1x Intel Xeon 6530 (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x MCX631432AS or BCM57414 PCIe (these are 10/25 Gb and I recommend that over only 10Gb)
https://www.hpe.com/psnow/doc/a50004315enw.pdf?jumpid=in_pdp-psnow-qs
Benefits:
- NIC redundancy
- 10Gb networking more viable with 4 connections active per server
Cons:
- Slight cost increase over configuration 1, extra NIC
Configuration 3
HPE DL325 Gen 11 8 SFF
- 1x EPYC 9354P (Select High Perf Heatsink/Fan Kit)
- 12x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe or MR416i-o OCP (different cable kit required for each option)
- 1x or 2x MCX631102AS OCP or BCM57414 OCP
https://www.hpe.com/psnow/doc/a50004297enw
Benefits:
- Minor cost reduction removing SAS card
- Reduced cabling and potentially switching requirements
Cons:
- No NIC redundancy (minor)
- Sharing storage and general network bandwidth across same physical connections
Configuration 4
HPE DL325 Gen 11 8 SFF
- 1x EPYC 9354P (Select High Perf Heatsink/Fan Kit)
- 12x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe or MR416i-o OCP (different cable kit required for each option)
- 2x MCX631102AS OCP or BCM57414 OCP
https://www.hpe.com/psnow/doc/a50004297enw
Benefits:
- NIC redundancy
- 10Gb networking more viable with 4 connections active per server
Cons:
- Slight cost increase over configuration 3, extra NIC
Configuration 5
HPE DL380 Gen 11 8SFF
- 1x Intel Xeon 6530 (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x PCIe slot usable and free
https://www.hpe.com/psnow/doc/a50004307enw.pdf?jumpid=in_pdp-psnow-qs
I'll save repeating myself on benefits/cons here; I'm sure you can assess that.
Configuration 6
HPE DL380 Gen 11 8SFF
- 1x Intel Xeon 6530 (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x MCX631432AS or BCM57414 PCIe (these are 10/25 Gb and I recommend that over only 10Gb)
https://www.hpe.com/psnow/doc/a50004307enw.pdf?jumpid=in_pdp-psnow-qs
I'll save repeating myself on benefits/cons here; I'm sure you can assess that.
Configuration 7
HPE DL385 Gen 11 8SFF
- 1x AMD EPYC 9354P (Select High Perf Heatsink/Fan Kit)
- 8x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe
- 1x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x MCX631432AS or BCM57414 PCIe (these are 10/25 Gb and I recommend that over only 10Gb)
https://www.hpe.com/psnow/doc/a50004300enw.pdf
I'll save repeating myself on benefits/cons here; I'm sure you can assess that.
Configuration 8
HPE DL380 Gen 11 8SFF
- 2x Intel Xeon 6526Y or 6542Y (Select High Perf Heatsink/Fan Kit)
- 16x 32GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe
- 2x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
- 1x PCIe slot usable and free
https://www.hpe.com/psnow/doc/a50004307enw.pdf?jumpid=in_pdp-psnow-qs
I'll save repeating myself on benefits/cons here; I'm sure you can assess that.
Configuration 9
HPE DL385 Gen 11 8SFF
- 2x AMD EPYC 9124 (or 9174F) or 9254 (or 9274F) (Select High Perf Heatsink/Fan Kit)
- 24x 16GB RAM
- 1x NS204i-u (has its own dedicated slot in this server, does not consume a PCIe slot)
- 1x MR416i-p PCIe
- 2x MCX631432AS or BCM57414 OCP (these are 10/25 Gb and I recommend that over only 10Gb)
https://www.hpe.com/psnow/doc/a50004300enw.pdf
I'll save repeating myself on benefits/cons here; I'm sure you can assess that.