Recommended BIOS Settings on HP ProLiant DL580 G7 for VMware vSphere
The HP ProLiant DL580 G7 has several important BIOS settings that need to be configured for VMware vSphere.
The default options are shown in italics, while the non-default (recommended) options are shown in bold.
Options that are not relevant are left out.
You might need to change options for a specific configuration or specific needs. This list is only a general guideline, with most settings tuned for vSphere.
Option | Value | Description |
System Options | ||
–Serial Port Options | ||
—-Embedded Serial Port | COM 1; IRQ4; IO: 3F8h-3FFh | The onboard Serial Port is left enabled in case Serial-Line logging at the server is needed. |
—-Virtual Serial Port | COM 2; IRQ3; IO: 2F8h-2FFh | The Virtual Serial Port is needed to do Serial-Line logging through iLO 3. |
–Embedded NICs | ||
—-Embedded NIC Boot Options | Disabled | If you don’t PXE boot the server, set it to disabled. Otherwise, leave it enabled. Setting it to disabled saves you 2 seconds when booting the server 🙂 |
–Advanced Memory Protection | Advanced ECC Support | Depending on your needs, you might want to select improved memory protection. |
–USB Options | ||
—-USB Control | USB Enabled | Allows the use of Keyboard and Mouse during vSphere setup. |
—-USB 2.0 Controller | Enabled | Allows USB 2.0 high speed transfers. |
—-Removable Flash Media Boot Sequence | External DriveKeys First | Allows the host to boot from external USB keys (and iLO). Necessary for firmware updates, … |
–Processor Options | ||
—-No-Execute Memory Protection | Enabled | Needed for vSphere |
—-Intel® Virtualization Technology | Enabled | Needed for vSphere |
—-Intel® Hyperthreading Options | Enabled | Divides each core into 2 logical CPUs, giving the vSphere CPU scheduler more options. |
—-Processor Core Disable | All Cores Enabled | No reason to disable cores. |
—-Intel® Turbo Boost Technology | Enabled | When not all cores are busy, ESX parks the idle cores and the remaining active cores are clocked higher, resulting in faster VMs. |
—-Intel® VT-d | Enabled | Needed for vSphere (VMDirectIO, …) |
–NUMLOCK Power-On State | Disabled | In most server rooms or when using a laptop to access the server console, a numeric keypad is not available. |
Power Management Options | ||
–HP Power Profile | Custom | Allows custom power settings specific to vSphere. |
–HP Power Regulator | OS Control Mode | Hands over the Power Management to vSphere. The other options give this control to the server itself. |
–Redundant Power Supply Mode | High Efficiency Mode (Auto) | By default (Balanced Mode), the server uses all installed PSUs. This might look like the most efficient use, but the more power is drawn from a PSU, the more efficiently it operates; the less power you draw from a PSU, the more is lost just keeping the PSU running. Thus, it is best to use the minimum number of PSUs so they deliver the highest possible output. The remaining PSUs are placed in standby. This setting does not affect redundancy, as the standby PSUs jump in as soon as an active one fails. By using the ‘Auto’ mode, the active PSUs are chosen based on the server’s serial number (odd or even number = odd or even PSU numbers). This makes sure that all power circuits in the racks are evenly used. |
–Advanced Power Management Options | ||
—-Minimum Processor Idle Power State | C3 State | Needed for vSphere Dynamic Voltage and Frequency Scaling (DVFS). Allows vSphere to halt unneeded cores. |
—-Maximum Memory Bus Frequency | Auto | Memory only runs at 1 speed in these servers -> 1066 MHz |
—-PCI Express Generation 2.0 Support | Auto | Server will detect PCIe Generation itself. Forcing it to PCIe 2.0 will make all PCIe 1.0 cards unusable. |
—-Dynamic Power Savings Mode Response | Fast | Switch faster between processor states. |
—-Collaborative Power Control | Enabled | Allows vSphere to control the PCC Interface |
—-DIMM Idle Power Saving Mode | Enabled | DIMMs can put themselves into Low Power mode when not used. This will save some power if not all memory is used on the host. |
Server Availability | ||
–ASR Status | Disabled | ASR monitors an agent running in the Service Console. When this does not respond within 10 minutes, the host is rebooted. However, if the agent fails or the Service Console becomes sluggish (even though the VMs are perfectly fine), ASR will detect this as a system hang and reboot the server. Furthermore, in case of a PSOD, ASR will reboot the server as well. This reboot might cause the loss of some log files. |
–ASR Timeout | 10 Minutes | This has no effect since ASR is disabled. |
–Thermal Shutdown | Enabled | To protect your server, it will be shut down in case it gets too hot. |
–Wake-On LAN | Enabled | vSphere DPM uses Wake-On LAN to power on servers (it uses iLO first, but falls back on Wake-On LAN) |
–POST F1 Prompt | Disabled | The system continues booting without waiting for an F1 keypress when a component fails. |
–Power Button | Enabled | Power Button behaves like it should |
–Automatic Power-On | Disabled | If set to enabled, the server powers on as soon as AC power is available. When set to disabled, the server returns to its previous power state when AC power is restored. |
–Power-On Delay | No Delay | When AC power is restored, all systems come online at the same time, causing a power spike. If the power system is unable to handle this, a (random) delay solves that problem. |
BIOS Serial Console & EMS | ||
–BIOS Serial Console Port | Auto | |
–BIOS Serial Console Baud Rate | 9600 | |
–EMS Console | Disabled | |
–BIOS Interface Mode | Auto | |
Advanced Options | ||
–Advanced System ROM Options | ||
—-Option ROM Loading Sequence | Load Embedded Devices First | Embedded devices should be loaded first so PXE boot from onboard NICs is always possible. |
—-MPS Table Mode | Full Table APIC | vSphere needs this set to Full Table APIC |
—-ROM Selection | Use Current ROM | Backup ROM is only needed when a firmware flash was unsuccessful. |
—-NMI Debug Button | Enabled | Can be used to generate an NMI via the button on the system board. |
—-Virtual Install Disk | Disabled | The Virtual Install Disk only contains drivers for Microsoft Windows operating systems. |
—-PCI Bus Padding Options | Enabled | Disabling this option is only necessary for certain older expansion cards. |
—-Power-On Logo | Enabled | Disabling it does not improve boot times. |
–Video Options | Optional Video Primary, Embedded Video Disabled | Default setting. |
–Power Supply Requirements Override | Default Power Supply Requirements | PSU requirements will be calculated depending on server power requirements. |
–Thermal Configuration | Optimal Cooling | Fans run at the minimum speed required for adequate cooling. This saves some power (and reduces noise) since they don’t run at full speed. |
–Advanced Performance Tuning Options | ||
—-HW Prefetcher | Enabled | In previous CPU generations, disabling this option gave better performance. With the Nehalem architecture, it does provide benefits (better caching). |
—-Adjacent Sector Prefetch | Enabled | Similar to HW Prefetcher. |
—-Hemisphere Mode | Auto | Hemisphere will be enabled if your memory configuration allows it (see HP QuickSpecs for optimal Hemisphere modes) |
—-Node Interleaving | Disabled | Since vSphere utilizes NUMA nodes, there is no reason to disable NUMA (i.e. to enable Node Interleaving). |
–Drive Write Cache | Disabled | Only the DVD-ROM drive is attached to the onboard SATA controller. Writes are not possible to this device thus it can be left disabled. This setting has NO effect on the Smart Array Controller settings. |
–Asset Tag Protection | Unlocked |
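The reasoning behind the ‘Redundant Power Supply Mode’ entry above can be sketched with a toy model. Note that the efficiency curve below (30 W fixed overhead per active PSU, 4% load-proportional losses) is a made-up illustration of the general shape of PSU efficiency, not HP’s published efficiency data:

```python
# Toy model of the 'High Efficiency Mode' argument: a PSU wastes a fixed
# amount of power just by being active, so concentrating the load on fewer
# PSUs raises their load fraction and thus their efficiency.
# NOTE: the overhead (30 W) and loss factor (4%) are hypothetical numbers
# for illustration only, not HP's published efficiency data.

PSU_CAPACITY_W = 1200  # the DL580 G7 builds in this thread use 1200 W supplies

def efficiency(load_fraction):
    """Fraction of AC input delivered as DC output at a given load fraction."""
    output_w = load_fraction * PSU_CAPACITY_W
    overhead_w = 30             # fixed cost of keeping the PSU active
    losses_w = 0.04 * output_w  # load-proportional conversion losses
    return output_w / (output_w + overhead_w + losses_w)

def wall_power(total_load_w, active_psus):
    """AC power drawn when the load is split evenly over the active PSUs."""
    per_psu_fraction = (total_load_w / active_psus) / PSU_CAPACITY_W
    return total_load_w / efficiency(per_psu_fraction)

load_w = 1200  # DC load on the server
balanced = wall_power(load_w, 4)         # Balanced Mode: all 4 PSUs share the load
high_efficiency = wall_power(load_w, 2)  # High Efficiency Mode: 2 active, 2 standby

print(f"Balanced (4 active PSUs):        {balanced:.0f} W from the wall")
print(f"High Efficiency (2 active PSUs): {high_efficiency:.0f} W from the wall")
```

With these made-up numbers, the two-PSU configuration draws noticeably less AC power for the same DC load, which is exactly the effect the BIOS setting exploits. Redundancy is preserved because the standby PSUs take over as soon as an active one fails.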
Nice article Sammy.
How did you find these best practices/recommendations? We’re using BL460 G6’s and I’m trying to find an HP whitepaper or document with the recommended BIOS settings for vSphere. Still unable to find any.
Basically,
I took the HP ROM-Based Setup Utility User Guide and went through every single option, investigating its impact on vSphere.
Some options were rather easy, but some took time to find a good answer for.
Most settings should be available for your system; if any are not clear, post them here so we can discuss them.
gr
Thanks.
I’ll go through the guide. If only HP would provide a whitepaper with recommended settings when using ESX, things would be a lot easier 🙂
Hi,
I have ESXi 4.1 U1 installed and running on one HP ProLiant DL580 G7 (4 x Intel E7520).
With Intel Hyperthreading Options = Enabled and Processor Core Disable = All Cores Enabled, I have 4 x 4 x 2 = 32 logical CPUs. Only 2 or 3 VMs run on the host, with 8 vCPUs per guest VM, but I can’t use all logical CPUs fully.
A. Intel Hyperthreading Options = Enabled AND Processor Core Disable = All Cores Enabled
B. Intel Hyperthreading Options = Disabled AND Processor Core Disable = All Cores Enabled
C. Intel Hyperthreading Options = Disabled AND Processor Core Disable = All Cores Disabled
A or B or C, which one is better for me?
Lu,
What do you mean by ‘I can’t use all logical CPUs fully’?
Regarding choice A, B or C:
– Option C is not an option, since you would disable some cores and basically cripple your CPU. You now have 4 cores, and with the “Processor Core Disable” option you can reduce this to 3, 2 or 1. This option is only used when you have software which is “pay-per-core” and you only need the performance of 1 core. You can then disable the other 3 cores and license your software for only one core. Intel made this option available since single-core (and even dual-core) CPUs are no longer sold.
– Options A & B: this depends on the type of applications you are going to run inside your VMs. Some benefit from HT while others are negatively impacted. In general, most applications benefit from Hyperthreading, but you’ll have to test this.
Keep in mind that CPU 0 & 1 as seen in vSphere are in fact Core 0 of CPU 0. So both logical CPUs will never run at 100% at the same time: the sum of CPU 0 & 1 can be 100% at maximum, since they are the same core split in half by hyperthreading.
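The CPU arithmetic in this thread can be made concrete with a short sketch. The socket and core counts are taken from Lu’s configuration (4 x quad-core E7520), and the sibling numbering follows the CPU 0 & 1 example above:

```python
# Logical CPU count for a DL580 G7 with 4 quad-core E7520s and Hyperthreading on.
sockets = 4
cores_per_socket = 4
threads_per_core = 2  # Hyperthreading doubles the logical CPU count

logical_cpus = sockets * cores_per_socket * threads_per_core
print(f"Logical CPUs: {logical_cpus}")  # 4 * 4 * 2 = 32

# vSphere numbers the two hyperthreads of a core as adjacent logical CPUs:
# logical CPUs 2n and 2n+1 are siblings sharing physical core n, so their
# combined utilization tops out at one core's worth of work.
def siblings(core_index):
    return (2 * core_index, 2 * core_index + 1)

print(f"Core 0 of CPU 0 -> logical CPUs {siblings(0)}")
```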
Has anyone had problems with DL580 G7’s running ESX 4.1 ?
We’re getting crashes on all 6 new hosts with a message; “1:04:10:06.956 cpu0:4096)NMI: 2450: LINT motherboard interrupt (2173 forwarded so far). This is a hardware problem; please contact your hardware vendor.”
We have opened a call with HP but they don’t seem to have heard of it, despite the fact that it’s happening on all hosts every couple of days…
That’s weird.
My current client has more than 20 DL580 G7 hosts running and that issue hasn’t occurred.
What BIOS version are you running + what hardware is in those boxes (CPU, Memory, …)?
What build of vSphere are you running?
It is highly recommended to install ESX 4.1 version provided by HP (https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPVM06)
This package contains VMware ESXi4.1 plus all the HP CIM providers and other drivers especially for this known HP NMI driver issue (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021609)
Cheers,
True for ESXi, but Simon (and our client) still run ESX. Another good hint to move on to ESXi 🙂
Have updated ALL firmware using HP Smart Update Firmware DVD v9.30.
ESX ver is: 4.1.0, 381591
Build is:
Item | Description
Server Model | HP DL580 G7
CPUs | 4 * Intel Quad Core X7562 2.66GHz
Memory | 256GB
Hard Drives | 2 * 146GB
Array Controllers | HP P411i
FC HBA | N/A
NICs | 2 * NC360T (slots 7/8)
NICs | 2 * NC364T (slots 9/10)
NICs | 1 * NC522 (slot 1)
PSUs | 4 * 1200W
We also have DL580 G7’s in another cluster with ESX 4.0 and different NICs which are not experiencing the problem, so it seems to be something around the NICs… although they are all on the compatibility list…
We run build 320092 (so no Update 1 yet).
As you suggested, it could be related to your network cards.
At the client, we installed one NC375T (which is basically the same card as the quad-port onboard using a QLogic NetXen chip). So we have a total of 8 NIC ports, which all use the same driver in ESX (one of the reasons why we opted for that card).
We had some issues with the inbox driver (errors during vMotion) and updated to “http://downloads.vmware.com/d/details/esx4x_qla_nx_nic_dt/ZHcqYmRAdyViZGhwZA==”. There is an even newer version available from VMware on their website. As that driver is also valid for the onboard NICs, you might want to look into it.
The NC522 you installed is also a QLogic NetXen chipset, while the NC360 & 364 are based upon Intel Chipsets. I would compare it to the other cluster and see what the differences are.
I wonder why you have chosen the Intel-based NICs over the QLogic ones? Our main reason was to stick with one brand and, more importantly, one driver on vSphere.
gr
I am currently having the same problems with my DL580 G7s. They have:
CPUs | 4 * Intel 6-core Xeon E7540
Memory | 128GB
Hard Drives | 2 * 146GB
Array Controllers | HP P410i
FC HBA | N/A
NICs | 1 * HP NC375i
NICs | 1 * Intel NC365T
NICs | 2 * HP NC522SFP
My servers seem to randomly reboot every few weekends.
Just got in touch with Simon and an update of the system with the latest Firmware DVD solved his problems…
http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=15351&prodSeriesId=4142916&prodNameId=4142792&swEnvOID=4064&swLang=8&mode=2&taskId=135&swItem=MTX-296feee7a65146b98dae800e00
gr
We finally gave up and ripped out and replaced all of the NC522SFP+ cards with CNA1000E. Not the cheapest path, but the CNA1000E are solid for us.
Useful, thanks. Yes, given that the error mentions CPU0, I think the NICs being the issue would make sense.
As to the choice of NICs, that was our CSA… I’ll ask next time I see him 🙂
Keep me posted on the progress if you want. If an issue is found, it might be interesting for other people using 580 G7 servers.
gr
This may be relevant – http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=110&prodSeriesId=3913537&prodTypeId=329290&objectID=c02496982
I would recommend reading this. While it’s specifically a Citrix document, the Intel erratum on C3 power isn’t Citrix-specific:
http://support.citrix.com/article/CTX127395
Interesting one!
Basically, as long as you don’t change the Power Management settings in vSphere (Configuration – Hardware – Power Management), the C-states are not used. I plan on making a post about the different settings (Balanced, Low-Power, Custom) once I get some time to test them properly.
I agree 100% that, in case of problems related to this, you need to disable the C-states (or set vSphere Power Management to ‘High Performance’, which is the default).
Thanks!
Very nice post, thanks for sharing. +1 on hoping to see a similar exhaustive listing from HP. Please note, I believe there is a minor error in the “Maximum Memory Bus Frequency” entry: memory runs in these servers at 1066 MHz, not 800 MHz.
I’m also seeing other ESX-related posts where VT-d needed to be disabled for the host to boot. YMMV.
Thanks for pointing out the “Max Memory Bus Frequency”; it does indeed run at 1066 MHz. You can buy/install 1333 MHz memory, but it will run at 1066 anyway, so that would be a waste of money 🙂
Who has experience with these settings combined with the DL580 G7 and NC522SFP on ESXi 5.x? We currently have bizarre issues and I would really like to trash all of these hosts.
Take a look at this link:
We had some serious issues with the 1 Gbps NetXen cards (NC375i & NC375t) and in the end HP replaced them with new revisions. That solved our issues.
There is a VMware KB article on it as well, but basically they say this is an HP (NetXen in this case) issue and they point to an HP Customer Advisory.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2012455
Guys, can anybody advise? I’ve got a problem.
Server: DL360 G5 (latest BIOS, latest iLO firmware versions, etc.). Switches on the motherboard: all defaults.
My problem: after a server shutdown I am losing all my BIOS and iLO settings (time, licenses, passwords, network setup). It is not affected by restarts, only by shutting the system down completely.
Any ideas? Thanks in advance!
Sounds to me like some hardware component is broken.
To be sure, you could try to ‘simulate’ this behavior with the hard drives pulled out (so basically with no OS installed). If the settings are still lost after a shutdown, then you can be quite sure it’s a hardware problem… Call HP in that case 🙂
It seems your server battery just gave up. I believe 234556-001 is the appropriate spare part number: http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c00727353
Anyhow, as others suggested, you may want to place a service call
I will be posting a couple of articles on the choices for the new Romley series of servers – for virtualization.
They will mostly concern heavily loaded memory configurations (for virtualization etc.) at 3 DPC and 2 DPC on modern servers, and the limitations.
For now, here is an article on memory choices for the HP DL360p and DL380p virtualization servers.
I’ll get to the IBM System x3630 M4 server shortly.
Installing memory on 2-socket servers – memory mathematics (May 24, 2012)
For HP:
Memory options for the HP DL360p and DL380p servers – 16GB memory modules (May 24, 2012)
Memory options for the HP DL360p and DL380p servers – 32GB memory modules (May 24, 2012)
Thank you for the useful table. I was looking exactly for this kind of information. I will print it out for my apprentice. Great teaching material.