
VMware VM fails to load NVIDIA vGPU P40 with more than 32GB RAM

When booting a VM configured with 32GB of RAM or more, it will fail to load the vGPU.

Note that the VM will boot to around 52% and often get stuck there.

In order to use more than 32GB of RAM, add the following configuration parameter within the VM's advanced configuration options:

pciPassthru.use64bitMMIO="TRUE"

Once you've enabled the option, the VM should boot.
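
As a quick sanity check (a sketch only; the datastore and VM names below are placeholders for your environment), you can confirm the parameter has landed in the VM's .vmx file from the host's shell:

  • grep -i use64bitMMIO /vmfs/volumes/datastore1/MyVM/MyVM.vmx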

If you still have issues, check out my other post here


VMware Horizon 7 version 7.5 is GA

Good to see VMware Horizon 7 version 7.5 is GA; there are several new features and enhancements within this release, in the categories below!

  • Horizon Connection Server
  • Horizon Agent for Linux
  • Horizon Agent
  • Horizon GPO Bundle
  • Horizon Client
  • Horizon JMP Server
  • Horizon 7 Security

Release notes here – https://docs.vmware.com/en/VMware-Horizon-7/7.5/rn/horizon-75-view-release-notes.html

Download here – https://my.vmware.com/group/vmware/info/slug/desktop_end_user_computing/vmware_horizon/7_5


So What’s New in VMware vSphere 6.7?

vSphere 6.7 has been officially released, which is great; we've been running on 6.5 for a while now! I've listed out some highlights that interested me in this release.

Upgrades

First things first, check out the VMware HCL; VMware appears to have dropped support for several popular processors, so check your hardware here

The HTML5 Web Client

We've been living with the Flash web client for a while now; personally I hate it and will do anything to avoid using it. However, we now have light at the end of the tunnel: the HTML5 Web Client nearly has feature parity and I can use it for 95% of my tasks!

Suspend and Resume of vGPU Workloads

I'm a big fan of vGPUs and the possibilities they enable. vGPUs have been around since vSphere 6.0, but before the vSphere 6.7 release, VMs that used vGPUs were effectively glued to the host they were powered up on. With vSphere 6.7 you can suspend and resume a vGPU-enabled VM, which means you can suspend, vMotion and then resume. Hopefully with the next release of vSphere we'll see live vMotion enabled.

Virtual Hardware Version 14

From what I can see, version 14 adds support for Trusted Platform Module (TPM), NVDIMM, I/O memory management and Microsoft Virtualization-based Security (VBS).

vCenter Appliance Backup

You can now set up a backup schedule to back up your vCenter appliance configuration. You can also configure the retention of the backups.

Configuration Maximums

As usual VMware has uplifted the configuration maximums. Rather than display a huge table here, check out the VMware configuration maximums tool here

ESXi Single Reboot

Two reboots during upgrades should be a thing of the past going forward!

ESXi Quick Boot

Firstly, I need to note that this feature is limited to specific vendors/hardware. It means the hypervisor can be restarted without going through the hardware boot process, so patching and upgrades should be completed much more promptly!

ESXi 5.5 > 6.5 upgrade fails with the error "Permission denied"

Just a quick post.

While upgrading ESXi from 5.5 to 6.5 I came across a "Permission denied" error.

This error is caused by ESXi creating partition number 2 with partition ID 'fc' (coredump) when ESXi doesn't detect a hard disk / LUN.

How did I resolve it?

Firstly, list out the partitions using the following command:

  • esxcli system coredump partition list

From the output you'll note partition 2 is being used for coredumps; now we need to move this using the commands below.

First, set the coredump to another partition; partition 7 is used below as an example:

  • esxcli system coredump partition set --partition="mpx.vmhba32:C0:T0:L0:7"
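
If you're unsure which partition number to target, you can list the device's partition table first (the device path below simply follows the example above; adjust it for your hardware):

  • partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0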

Next, enable it:

  • esxcli system coredump partition set --enabled=true

Finally, list out the partitions to ensure the coredump change has taken:

  • esxcli system coredump partition list


VMware vSAN 6.7 has landed!

I'm a big advocate of VMware vSAN, and 6.7 is a massive release; VMware has added some much-needed features! A quick overview below.

HTML5 User Interface

  • An interface which allows full vSAN management with the familiarity of other VMware products.

vSAN ReadyCare

  • Providing real time health, support and remediation recommendations.

Enhanced Stretched Cluster Availability

  • Significant enhancements to logic regarding site failures among other enhancements.

Proactive Support via vSAN Support Insight

  • vSAN will proactively raise alerts before they become issues.

Disk Support

  • vSAN now supports 4Kn disk drives.

Read more details here – https://www.vmware.com/uk/products/vsan/whats-new.html


VMware VM fails to load NVIDIA vGPU with more than 32GB RAM

When booting a VM configured with 32GB of RAM or more, it will often fail to load the vGPU. The VM will load in VMware SVGA mode instead.

Note that the vGPU will be present in Windows Device Manager with a warning sign, and the error will be as follows:

Windows has stopped this device because it has reported problems. (Code 43)

The vGPU reserves a portion of the VM's framebuffer for use in GPU mapping of VM system memory. This reservation is normally only sufficient to support up to 32GB of RAM; in order to use more RAM, add the following configuration parameter within the VM's advanced configuration options:

pciPassthru0.cfg.enable_large_sys_mem
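
If you'd rather add the entry directly to the VM's .vmx file while the VM is powered off, it takes the usual key = value form. The value of 1 below is my assumption for an enable flag, so double-check it against NVIDIA's vGPU documentation for your release:

pciPassthru0.cfg.enable_large_sys_mem = "1"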

Once you've enabled the option, the VM should boot with the vGPU loading correctly.

If you still have issues, check out my other post here


Black screen on vGPU using VMware Horizon?

You may experience a black screen when trying to connect to a VDI that uses a vGPU. The issue is more apparent when using multiple monitors or high-resolution displays.

To correct this issue you need to allocate more video memory to the VDI/VM; follow the guide below to add this.

  • Open the Settings tab on the VM
  • Select video card.
  • Select Specify custom settings
  • Adjust the video memory to a higher value; try 64 MB to start with.
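
If you prefer to make the change outside the vSphere Client, the same video memory setting lives in the VM's .vmx file. The sketch below assumes the standard svga.vramSize key, which takes the size in bytes (64 MB = 67108864):

svga.vramSize = "67108864"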

VMware vSAN 6.5 – 2 Node with Direct Connect

VMware vSAN 6.5 now supports two vSAN data nodes directly connected using one or more crossover cables. This is useful for clients with no 10GbE switching!

In order for this to work, witness traffic needs to run on a VMkernel interface that can also reach the witness appliance's vSAN VMkernel interface for metadata purposes; this would not be possible via the crossover cables for obvious reasons.

Now, how do you set up a witness VMkernel?
Within a normal vSAN setup, VMkernel ports are tagged for "vsan" traffic via the vSphere Web Client. However, in order to use a VMkernel for "witness" traffic we have to dive into the command line for the moment in 6.5.

To add a new interface with witness as the traffic type, the command is:

  • esxcli vsan network ipv4 add -i vmkX -T=witness
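
To confirm the tagging has taken effect, you can list the vSAN network configuration afterwards and check the traffic type reported for the interface (vmkX above is a placeholder for your VMkernel interface):

  • esxcli vsan network list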

Personally I design solutions with two 10GbE crossovers with an active/standby setup and run vMotion on the standby interface.

Horizon 7.1 > 7.3.1 Upgrade Issue

I had a strange issue raised after a client upgraded from Horizon 7.1 to 7.3.1, where thin terminals were reporting the following error: "You cannot access your applications or desktops". After some research it appears the default behaviour was changed in 7.3.1 as follows:

  • If No is selected for the "Allow users to choose protocol" option, the pool can only use the default protocol selected; HTML Access, for example, will not work. Change this to Yes to fix the error.
  • If you have vGPU pools, the "Allow users to choose protocol" option is disabled! So for the moment the only option is to roll back the install until VMware fixes the issue.

Update – VMware has reversed this change in 7.4 with the following release note:

  • When creating a pool or a farm, if you select No for the option "Allow users to choose protocol", then a pool or application can only be launched via the default protocol selected. The PCoIP protocol disallows any connections via HTML Access, and in the case of vGPU-enabled pools, the "Allow users to choose protocol" option was disabled, so an administrator could not change it back to Yes. This change has been reverted.

VMware Network Issue – ESXi 6.5 Dell R730 with Intel X710 VLAN

I recently deployed some Dell R730 servers with the Dell customised ESXi ISO and had a rather odd networking issue once VLANs were being tagged in the VMware platform:

  • Network connectivity was only working at layer 2
  • Unable to ping the default gateway
  • The VM guests also had the issues above
  • The switch was also unable to ping the hosts

After some head scratching I decided to look at the drivers being used for the Intel X710 10GbE networking cards; the driver in use was i40en. After removing this driver using the command below, the host reverted to using the i40e (1.1.0) driver, network connectivity started working and the VLANs were tagged as expected.

  • esxcli software vib remove -n i40en

Now after some reading it would appear the correct driver to use for the Intel X710 card is i40e(2.06).

The correct process to fix this issue is:

  1. Install the updated i40e drivers.
  2. Uninstall the i40en drivers.
  3. Reboot the ESXi host. ESXi should now start using the newer i40e (2.06) driver for the X710 NIC.
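
To confirm which driver each NIC is actually using before and after the change, the commands below can be used (the grep filter is just a convenience to narrow the VIB list):

  • esxcli network nic list
  • esxcli software vib list | grep -i i40e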