Jason Fenech - Author at Altaro DOJO | VMware (https://www.altaro.com/vmware) - VMware guides, how-tos, tips, and expert advice for system admins and IT professionals

How to create vSphere VMs with RDM disks (https://www.altaro.com/vmware/vsphere-vm-rdm/) - Wed, 11 Apr 2018
What are RDM disks and why do we need them? This guide explains why using RDM disks makes sense for creating vSphere VMs in certain situations, along with a simple walkthrough on how to use them in practice.

The post How to create vSphere VMs with RDM disks appeared first on Altaro DOJO | VMware.


Let’s start by defining what an RDM disk is. RDM is short for Raw Device Mapping, a mechanism by which virtual machines are allowed direct access to SAN/NAS storage. In practice, this means that a virtual machine’s RDM disk maps directly to a chunk of storage, a LUN perhaps, on some networked storage. Traditionally, a VM disk or VMDK consists of storage space carved out of the underlying datastore, i.e. the same place where the rest of the VM’s files reside, at least per the default settings.

So what’s special about VMware RDM disks? Well, they tend to come in handy in a few specific scenarios. One common use is to grant machines concurrent access to the same resources. A cluster’s data or quorum disks come to mind where virtual machines have been set up as Microsoft Failover Cluster Services (MSCS) nodes, more so when each node resides on a different ESXi host. This latter scenario is referred to as a Cluster Across Boxes. Further details and guidelines on how to set up a Microsoft Cluster on VMware can be found here and here.


There are other benefits to using RDM disks, among them dynamic name resolution, file permissions, snapshots, vMotion, and a few others that are listed here. That said, RDM disks should be used sparingly and only for specific scenarios. More often than not, you should stick to traditional VMDKs on VMFS, since there is no performance gain to be had with RDM. If anything, things might run a little slower. Also keep in mind that a LUN used for this purpose can only map to one RDM disk, so again, use them sparingly if SAN storage space is at a premium.

Disclaimers aside, let’s have a look at how to create a VM with a VMware RDM disk.

Creating the LUN

We first need to make sure the LUN, the one the RDM disk will map to, is visible to ESXi. But before we do, let’s go ahead and create an iSCSI target. For this post, I’ll be using my home NAS.

The process of creating iSCSI targets, LUNs, etc. holds true irrespective of the storage solution in use. At its most basic, you first need to enable the iSCSI service, create an iSCSI target and map a LUN (chunk of storage space) to it. After that, you can then decide whether you want to secure the LUN by setting initiators (which machines have access to the LUN), CHAP authentication, and so on.


Here’s a 10GB LUN created on my Seagate NAS even though it shows up as being 11GB in size!
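The mismatch is almost certainly the usual gibibyte-versus-gigabyte rounding rather than anything wrong on the NAS side. Assuming the 10GB figure is really 10 GiB, the arithmetic checks out:

```shell
# 10 GiB expressed in bytes, then in decimal gigabytes (1 GB = 10^9 bytes).
bytes=$(( 10 * 1024 * 1024 * 1024 ))   # 10737418240
awk -v b="$bytes" 'BEGIN { printf "%.1f GB\n", b / 1e9 }'   # 10.7 GB, displayed as 11GB once rounded up
```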

Presenting the LUN to ESXi

The next step is to configure the iSCSI Software Adapter on ESXi. I’m going to illustrate this process using a freshly installed stand-alone ESXi host, one that is not managed by vCenter Server. To do this, launch the host client from the https://<ESXi IP address> URL and run the following steps.

Step 1: Log on as root, and navigate to Storage -> Adapters. Click on Configure iSCSI as shown next.

Configuring iSCSI from the host client

Step 2: Assuming it is disabled, enable the iSCSI Software Adapter by selecting Enabled (1). Click Add Static Target (2) and type in the Target name and IP Address (3). This information should be available from your storage solution’s user interface or by running the appropriate commands from the console. Hit Save Configuration (4) to exit setting up the adapter.

The iSCSI Software Adapter settings

Step 3: To verify that the LUN is visible to the host, change over to the Devices tab (1) and look for a disk matching the capacity of the one created on your storage solution (2). Actually, do this before you add the LUN, so you can then easily compare the before and after device lists. If the LUN is not listed, hit the Rescan button (3) and recheck the details entered in the static targets section.

Important: Make sure that the device is not used to create a new datastore or perhaps extend an existing one.

 

Adding an RDM to a VM

As an example, I’ll be adding a VMware RDM disk to a Windows Server VM using the LUN previously created. To create the RDM disk, you can use ESXi’s host client (or the vSphere Web Client if the host is vCenter-managed) or vmkfstools from the command line. In this example, we are using the host client, but you can also add an RDM disk using PowerShell with PowerCLI.

Step 1: To add the disk to the VM, right-click on the VM and select Edit Settings.

Step 2: From the Virtual Hardware tab (1), click on New raw disk and select the Add new RDM disk option (2).

Adding an RDM to a VM

 

Step 3: The non-allocated LUN (or device) should be listed next. Highlight it and press the Select button as shown.

Step 4: On the Edit Settings screen, navigate to the New Hard Disk section (1) and expand it. Set the Disk Compatibility mode to either Physical or Virtual (2). Press Save (3) to complete the RDM disk addition process.

To quote VMware, the difference between these two modes is as follows:

    • In virtual compatibility mode, the RDM behaves as if it were a virtual disk, so it can use snapshots.
    • In physical compatibility mode, the RDM offers direct access to the SCSI device for those applications that require lower-level control.

New Hard Disk section

Step 5: Finally, ensure that the guest OS can access the VMware RDM disk. In the next screenshot, you can see in Disk Management that Windows has correctly detected the 10GB disk. All that’s left now is to bring the disk online and create and format the volume, unless of course you wish to leave the disk raw.

VMware RDM disk

So, what’s inside an RDM file?

For completeness’ sake, I’ll briefly cover what exactly comprises an RDM disk in terms of VM files.

As can be seen in the next screenshot, I’m inspecting a VMware RDM disk file using the more command from ESXi’s shell. Prior to this, I took note of the RDM disk’s filename from the VM’s settings. Looking at the directory listing in PuTTY, there are at least 2 files associated with the RDM disk. Just like conventional VMDK disks, VMware RDM disks have a descriptor, the output of which can be seen below.

Under the Extent Description section, you’ll find a reference to a mapping or pointer file which is listed as <VM NAME>-rdm.vmdk. This latter file is what maps to the LUN we previously mounted, as a device, in ESXi.
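For reference, a virtual-mode RDM descriptor looks roughly like the sketch below. All values here are illustrative (the VM name, CIDs, and geometry are made up; 20971520 is a 10GB LUN expressed as 512-byte sectors), and to my knowledge the createType is vmfsRawDeviceMap for virtual compatibility mode versus vmfsPassthroughRawDeviceMap for physical mode.

```text
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfsRawDeviceMap"

# Extent description
RW 20971520 VMFSRDM "WinSrv-rdm.vmdk"

# The Disk Data Base
ddb.virtualHWVersion = "13"
ddb.adapterType = "lsilogic"
```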

It’s good to know how this information is used, as it allows you to recreate an RDM disk if the virtual disk is corrupted or in an unsupported format. The whole process is detailed in this KB article.


 

To protect your VMware environment, Altaro VM Backup lets you securely back up and replicate your virtual machines.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

Conclusion

RDM disks should be used sparingly, and even then only to cater for specific cases such as clusters or to offload I/O workloads, for example via SAN snapshots. You’ll find that an RDM disk is, in fact, a VMDK file, except that it contains metadata (mappings) describing how to reach a specific LUN. As we’ve seen, the benefits are many, but traditional VMFS datastores are generally preferred when creating VM disks. There are also a number of limitations to RDM you should be aware of. These are listed here.

If you’d like to learn more about VMware storage, do have a look at our 2-part series titled Of Storage, Protocols, and Datastores. If there’s anything in this post you’re unsure about, don’t hesitate to write to me using the comments below and I’ll get back to you as soon as possible.


How to Reset the ESXi Root Password (https://www.altaro.com/vmware/reset-esxi-root-password/) - Wed, 28 Mar 2018
Forgetting passwords can be a big pain. According to VMware, if you lose your ESXi root password you have to re-install, but there is another way. Learn how you can reset the ESXi root password without having to reinstall ESXi from scratch.

The post How to Reset the ESXi Root Password appeared first on Altaro DOJO | VMware.


Forgetting passwords is something that unfortunately happens to everyone, and that’s why password managers exist. No, it’s not OK to write them down on yellow sticky notes stuck to your monitor, unless you want to give your security team a heart attack. Given this post’s title, I guess you know where I’m going with this if you forgot your ESXi root password.

It’s 10 in the evening. You get a call and start troubleshooting right away. You figure that a management services restart will fix the issue. Your host is connected to a remote KVM switch, so you press F2 and type in the password. No dice. Maybe it’s a typo, maybe not. You try again, and again, and end up locking yourself out because of a forgotten root password. You did save the ESXi password, but you changed it along the way and forgot to update it in your password manager. According to VMware, the only supported fix is to re-install ESXi, unless you’re still running classic ESX, which is highly unlikely.

In the pre-ESXi era, the hypervisor (ESX) had a service console that enabled you to boot into single-user mode and change the password from bash. Incidentally, this method can still be used nowadays to change the root password of a vCenter Server appliance. No such option exists for ESXi.

In today’s post, I’ll show you how you can use a Linux live CD/DVD to change the root password on your ESXi host. VMware does not support this method, citing complexity, but I don’t buy this: there is nothing really complex about it. ESXi saves the root password hash in /etc/shadow, as is standard with Linux.


An invalid password typed in at the console

How it all works

First off, SSH to your host and have a look at /etc/shadow. You should see something like this.

ESXi password

This is from a test ESXi host I use, so be my guest and try to reverse the password hash. Good luck with that. The string boxed in red is what we’re after: deleting it resets the password to null. Of course, if you can’t log in as root on your host, there’s no way you can do this, hence the live CD. Booting off a Linux live CD/DVD allows us to access and change the file. The trick is knowing which file to change. Changing the copy that’s accessible when SSH’ed to the host is of no use, since the changes are overwritten once you reboot the host.

As you probably know, ESXi uses several disk partitions. One in particular is called bootbank. This partition contains the hypervisor core files and the host’s configuration, which is what ends up being loaded into memory. By default, this partition is /dev/sda5.

The /etc/shadow file we’re after is found in a compressed archive called state.tgz which is found under /dev/sda5. So, here’s what we need to do.

    • Download a Linux live CD/DVD image. Take your pick from this list. I chose the GParted LiveCD.
    • Write the image to a USB stick or CD/DVD and boot your host off it.
    • Mount /dev/sda5 and copy state.tgz to a temp folder.
    • Uncompress state.tgz and edit the shadow file.
    • Recompress the archive and overwrite state.tgz with it.
    • Unmount and reboot the host.
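The whole round-trip can be rehearsed safely on any Linux box before touching a host. The sketch below builds a dummy state.tgz with the same local.tgz/etc/shadow nesting ESXi uses, blanks the root hash (sed standing in for the interactive vi edit described later), and repacks it. The sample hash and file names are illustrative only.

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Fabricate a dummy state.tgz laid out the way ESXi nests it
mkdir etc
echo 'root:$6$SAMPLEHASH$abcdef:13358:0:99999:7:::' > etc/shadow
tar -czf local.tgz etc
tar -czf state.tgz local.tgz
rm -rf etc local.tgz

# The edit you would perform from the live CD environment
tar -xf state.tgz                              # yields local.tgz
tar -xf local.tgz                              # yields etc/shadow
sed -i 's/^root:[^:]*:/root::/' etc/shadow     # blank the hash field
tar -cf local.tgz etc
tar -cf state.tgz local.tgz                    # repacked archive to copy back

grep '^root::' etc/shadow                      # prints the blanked entry
```

On a real host, the repacked state.tgz is what you move back to /dev/sda5 before rebooting.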

How to reset ESXi root password

The following procedure documents how to reset the root password on an ESXi 6.5 host. It should also work on earlier versions of ESXi, though I only tested it on 6.x. It also makes no difference whatsoever whether the host is physical or nested.

It is of utmost importance to note that you will not be able to ‘deceive’ ESXi’s security and change the root password without powering the host off. This means you need to evacuate the VMs to other hosts in the cluster, or shut them down, and place the host in maintenance mode.

For this post, I’m using a nested host for convenience’s sake alone. I have carried out this same procedure a number of times on physical ESXi hosts.

Step 1 – Insert the bootable Live CD, make sure your server can boot off CD/DVD or USB and power it up. If you’re using the Gparted LiveCD, just follow the on-screen instructions as it is loading.


Booting off the GParted LiveCD

Step 2 – Locate the 2 partitions sized 250MB. As mentioned, /dev/sda5 is what we’re after, assuming you installed ESXi on the first available disk. This may differ if, for instance, you installed ESXi on a USB device.

 

GParted listing the ESXi partitions found on the primary disk. Your mileage may vary according to the size of the boot drive and the medium (SD, USB, drive…).

ID | Name | Description | Size
---|------|-------------|-----
1 | System boot | Used to boot the OS. | 4MB
2 | Scratch | Persistent storage of VMware support bundles. Created if media is larger than 8.5GB. | 4GB (dynamic)
3 | VMFS datastore | Any remaining unallocated space is used to create a local datastore. Created if media is larger than 8.5GB. | Remaining space (dynamic)
5 | Bootbank (bootbank 0) | Stores the current ESXi image. | 250MB
6 | Altbootbank (bootbank 1) | Stores the previous ESXi image after an upgrade. Used for rollback operations. | 250MB
7 | vmkDiagnostic (small core-dump) | Captures the output of a purple diagnostic screen in case of an ESXi crash. | 110MB
8 | Store (locker) | Storage of ISOs for VMware Tools. | 286MB
9 | 2nd diagnostic partition (large core-dump) | Additional space for core dumps to avoid log truncation. Created if media is larger than 3.4GB. | 2.5GB

vSphere 6.x partition layout.

Note that the partition layout changed dramatically in vSphere 7 compared to vSphere 6.x. It is now consolidated into fewer partitions, leveraging dynamic sizing and VMFS-L.


vSphere 6.x vs vSphere 7 partitions layout

Step 3 – Open a terminal window and run the following commands in the exact order listed.

sudo su
mkdir /boot /temp
mount /dev/sda5 /boot
cd /boot
cp state.tgz /temp
cd /temp
tar -xf state.tgz
tar -xf local.tgz
rm *.tgz
cd etc


The first batch of commands that need to be run to get to the shadow password file

We’re going to use vi to edit the shadow password file. Move to the line starting with root and delete the string between the first two colons using the [Delete] key. When done, press [:], type wq and hit [Enter].


Delete the encrypted root password to reset it to null i.e. the root account will not have a set password

Continue by running the following batch of commands.

cd ..
tar -cf local.tgz etc/
tar -cf state.tgz local.tgz
mv state.tgz /boot
umount /boot
reboot

Step 4 – Once the ESXi host is back online, try logging in as root either from the DCUI (console) or via SSH using PuTTY or similar. You should be able to log in without keying in a password, although you will be reminded to set one, which is exactly what you should do.

How do I recover my root password?

Here’s a video demonstrating how to carry out the password recovery procedure from start to finish and reset the root password.


Conclusion

There isn’t much more to add, other than to urge you to get into the habit of saving your passwords in a reliable password manager. While unsupported by VMware, the password reset procedure outlined today works every time, at least on ESXi 6.x, though it should also work with older releases. I have not come across any side effects when using this hack, understandably so, considering we’re simply removing a hash value from a password file. Ever lost your password and been frozen out of ESXi? What did you do? Let me know in the comments below. And if you need any help resetting an ESXi root password, I’m happy to help out.


3 ESXi Backup Methods using the Command-Line (https://www.altaro.com/vmware/esxi-backup-command-line/) - Wed, 14 Mar 2018
Being in a position to back up and restore ESXi to a known working configuration is always good. Learn how to do this using 3 simple command-line techniques.

The post 3 ESXi Backup Methods using the Command-Line appeared first on Altaro DOJO | VMware.


Backing up your ESXi configuration regularly should be part of your data and configuration protection strategy. Doing so allows you to quickly restore a host to a known working state using PowerCLI, vSphere CLI, or ESXi shell commands, instead of going through the process of reconfiguring it manually.

In a couple of previous posts, namely The perils of enabling DirectPath I/O on ESXi and How to roll back to a previous ESXi version, I wrote about how the ESXi configuration is automatically backed up to a file called state.tgz at 1-hour intervals, using a script triggered via a cron job.

What this post adds is the ability to export state.tgz using one-liner commands for an even faster configuration backup. Better still, you can put together a script that backs up the configuration of one or more ESXi hosts, and automate the process using either the Windows Task Scheduler or perhaps a cron job running on a Linux machine. The only issue with automating backups is handling passwords, more so if the root password is different for every host or frequently changed as part of a company security policy.

Restoring an ESXi backup configuration is very easy as we are about to see.

Even though I’m targeting ESXi 6.5, the backup procedure equally applies to earlier and later releases of ESXi, going back to version 4.0! However, do refer to your ESXi release’s documentation to verify the command-line functionality before relying on it in production.

ESXi backup configuration vs VM backups

Before going any further, it is important to distinguish between backing up VMware ESXi virtual machines, which you can do with Altaro VM Backup, and backing up the vSphere host configuration.

Backing up virtual machines ensures that the disks and configuration files of your VMs are safely stored on a repository from which you can restore the VM itself or individual files, while backing up the host configuration saves you time pushing the configuration back to a vSphere host after reinstalling it.

Note that there are other options to provision and back up ESXi hosts even faster, using features such as Auto Deploy paired with vSphere Host Profiles. Third-party vendors even offer software solutions to install and configure new hosts via a vCenter plugin, such as Dell EMC OpenManage Integration for VMware vCenter.

Using PowerCLI

If you don’t have PowerCLI installed, refer to our bible on the topic and download it. Once installed, make sure to run PowerCLI as an administrator.

Backup

To trigger the configuration backup, use the Get-VMHostFirmware cmdlet. In the example shown next, I backed up the ESXi configuration for host 192.168.28.10 to the c:\esxi_backups folder on my laptop. Note that you must first establish a connection to the host using the Connect-VIServer cmdlet.

Connect-VIServer 192.168.28.10 -user root -password <password>

Get-VMHostFirmware -VMHost 192.168.28.10 -BackupConfiguration -DestinationPath c:\esxi_backups


Using PowerCLI to backup the configuration of an ESXi host

Restore

Restoring the ESXi configuration from the backup file involves one extra step: the host must first be put in maintenance mode using the Set-VMHost cmdlet, like so.

Set-VMHost -VMHost 192.168.28.10 -State Maintenance


Use the Set-VMHost PowerCLI cmdlet to put a host in maintenance mode

When ESXi is in maintenance mode, use the Set-VMHostFirmware PowerCLI command to restore the host’s configuration from a previous ESXi backup configuration.

Note 1: The host will reboot automatically after running the command, so make sure to power off or migrate any VMs hosted on it!

Note 2: The build number and UUID of the host you’re restoring to must match those backed up. The UUID check can be skipped by adding the -Force parameter to the Set-VMHostFirmware cmdlet; the cmdlet fails to run if there’s a mismatch in the build number. Always make sure you restore to the correct host to safeguard the integrity of your environment.

Set-VMHost -VMHost 192.168.28.10 -State Maintenance

Set-VMHostFirmware -vmhost 192.168.28.10 -Restore -SourcePath C:\esxi_backups\configBundle-192.168.28.10.tgz -HostUser root -HostPassword <password>


Restoring the ESXi backup configuration of a host using PowerCLI

Using the ESXi Command Line

ESXi’s vim-cmd command line utility allows you to back up and restore the configuration directly from the shell. To do this, enable SSH on the host and use PuTTY to log in as root. Once you’re in, run the following two commands in the given order. You are given a URL which you’ll use to download the TGZ bundle from the host using a standard browser. Note that you need to replace the * character in the URL with the IP address of the ESXi host. I am not quite sure why the IP address of the host is not included from the start.

vim-cmd hostsvc/firmware/sync_config

vim-cmd hostsvc/firmware/backup_config


Using vim-cmd from the ESXi command line to start the ESXi backup configuration
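Since the * in the returned URL is just a placeholder for the host’s address, the substitution is easy to script, which helps when automating downloads across several hosts. A sketch, where the URL below merely illustrates the format rather than actual backup_config output:

```shell
HOST=192.168.28.10   # IP of the ESXi host you ran vim-cmd against
URL='http://*/downloads/5206a4f4/configBundle-esx01.tgz'   # illustrative placeholder URL

# Swap the placeholder for the host address; the result can then be fed to wget or curl
echo "$URL" | sed "s/\*/$HOST/"
# http://192.168.28.10/downloads/5206a4f4/configBundle-esx01.tgz
```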

Just like with the PowerCLI method, you must first put the ESXi host in maintenance mode before you can restore from a configuration backup file. To do this, we’re still using vim-cmd. You also need to copy the backup file to a folder on the ESXi host using something like WinSCP. The host will then reboot to complete the restore process.

vim-cmd hostsvc/maintenance_mode_enter

vim-cmd hostsvc/firmware/restore_config /tmp/configBundle-esx-pn1.vsphere65.local.tgz

Using the vSphere CLI

Note that you shouldn’t really use the vSphere CLI anymore, as VMware is phasing it out in favor of ESXCLI and the Perl SDK, but this content is still very relevant to a number of folks out there!

The vSphere CLI download is still available on the VMware website here. On Windows machines, the CLI commands, consisting of Perl scripts, are run from the default location C:\Program Files (x86)\VMware\VMware vSphere CLI\bin. Since we’re dealing with Perl scripts, one must also install a Perl interpreter. The ones suggested by VMware can be downloaded from here or here.


vSphere CLI consists of a series of Perl scripts. A Perl interpreter needs to be installed prior to installing vSphere CLI

For this post, I chose to download the Strawberry Perl release. Note that after installing it, you must add c:\strawberry\c\bin to %path% on Windows to avoid running into missing DLL issues like the one shown next. To do this, open an administrative prompt and type path=%path%;c:\strawberry\c\bin. The change will only persist for the current command prompt session. If you want the path change to stick, go to System Properties in Windows and edit the variable by clicking on the Environment Variables button.


Perl installations require you to modify the path environment variable to include the DLL folder which would otherwise result in the type of error shown


Modifying the path environment variable to include the Perl interpreter’s DLL folder

The Perl script we need to run is called vicfg-cfgbackup. It is used to carry out both backup and restore operations. To back up the ESXi configuration, follow the procedure described next.

  1. Open an administrative command prompt.
  2. Run cd "C:\Program Files (x86)\VMware\VMware vSphere CLI\bin".
  3. Run vicfg-cfgbackup.pl --server=<host IP address> --username=root -s <backup filename>.
  4. Type in the ESXi’s root password.

Using vicfg-cfgbackup to back up an ESXi’s configuration

 


To restore from a specific backup file, follow the same procedure, this time replacing the -s parameter with -l. The next screenshot illustrates the process. You’ll be warned that the host will reboot; typing yes completes the restore process.

vicfg-cfgbackup.pl --server=192.168.16.69 --username=root -l c:\esxi_backups\16_69_esxiconfig.tgz


The vicfg-cfgbackup command is used for both backups and restores

To properly protect your VMware environment, use Altaro VM Backup to securely back up and replicate your virtual machines.

To keep up to date with the latest VMware best practices, become a member of the VMware DOJO now (it’s free).

Conclusion

Keeping a solid set of ESXi configuration backups pays off in the long run, even more so if you take them on a frequent basis. Scheduled backups allow you to restore to the latest known working configuration. You can easily learn about what’s being backed up and restored by uncompressing a backup file and examining the contents.

Keep uncompressing until you end up with the /etc folder and examine its contents. This will give you an idea of when a restore might come in handy. For instance, the passwd file hints at the possibility of recovering from a forgotten root password if you have a very recent backup.
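That inspection can be sketched as below, listing each layer of the nested archives without fully unpacking everything. A synthetic bundle is fabricated here for illustration; on a real bundle downloaded from a host, the layering may include an extra outer level and a manifest, so adjust the file names accordingly.

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Fabricate a bundle with the state.tgz -> local.tgz -> etc nesting (contents illustrative)
mkdir etc
printf 'root:*:13358:0:99999:7:::\n' > etc/passwd
tar -czf local.tgz etc
tar -czf state.tgz local.tgz
rm -rf etc local.tgz

tar -tzf state.tgz            # lists local.tgz
tar -xzf state.tgz
tar -tzf local.tgz            # lists etc/ and etc/passwd
```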


Note that it is good to back up regularly, but it is also good practice to test the backup and restore process in its entirety by restoring a host every once in a while, to ensure that it actually works.

That’s it for today. If you found this post useful or have any feedback on what’s been discussed, drop me a comment below.


How to Roll Back to a Previous ESXi Version (https://www.altaro.com/vmware/roll-back-previous-esxi/) - Tue, 13 Mar 2018
Being able to roll back to a previous ESXi version is a blessing when facing a failed upgrade or intermittent issues thereafter. Find out how in this walkthrough featuring screengrabs and video explanations.

The post How to Roll Back to a Previous ESXi Version appeared first on Altaro DOJO | VMware.


Occasionally, you may need to revert to a previous version of ESXi. The reasons for this can vary from the infamous Purple Screen of Death to unexpected behavior resulting from unsupported hardware or misbehaving drivers. Here’s an example of the latter scenario. Your first impulse would probably be to apply a fix and hope for the best. Alternatively, you can revert to a previous ESXi installation, one you know was working fine before upgrading.

In today’s post, I’ll briefly explain what goes on at the disk partition level when ESXi is installed and upgraded at a later stage. This will give you an understanding as to why and how the version rollback procedure described works.

 

Installing ESXi


So, here’s what I have in mind for this post: I’m going to install a nested instance of ESXi 6.5. If you don’t know how to do this, have a look at the following posts.


Checking the ESXi version and build from the console (DCUI)

 

Once installed, we’ll take a peek at the disk partitions ESXi creates during the installation and then proceed to upgrade it to a later version. Finally, we’ll roll back to the original version.

 

A special kind of partition


I may have touched on ESXi partitions in past articles but I don’t recall going into any detail. For the purposes of this post, the one piece of information to cling to is the dual boot bank architecture ESXi implements for redundancy and recovery purposes.

These boot banks are nothing more than two disk partitions, each 250MB in size, called bootbank and altbootbank. In older versions of ESXi, the partition size was set to 48MB. When ESXi is first installed, the bootbank partition is populated with the hypervisor core files and marked active. At this point, the altbootbank partition is empty.

Below is how the bootbank partition looks when ESXi is first installed. The state.tgz file I pointed out in the screenshot is, in fact, a backup of the host’s state and configuration. The backup is performed every hour by the /bin/auto-backup.sh script, run as a cron job.


The hypervisor files and configuration as listed under a boot bank partition

 

Here’s how the altbootbank partition looks after a host is first installed: it’s empty save for a couple of files. However, when the host is upgraded, the partition is populated with the files from the upgraded version and marked active. When the host is rebooted, the bootloader will point to this partition instead of the previous one.


The altbootbank partition is empty right after ESXi is installed save for a couple of files

 

If you run something like grep build boot.cfg, you can determine which version of ESXi is present on the selected partition assuming it’s already been populated.

[root@localhost:/vmfs/volumes/10346f76-a8e6e4d2-dbc9-e1d8013ddca5] grep build boot.cfg
build=6.5.0-5969303

As mentioned, both partitions are sized 250MB. You can easily spot them by running df -h /vmfs/volumes from shell.

Two other commands you may find useful are esxcli storage core device partition showguid and partedUtil showGuids. The first command lists all the partitions present on the host’s primary disk or bootable medium. The second command maps partition types to a GUID. As can be seen from the next screenshot, bootbank and altbootbank correspond to partitions 5 and 6 respectively and are both of type Basic Data.

Shell commands will help you determine which partition is which

 

Likewise, you can use the ESXi host client or vSphere Web client to view the partitions on ESXi. The two boot bank partitions are the ones boxed in red in the next screenshot.

Using the host client to list partitions on a host

 

Upgrading ESXi


I will now upgrade the 6.5 instance just installed to ESXi 6.5 U1. I’ll be using the ISO method as shown in the next video. Since I’m upgrading a nested ESXi, all I need to do is attach the relevant ISO and make sure the VM boots off it. The same applies to upgrading a physical host except that the ISO image is now booted off a USB device or CD/DVD disc.

The alternative would be to use the esxcli software command to upgrade using an offline bundle as explained in the Patching and Upgrading ESXi using ESXCLI commands post.

At this point, we can verify that the altbootbank partition has been populated with the files from the upgrade. As you can see from the next screenshot, this is indeed the case.

 

The altbootbank partition is populated after an upgrade. The boot.cfg file helps you determine the ESXi version at this location

 

You can also correlate the build number, listed at the bottom of the previous screenshot, with that on the ESXi versioning page.

Correlating the ESXi version and build number with VMware's online reference

 

Rolling back to the previous version


Suppose that after upgrading, the host starts crashing intermittently. You need the host stable ASAP as you cannot afford any additional unplanned downtime. This is where having two boot banks comes in super handy.

To roll back to the previous ESXi version, reboot the host from DCUI or type reboot at the shell prompt. Below, I’ve listed the steps carried out using DCUI as per the video below.

  • Press [F12] while consoled (DCUI)
  • Type in the root password.
  • Press [F11] to reboot the host.
  • Quickly press [Shift-R] when the host starts booting.
  • At the VMware Hypervisor Recovery screen, press [Y] to roll back to the previous ESXi version.

The host should now be running the ESXi version installed prior to the upgrade. Note that once this procedure is carried out, you won't be able to go back to the upgraded version other than by installing it again.

 

Conclusion


Ideally, you won't need to roll back ESXi anytime soon, but it helps to know the option exists should you need it. One related topic I should definitely cover is backing up and restoring ESXi, so stay tuned for a future post on the subject. Speaking of backups and restores, did you know that vCSA 6.5 has a built-in backup and restore capability? How to natively backup vCSA to IIS using HTTPS explores how to back up the configuration of your vCenter Server appliance to a web server. You will find this and similar articles on our ever-growing VMware blog.


The post How to Roll Back to a Previous ESXi Version appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/roll-back-previous-esxi/feed/ 2
How to Add a Linux iSCSI Target to ESXi https://www.altaro.com/vmware/adding-linux-iscsi-target-esxi/ https://www.altaro.com/vmware/adding-linux-iscsi-target-esxi/#respond Fri, 09 Mar 2018 09:30:12 +0000 https://www.altaro.com/vmware/?p=18149 Using Linux as an iSCSI target server is a great way to add shared datastores in your vSphere environment. Jason's post explains how to do this the easy way.

The post How to Add a Linux iSCSI Target to ESXi appeared first on Altaro DOJO | VMware.

]]>

Today, I'm going to show you how to set up an iSCSI target server on a Linux box. The whole idea here is to mount an iSCSI LUN as a shared VMFS datastore on ESXi. This solution is great for testing or home lab environments. It's a good compromise if you cannot afford, or don't have access to, a proper SAN/NAS solution. You can, of course, use something like FreeNAS, OpenFiler, or a Windows server as an alternative. This is something I explain in Of Storage, Protocols, and Datastores.

In our case, good old standard Linux will suffice. Without further ado, let’s begin.

 

What you’ll need


I've set up a VM, running CentOS 7, hosted on ESXi 6.5 with the following specifications: 1 CPU, 4GB RAM, and two drives – 40GB and 50GB in size. You can use a physical Linux box if you wish since the outlined procedure works either way.

The secondary drive will be set up as an iSCSI LUN. This will be presented, via the iSCSI target service, to those ESXi hosts where it will be mounted as a VMFS datastore. As is often the case, some commands may differ depending on the Linux distro used. I always stick to CentOS just because it’s the distro I’m most familiar with.

 

Preparing the disk


The following is a step-by-step procedure on how to create and/or prepare a secondary disk on a Linux machine. This will serve as your iSCSI LUN. Have a look at How to Manage Disk Space on VMware Linux Virtual Machines if you want to learn more about the subject matter. Assuming your Linux machine is up and running, go ahead and carry out these steps. You can skip ahead to the next section if your machine is already adequately configured, i.e. it has spare disk space that can be readily allocated.

Step 1 – Login as root

Step 2 – Run the following commands to update your OS to the latest packages and install the targetcli package, which will be needed later on

yum -y update
yum -y install targetcli

Step 3 – Add a second disk to the VM using a vSphere client. This step is optional if the Linux machine used has already been configured. If not, proceed as follows. We first need to determine the SCSI bus the disk is connected to and then trigger a scan using the echo … trick. In my case, the disk is attached to host2.

grep mpt /sys/class/scsi_host/host?/proc_name
echo "- - -" > /sys/class/scsi_host/host2/scan

Scanning for a new SCSI disk in Linux
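The echo trick above can be extended to rescan every SCSI host when you are not sure which bus the new disk landed on. A sketch, with the sysfs path parameterised so it can be tried safely elsewhere:

```shell
#!/bin/sh
# rescan_scsi_hosts: write the "- - -" rescan trigger to every SCSI host,
# not just host2, which is handy when you're unsure which bus the new
# disk is attached to. The sysfs base path can be overridden for testing.
rescan_scsi_hosts() {
    base="${1:-/sys/class/scsi_host}"
    for f in "$base"/host*/scan; do
        [ -e "$f" ] && echo "- - -" > "$f"
    done
}

# Example (as root): rescan_scsi_hosts
```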

 

Step 4 – Next, we need to create a partition on the newly added disk, this in my case being /dev/sdb. Again, this step is optional. I’ll be using the fdisk tool to create a primary partition on the disk as per the following procedure.

fdisk /dev/sdb
  • Press [n] and [Enter]
  • Press [p] and [Enter]
  • Press [1] and [Enter]
  • Press [Enter] to accept the default first sector.
  • Press [Enter] to accept the default last sector.
  • Press [w] and [Enter] to commit the changes to disk. You should now have a new partition called sdb1.
Creating a primary partition on the newly added disk

 

Step 5 – We have two choices here: either use the entire disk or use LVM. LVM is, roughly speaking, the equivalent of dynamic disks in Windows, great if you want to create volumes that span multiple devices. I'll be using LVM just to demonstrate the very basics.

Run the following commands to create an LVM logical volume. The lv-iscsi name assigned is completely arbitrary.

pvcreate /dev/sdb1
vgcreate vg-iscsi /dev/sdb1
lvcreate -l 100%FREE -n lv-iscsi vg-iscsi
Using LVM commands to create a physical volume, volume group, and a logical volume

 

Setting up the iSCSI target


targetcli is a command-line object-oriented utility. It admittedly takes some time to get used to and to be completely honest, I have only used it a couple of times to date. I find it easier, and less error-prone, to set up iSCSI targets using some form of UI. For further details, have a look at this site.

Anyway, here are the commands you need to type in.

targetcli
cd /backstores/block
create scsids1 /dev/vg-iscsi/lv-iscsi
cd /iscsi
create iqn.2017-11.local.centos7:disk1
cd iqn.2017-11.local.centos7:disk1/tpg1/acls
create iqn.2017-11.local.centos7:node1
cd ..
set attribute authentication=0 demo_mode_write_protect=0
set attribute generate_node_acls=1
cd luns
create /backstores/block/scsids1
cd /
saveconfig
exit
The targetcli utility is a command-line driven tool used to create and configure iSCSI target servers

 

The command labeled 9 in the screenshot above should read set attribute authentication=0 demo_mode_write_protect=0. When I omitted the demo_mode_write_protect=0 bit, ESXi failed to mount the datastore, throwing a Cannot change the host configuration error as shown next. I ran across the same issue when trying to mount the LUN on Windows via the iSCSI initiator client.

The LUN will fail to mount if it is write-protected
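Once the configuration is saved, it is worth confirming that the target service will survive a reboot and that the default iSCSI port is reachable. The commands below are a sketch for CentOS 7; the service name and firewall rules are assumptions on my part, and setting RUN=echo previews the commands instead of executing them:

```shell
#!/bin/sh
# iscsi_target_checks: persist the target configuration across reboots
# and open the default iSCSI port. Service and firewall names are
# assumptions for CentOS 7; RUN=echo turns this into a harmless dry run.
iscsi_target_checks() {
    RUN="${RUN:-}"
    $RUN systemctl enable --now target
    $RUN firewall-cmd --permanent --add-port=3260/tcp
    $RUN firewall-cmd --reload
    $RUN ss -tln    # a listener on port 3260 should appear here
}

# Example dry run: RUN=echo iscsi_target_checks
```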

 

Mounting the iSCSI LUN as a datastore on ESXi


The following procedure can be executed on both vCenter managed and standalone hosts. You can use any of the vSphere clients but, personally, I prefer using the vSphere Web client.

Step 1 – Configure the iSCSI Software Adapter

If not already included, you must add the iSCSI Software Adapter first by clicking on the green plus sign (4) on the Storage Adapters (3) page under the Configure tab (3) after selecting the host on which the LUN will be mounted(1).

Step 2 – Add and configure the iSCSI Target

Once the adapter is added, highlight it (5) and click on the Targets tab (6). Click on the Add button (6) on the Dynamic Discovery page.

Adding and configuring the iSCSI Software Adapter on ESXi

 

In the iSCSI Server field, type in the IP address of the CentOS machine. Leave the port set to 3260 unless you changed this on the iSCSI target server. Press OK to continue.

Adding the iSCSI target server to the iSCSI Software Adapter list of targets

 

Remain on the same screen and click on Rescan all storage adapters icon to have ESXi detect the iSCSI LUN. On the next screen, leave both options enabled and press OK.

 

Moving on to the Storage Devices page, you should see the LUN listed as a new disk.

 

Step 3 – Mount the datastore.

Click on the Datastores page and then click on the Create a new datastore icon. Follow the instructions presented by the New Datastore wizard.

 

First, we need to specify the type of datastore we want to create. Since we're after a shared iSCSI datastore, VMFS is the option we need. Select it and press Next to continue.

 

Type a name for the datastore (1) and click on the disk, corresponding to the LUN, from the bottom pane. In the next screenshot, you can see I only have one disk listed which is automatically selected. Press Next to continue.

 

Choose the VMFS version you want the LUN initialized to. Note that VMFS 6 can only be accessed by ESXi 6.5 hosts. Press Next to continue.

 

Review the partition information and set the Datastore Size if you don’t wish to use the whole disk. The rest of the settings cannot be changed, at least not in this case. Press Finish to create the datastore.

 

You should now see the datastore listed under Datastores along with any other previously created.

 

Once the datastore is created and initialized, you can easily mount it on other hosts by configuring the iSCSI Software Adapter and performing a scan. There is no need to go through the whole datastore creation procedure as ESXi detects that the LUN has already been initialized and mounted elsewhere as shown in the video below.
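When presenting the datastore to several hosts, the point-and-click sequence can be scripted per host with esxcli. A hedged sketch; the adapter name and target address below are placeholders rather than values from this walkthrough, and RUN=echo gives a dry run:

```shell
#!/bin/sh
# add_iscsi_target: enable the software iSCSI adapter, add a dynamic
# discovery target, and rescan. The adapter name and target IP are
# placeholders; substitute your own. RUN=echo previews the commands.
add_iscsi_target() {
    RUN="${RUN:-}"
    adapter="${1:-vmhba64}"
    target="${2:-192.168.0.50}"
    $RUN esxcli iscsi software set --enabled=true
    $RUN esxcli iscsi adapter discovery sendtarget add --adapter="$adapter" --address="$target:3260"
    $RUN esxcli storage core adapter rescan --adapter="$adapter"
}

# Example dry run: RUN=echo add_iscsi_target vmhba64 10.0.0.5
```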

 

Conclusion


Once you get the hang of it, you can quickly create iSCSI LUNs on Linux and present them to ESXi as datastores. I did mention that this solution is a good alternative to the likes of FreeNAS or OpenFiler, and perhaps less so to enterprise-level offerings from NetApp, HP, and other major players. Having said that, I would love to hear your thoughts on using similar setups in production environments, especially in terms of performance and reliability, bearing in mind that most SDS solutions have Linux running at the core.

If you liked this post, why not have a look at our ever-growing list of VMware related posts?

Finally, as always, if you've got any feedback on the information presented here, or if you're having trouble adding a Linux iSCSI target to ESXi, let me know in the comments below and I'll be happy to lend a hand as soon as possible.

The post How to Add a Linux iSCSI Target to ESXi appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/adding-linux-iscsi-target-esxi/feed/ 0
How To Monitor Your vSphere VMs On The Go https://www.altaro.com/vmware/mobile-vsphere/ https://www.altaro.com/vmware/mobile-vsphere/#comments Wed, 14 Feb 2018 18:47:09 +0000 https://www.altaro.com/vmware/?p=18216 vSphere Mobile Watchlist is a mobile application that allows the user to manage and monitor your hosts and virtual machines while on the go. Sounds great, but does it live up to the promise?

The post How To Monitor Your vSphere VMs On The Go appeared first on Altaro DOJO | VMware.

]]>

Managing your infrastructure remotely, and on the go, is now so ubiquitous that being unable to do so puts you, or your organization, at a competitive disadvantage. From meeting SLA response times to providing after-hours support, being able to work from anywhere is now a pressing necessity. There were times when I used to carry along a laptop equipped with an Internet dongle. When the dreaded call came, I would VPN to the office and work on whatever pressing issue was at hand.

However, technology has advanced and most common mobile devices, such as smartphones and tablets, are now well-suited to the task. Of course, there are still limitations, most notably screen real estate, which make some tasks far easier on a laptop or a device with a larger screen. Think about having to use a vSphere client on a 5″ screen. Nah, I don't think so.

Still, someone must have thought about designing a well-thought-out mobile app, something you could use to manage vSphere resources without driving you nuts. With that in mind, I did some research, ahem, typed in VMware on my Android phone, (sorry, not an iPhone or Windows fan) and this is what I got.

VMware apps on play store

 

It turns out there’s quite a number of VMware mobile apps available for download. So, for today’s post, I’ve chosen this one app called vSphere Mobile Watchlist. My goal is to assess its functionality and determine whether it’s worth adding to your bag of tools.

To test it out, I used an Android emulator running on Windows as well as my phone. The emulator allows me to take screenshots quicker but I’m still using a smartphone to make sure I don’t miss any gesture-driven functionality.

 

Other Available VMware Mobile Apps


There are third-party tools worth looking into, such as those from Infradog. Depending on the platform you're running on, here are the links to the various app stores, listed in no order of preference.

I’ve also listed some of the VMware mobile apps and supported platforms as follows. For a complete description, refer to the app store download location.

list of vmware mobile apps

VMware Mobile Apps

 

The vSphere Mobile Watchlist App


Let me first quote the app’s description.

vSphere Mobile Watchlist enables secure vSphere infrastructure monitoring and remediation directly from your smartphone. With Watchlist, VMware administrators will be able to log in to a vCenter Server or ESXi host directly and choose virtual machines and hosts from inventory to create targeted views of objects and their properties. Remediate directly from the device with power and management operations, and delegation of tasks to onsite colleagues with linked relevant Knowledge Base (KB) articles.

The one thing that immediately caught my eye is the “Remediate directly from the device with power and management operations“. Fantastic, if it really works as advertised.

 

Installing the app

As mentioned already, besides my Android phone, I’m also using an Android emulator. I picked one at random from the list suggested by Google. It’s called MEmu and that’s as far as my review of that goes. Though I must admit that from first impressions, it seems to do a great job at emulating the popular OS on Windows. Here it is running on Windows 10.

An Android emulator running on Windows 10

 

As with all Android apps, installing Watchlist is a simple matter of heading to Play Store, typing in vSphere Mobile Watchlist and clicking on Install.

Installing vSphere Mobile Watchlist from Play Store

 

After you install the software, press either Open or the icon that’s created on the Home screen. Accept the EULA as is customary with all software installations.

 

Type in the IP address or FQDN of an ESXi host or vCenter Server you want to connect to, along with the credentials.

Select the ESXi host or vCenter Server you want to connect to

 

When logging in for the first time, it is highly likely that you are presented with an SSL warning. If that’s the case, press Settings, and enable the Trust on First Use option and press Done. This takes you back to the start where you can press Login to continue.

SSL certificate warning first time you log in. Enabling the Trust on First Use option fixes the issue.

 

A second SSL warning may also be given if the IP address, or FQDN, used does not match the one specified in the host’s or vCenter Server’s certificate. You can safely press OK to connect.

Certificate name mismatch warning

 

And we’re in, finally.

The vSphere Mobile WatchList application's main screen

 

You'll find a number of menu items on the main screen. WatchLists is the item you'll be using the most; its use is explained in the following section. Alerts and Tasks bring together all the event data pertaining to the ESXi host or vCenter Server the app is currently connected to. Under Settings, you'll only find a few Security and Data related items. As for Bookmarks, I couldn't find how to use it but I suspect it's akin to a favorites tab.

The tasks history of a vCenter Server the app is connected to

 

Watchlists. What about them?

On the main screen, you will see two options called Virtual Machines and Hosts. Pressing on either one, allows you to add VMs, or ESXi hosts, to the default watchlist which is initially empty. As an example, I connected to vCenter Server and proceeded to populate the default watchlist with a few VMs I wish to monitor and manage remotely.

To do this, simply press on Virtual Machines and select the VMs you want included on the watchlist. Pressing Select All adds every listed VM to the watchlist. Pressing Back takes you to the main screen.

Adding VMs to the default watchlist

 

The process of creating new watchlists is not exactly intuitive. From a UI perspective, the process could be better laid out. Regardless, the procedure to add a new watchlist is as follows. Press the Watchlist drop-down box (1) followed by Edit (2). Then, press the + icon (3), type in a name for the new watchlist (4) –  I’ve named mine ESXi Hosts – and press Save (5). When done, go back and select Virtual Machines or Hosts. Select the items you want to be included on the new watchlist.

Creating a new watchlist

 

Here’s a VM populated watchlist as displayed on my phone.

A watchlist containing VM objects

 

The next video documents the whole shebang, from installing the app to creating and populating a watchlist.

 

Monitoring and Managing hosts and VMs


Once your watchlists are up and running, there are a few things you could do. Straight off the bat, you can visualize resource utilization for both VMs and ESXi hosts. The three bars under every watchlist item, from top to bottom, correspond to the CPU, memory, and storage currently in use by the resource. Pressing on the selected item reveals more details as per the next screenshot.

Detailed information about a VM included in a watchlist

 

Resource utilization values and other information such as guest OS type and assigned IP addresses are displayed first. Scrolling down brings up a list of objects related to the VM or host. These might include the parent host, cluster and network information, the datastore name and a list of other VMs hosted on the same ESXi host.

Swiping to the left reveals the remote console screen, with a screenshot of the VM's console displayed in the middle. Better still, you can connect remotely to the VM via the Launch Console icon.

Remote console screen

 

Another left swipe takes you to the management tasks screen. Here, you can power up a VM or manage snapshots, not to mention other functionality. The same functionality, minus the snapshots, is available when you right-swipe a watchlist item, as shown in the second screenshot below. In both cases, you are prompted to confirm the selected action.

VM management actions

 

If the selected VM is powered down, you have the option to change the allocated CPU and Memory values.

Change the CPU and Memory values for a powered down virtual machine

 

Left-swiping to the last screen gives you access to the Tasks and Events associated with the selected resource, a great help when troubleshooting. Filters can be set on both Tasks and Events by pressing the respective icon at the top of the screen.

Task and Events for a given resource. Filters for both can be applied via icons at the top of the screen.

 

Give or take, you'll find the same screen layout and functionality when selecting ESXi hosts. The minor differences are the lack of a remote console screen and the addition of Sensor Data. This is shown in the last screenshot below. You can also put a host in maintenance mode, disconnect it from vCenter Server, reboot it, and shut it down if required.

Monitoring and functionality screens for ESXi resources

 

Resource utilization graphs for CPU and Memory, Disk, and Network are accessible for both VMs and ESXi hosts via the Resources icon.

Resource utilization graphs are available for both VM and ESXi resources

 

Conclusion


In my opinion, this is a great app to have, especially if your job description includes after-hours support. VPN is probably the most secure option when connecting to your vSphere environment over the Internet. An alternative is to use port forwarding to redirect traffic to your vCenter Server and/or ESXi hosts without the need to set up a VPN or make your servers Internet-facing.

Note that I haven't tested how the app performs when connected over the Internet, so I'm leaving this as an exercise for you, the reader, to test it out. Feel free to leave comments on your findings if you do. In the meantime, do have a look around on this blog for more interesting VMware related posts.


The post How To Monitor Your vSphere VMs On The Go appeared first on Altaro DOJO | VMware.

]]>
https://www.altaro.com/vmware/mobile-vsphere/feed/ 5
Why Enabling DirectPath I/O on ESXi can be a Bad Idea https://www.altaro.com/vmware/directpath-io-esxi/ https://www.altaro.com/vmware/directpath-io-esxi/#comments Wed, 07 Feb 2018 16:03:13 +0000 https://www.altaro.com/vmware/?p=18024 DirectPath I/O gives VMs direct access to the underlying hardware. This post explores how to recover from failure when this feature is used unwisely.

The post Why Enabling DirectPath I/O on ESXi can be a Bad Idea appeared first on Altaro DOJO | VMware.

]]>

I was recently toying with the idea of setting up a FreeNAS virtual machine on my ESXi 6.5 host. This would give me an alternative shared storage resource to add to my testing environment. With that in mind, I sat down and started looking around for recommendations and what not, on how to accomplish the task. Many of the posts I came across, recommended to go for passthrough, or passthru, technically known as DirectPath I/O. My original thought was to use RDM or a plain old vmdk and attach it to the FreeNAS VM.

The problem with the latter solution is mainly one of performance, regardless of this being a testing environment. With DirectPath enabled, I can give the FreeNAS VM direct access to a couple of unused drives installed on the host. So, without giving it further thought, I went ahead and enabled pass-through on the storage controller on my testing ESXi host.

And here comes the big disclaimer. DO NOT DO THIS ON A LIVE SYSTEM. You have been warned!

This post exists for the sole reason of helping some poor soul, out there, to recover from what turned out to be a silly oversight. More to come on this.

 

What went wrong?


The physical host I chose for my grandiose plan has a single Ibex Peak SATA controller with 3 disks attached to it. Nothing fancy. It's just software-emulated RAID, which ESXi doesn't even recognize, hence the non-utilized drives. I'm only using one drive, the same one ESXi is installed on. Remember, this is a testing environment. On this one drive, there's also the default local datastore and a couple of VMs.

The drive, by default, is split into a number of partitions during the ESXi installation. Two of the partitions, called bootbank and altbootbank, hold the ESXi binaries that are loaded into memory during the booting process. I’ll come back to this later.

 

Back to the business of enabling DirectPath I/O on a PCI card such as a storage controller. In the next screenshot, I’ve highlighted the ESXi host in question and labeled the steps taken to enable DirectPath for a PCI device. This is enabled by simply ticking a box next to the device name. In this case, I enabled pass-through on the Ibex Peak SATA controller. Keep in mind that this is my one and only storage controller.

 

After enabling DirectPath, you are reminded that the host must be rebooted for the change to take effect.

 

Again, without giving it much thought, I rebooted the host. Five minutes later, the host was back online. I proceeded to create the new VM for FreeNAS, but I quickly found out that I no longer had a datastore to create it on. And then it suddenly dawned on me! With DirectPath enabled, the hypervisor no longer has access to the device. Darn!

And that’s when I ended up with a bunch of inaccessible VMs on a datastore that was no more.

At this point, panic would have quickly ensued if this were a live system, but thankfully it was not. I started digging around in the vSphere client, and one of the first things I noticed was that the Ibex storage adapter was no longer listed under Storage Adapters, irrespective of the number of rescans carried out.

 

The logical solution that first came to mind was to disable DirectPath I/O on the storage controller, reboot the host, and restore everything to its former glory. So I did. I rebooted the host and waited another 5 minutes, only to discover it was still missing the datastore and storage adapter.

 

The Fix


The first fix I found on the net is the one described in KB1022011. Basically, you SSH to ESXi and inspect /etc/vmware/esx.conf looking for the line where a device’s owner value is set to passthru. This must be changed to vmkernel to disable pass-through. To verify the device is the correct one, look at the /device value and compare it to the Device ID value in vSphere Web client; Configure -> PCI Devices -> Edit.

In this example, the device ID, for the storage controller, is 3b25 meaning I know I’m changing the correct one. You can see this in the next screenshot.

 

To replace the entry, use vi, substituting passthru with vmkernel, which is what I did. I rebooted the host a third time and, much to my annoyance, it was still missing the datastore. After some more digging around, I came across these two links: The wrong way to use VMware DirectPath and KB2043048.

The first link is what helped me fix the issue and also provided me with the inspiration to write this post, so kudos goes to the author.

 

So, why did the fix fail?


The simple explanation is that, given my setup, the changes won't persist because everything is running off volatile memory, in RAM that is. This means that once ESXi is rebooted, the change made to esx.conf is lost too and it's back to square one. During the boot process, the ESXi binaries are loaded from partition sda5 (bootbank) or sda6 (altbootbank), depending on which one is currently marked active.

The host's configuration files are automatically and periodically backed up via script to an archive called state.tgz, which is written to whichever partition is active at the time. This backup mechanism is what allows you to revert to a previous state using Shift-R while ESXi is booting. Unfortunately, in my case, the pass-through change was backed up as well and copied to both partitions, or so it seemed.

ESXi has full visibility of the drive while booting up, which explains why it manages to boot in the first place despite the pass-through setting. It is only when esx.conf is read that the pass-through setting is enforced.

Reverting to a previous state using Shift-R did not work for me, so I went down the GParted route.

 

The GParted Fix


GParted, GNOME Partition Editor, is a free Linux-based disk management utility that has saved my skin on countless occasions. You can create a GParted bootable USB stick by downloading the ISO from here and using Rufus.

As mentioned already, we need to change the passthru setting in the backed-up esx.conf file found in the state.tgz archive. To do this, we must first uncompress state.tgz, which contains a second archive, local.tgz.

Uncompressing local.tgz yields an /etc folder in which we find esx.conf. Once there, open esx.conf in an editor, find and change the passthru entry, and compress the /etc folder back into state.tgz.

Finally, we overwrite the original state.tgz files under /sda5 and /sda6 and reboot the host.

Here’s the whole procedure in step form.

Step 1 – Boot the ESXi server off the GParted USB stick. Once it’s up, open a terminal window.

 

Step 2 – Run the following commands.

cd /
mkdir /mnt/hd1 /mnt/hd2 /temp
mount -t vfat /dev/sda5 /mnt/hd1
mount -t vfat /dev/sda6 /mnt/hd2

 

Step 3 – Determine which state.tgz file is the most recent. Run the following commands and look at the timestamps. You'll need to uncompress the most recent one so that you keep the current host configuration, save for the changes we'll be making.

ls -l /mnt/hd1/state.tgz
ls -l /mnt/hd2/state.tgz

Here I’m showing the file while SSH’ed to the host

 

Step 4 – Copy the most current state.tgz to /temp. Here, I’ve assumed that the most current file is the one under /mnt/hd1. Yours could be different.

cp /mnt/hd1/state.tgz /temp
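Steps 3 and 4 can be combined so that the fresher backup is picked automatically rather than by eyeballing timestamps. A sketch, defaulting to the mount points created in step 2:

```shell
#!/bin/sh
# copy_newest_state: copy whichever of the two state.tgz backups has the
# most recent timestamp into the working directory, instead of comparing
# the ls -l output by eye. Paths default to the mounts made in step 2.
copy_newest_state() {
    a="${1:-/mnt/hd1/state.tgz}"
    b="${2:-/mnt/hd2/state.tgz}"
    dest="${3:-/temp}"
    cp "$(ls -t "$a" "$b" | head -n 1)" "$dest"
}

# Example: copy_newest_state
```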

 

Step 5 – Uncompress state.tgz and the resulting local.tgz using tar. You should end up with an etc directory as shown.

cd /temp
tar -xf state.tgz
tar -xf local.tgz

 

Step 6 – Navigate to /temp/etc/vmware and open esx.conf in vi.

  • Press [/] and type passthru followed by Enter. This takes you to the line you need to edit assuming passthru has been enabled just for one device. If not, make sure the device ID matches as explained above.
  • Press [Insert] and change the value to vmkernel.
  • Press [ESC], [:] and type wq.
  • Press [Enter] to save the changes.
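If vi isn't your thing, the same edit can be done non-interactively with sed. This sketch runs against a mock esx.conf in the current directory; the device ID is made up for illustration, so on the real file make sure you match your own device's entry:

```shell
# Mock esx.conf entry (device ID is illustrative only)
echo '/device/000:002.0/owner = "passthru"' > esx.conf

# Flip the owner from passthru back to vmkernel, editing the file in place
sed -i 's/owner = "passthru"/owner = "vmkernel"/' esx.conf

cat esx.conf   # -> /device/000:002.0/owner = "vmkernel"
```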

 

Step 7 – Compress the archive and copy it back to /mnt/hd1 and /mnt/hd2.

cd /temp
rm *.tgz
tar -czf local.tgz etc
tar -czf state.tgz local.tgz
cp state.tgz /mnt/hd1
cp state.tgz /mnt/hd2
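Before overwriting the originals, it's worth listing the rebuilt archive to confirm the nesting is right. `tar -tzf` shows the contents without extracting anything, sketched here against a mock archive:

```shell
# Build a mock state.tgz with the same nesting, then list it
mkdir -p check/etc
echo 'placeholder' > check/etc/esx.conf
tar -C check -czf check/local.tgz etc
tar -C check -czf check/state.tgz local.tgz

# The outer archive should contain exactly one entry: local.tgz
tar -tzf check/state.tgz   # -> local.tgz
```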

 

Reboot the host and your datastore and virtual machines should spring back to life. The procedure worked for me: the storage adapter, datastore, and VMs all came back with no apparent issues.


 

Conclusion


The moral of the story is to test, test, and test again before replicating something on a production system. Reading the manual and doing some research first goes a long way in avoiding similar situations. The correct solution in this case, of course, would have been to install a second storage controller and enable DirectPath I/O on that. The optimal solution would be to install FreeNAS on proper hardware instead of virtualizing it, something I haven’t written about but might cover in a future post. In the meantime, have a look at the complete list of VMware posts on this blog for more interesting topics to learn from.


The post Why Enabling DirectPath I/O on ESXi can be a Bad Idea appeared first on Altaro DOJO | VMware.

How to Customize an ESXi image using VMware Image Builder https://www.altaro.com/vmware/esxi-image-vmware-image-builder/ https://www.altaro.com/vmware/esxi-image-vmware-image-builder/#comments Wed, 31 Jan 2018 15:19:38 +0000 https://www.altaro.com/vmware/?p=17758 Customizing ESXi images is crucial when it comes to provisioning VMware's hypervisor. Learn how to use VMware Image Builder to facilitate this task.


Starting with vSphere 6.5, it is now possible to customize ESXi ISO images for deployment using Auto Deploy or any other provisioning method. Consider the case where you need to deploy ESXi to 10 new servers. The ideal course of action is to take your existing ESXi 6.5 ISO image and add to it the latest patches, updates, and drivers released to date. The alternative is to install the basic ESXi and update each host individually at a later stage using Update Manager or manually.

We like to do things the easy way, so in today’s post, I’ll show how to use the Image Builder tool to customize your ESXi image. Since I mentioned Auto Deploy, do have a look at the Testing ESXi Auto Deploy in a nested environment post if you’re interested in learning how to test this feature.

 

What you’ll need


The Image Builder component is part of vCenter Server 6.5. The tool is accessed via the vSphere Web client. You’ll also need an ESXi 6.5 ISO image and this conversion tool, the use of which I’ll explain later in the post.

 

Enabling Image Builder


To start using Image Builder, and Auto Deploy for that matter, we need to enable the Auto Deploy and ImageBuilder services. Using the vSphere Web client, select Administration (2) from the Home (1) menu. Click on System Configuration and then select Services. Select both services (4), one at a time, and click on the play button (5) to start them.

Starting the Auto Deploy and ImageBuilder services in vSphere Web Client

 

Once you make sure that both services are running, log off and log back in again using the vSphere Web client. You should now see the Auto Deploy icon displayed on the Home screen.

 

Using VMware Image Builder


Before moving on, let me go over the various components we will be working with while using Image Builder.

  • VIB – this is the software packaging format used by VMware and 3rd party providers that write software for ESXi.
  • Image Profile – a logical container representing an ESXi image ultimately consisting of a base and other VIBs.
  • Software Depot – A collection of image profiles and VIBs which are accessible offline as a ZIP archive or online via an HTTP URL.

Note: Have a look at How to create persistent firewall rules on ESXi to learn how to create your own VIBs.

I’ll be working with all three components as per the following steps. To reiterate, the goal here is to create a customized ESXi ISO image containing the latest patches and a custom firewall rule that is required by the Altaro Backup software.

Using vSphere Web client, go to the Home screen and click on the Auto Deploy icon.

The Auto Deploy icon becomes visible once you start the required services and log back in to the vSphere Web client

 

Step 1 – Create a custom depot

Switch to the Software Depots tab and click on the Add Software Depot button.

Creating a custom software depot in Image Builder

 

Step 2 – Upload an ESXi image to the custom depot

For this post, I downloaded the ESXi 6.5 U1 Express Patch 4 which contains the full ESXi hypervisor image once extracted.

Downloading an ESXi patch from my.vmware.com

 

To upload the ESXi image (patch in this case) to the custom depot, click on the Import Software Depot icon (1) and navigate to the folder containing the image (2). Press Upload to write the image to the depot.

Uploading an ESXi image / patch to a custom software depot in Image Builder

 

Wait for the upload process to complete.

Image / patch upload progress in Image Builder

 

Step 3 – Preparing custom VIBs and drivers

I want to add the custom firewall rule I covered in the How to create persistent firewall rules on ESXi post. The issue you’ll be faced with is that you cannot upload VIBs directly to Image Builder without first converting them to a purposely crafted ZIP file. A quick Google search led me to the VIB2ZIP utility which you can download here.

Extract the downloaded utility. Use an administrative command shell to run vib2zip.cmd. Browse to the VIB’s location and select it as the source (1). Select the location where the ZIP file (offline bundle) will be created. Metadata, where applicable, is pulled from the VIB by clicking on Load from VIB; this populates the remaining fields. I had to add Altaro to the Vendor Code field, this being a mandatory value. Once you do this, press Run to generate the ZIP file.

Further instructions and examples are available on the author’s website.

The vib2zip utility is used to convert VIBs to the appropriate ZIP format for use with Image Builder

 

In general, vendor drivers come readily bundled in the correct ZIP format, meaning you can upload them to a software depot immediately. This renders the conversion step optional in most cases, but it is nevertheless worth mentioning. Here’s an example of a bundled Dell driver for ESXi.

ESXi driver offline bundle

 

Step 4 – Upload the image and VIBs / drivers to the depot

Click on the Import Software Depot icon (1) to upload drivers / VIBs you wish to include in the customized ESXi image. Navigate to the folder (2) where the zip file is located using the Browse button, type in a name (3) for it, and press Upload (4). As per the next screenshot, I’m uploading the firewall rule VIB which I converted to a zip archive in the previous step.

Uploading a custom VIB / driver in Image Builder

 

Step 5 – Building the custom ESXi image

At this point, we have everything we need to put together the custom ESXi image. We continue by selecting the custom software depot created in Step 1 and adding an image profile to it. The image profile is extracted from the ESXi 6.5 U1 Express Patch 4 uploaded in Step 2.

Switch to the Image Profiles tab and click on the New Image Profile button.

Adding an image profile to a custom depot

 

The fields on the next screen are self-explanatory and mandatory, save for Description. It is important that you select the correct custom depot, i.e. the one where you want the image profile created, from the drop-down box. This applies only when you have multiple custom depots. Press Next.

Setting the details for the image profile

 

We next define what we want included, or excluded, from the custom ESXi image.

Setting the Software Depot value to All Depot (1) ensures that any uploaded software packages are visible to the user. In the next screenshot, you can see the Altaro firewall rule displayed alongside the VIBs comprising ESXi. Tick the box next to each software package (2) you want included in the custom ESXi image. There’s no select all option, so you must tick every single one you want included. Bummer, I know! Press Next.

Note: The Acceptance Level (3) value should match the least privileged setting displayed for a software package, Community Supported in this case. From the testing I carried out, setting it to any other value will generally result in errors or failed ISO exports.

Ideally, you should have any custom VIB signed to the correct level. You can force the acceptance level on ESXi via the esxcli software command, but be aware that this will put ESXi in an inconsistent state and make it ineligible for VMware support.

Selecting the software packages to include in the image profile

 

Press Finish to complete the image profile creation process.

Completing the image profile creation process

 

Once the image profile has been created, you have several options to choose from, among them exporting the image profile as a bootable ISO or ZIP file, which is what we’re after.

Tasks that can be performed on an image profile

 

Step 6 – Creating a bootable ESXi ISO image

Click on the custom software depot (1) created in Step 1 and select the Image Profiles tab (2). Highlight the image profile (3) you wish to export and click on the Export Image Profile button (4). In the dialog box presented, select the format you want the profile exported to.

In our case, we select the ISO option (5) and click on the Generate Image (6) button. This will create the customized and bootable ESXi ISO image for us.

Generating a bootable ESXi ISO image from an image profile

 

If the export fails, tick the Skip acceptance level checking option.

An error is generated if the acceptance level on a package is not correctly set

 

If the export is successful, use the link provided to download the customized ESXi ISO image.

Download the customized ESXi ISO image to a local folder

 

Testing it


I’m going to install ESXi as a VM on Workstation Pro to verify that the ISO is valid. The things I’ll be looking for, apart from being able to boot from the ISO, are the build version and the inclusion of the firewall rule.

The image boots fine. Notice how the ESXi installer is modified to reflect the image profile used to create the ISO image.

Booting ESXi using the customized image

 

The build number 6765664 is also correct as it corresponds to ESXi 6.5 U1 Express Patch 4, something you can verify here.
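You can also confirm the build from an SSH session on the host with `vmware -vl`. Since that command only exists on ESXi, this sketch parses a captured sample of its output instead:

```shell
# Sample output captured from an ESXi host; vmware -vl only runs on ESXi
sample='VMware ESXi 6.5.0 build-6765664
VMware ESXi 6.5.0 Update 1'

# Pull out just the build number
echo "$sample" | grep -o 'build-[0-9]*' | cut -d- -f2   # -> 6765664
```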

Checking an ESXi host’s build number

 

Lastly, I’ll check if the firewall rule has indeed been added to the list and enabled.

An ESXi firewall rule included in a custom ESXi image is correctly loaded and enabled
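The same check can be scripted over SSH with `esxcli network firewall ruleset list`, which only exists on ESXi. Filtering a captured sample of its output (the AltaroBackup rule name here is hypothetical) might look like:

```shell
# Sample esxcli output; the AltaroBackup rule name is a made-up placeholder
rulesets='Name            Enabled
--------------  -------
sshServer       true
AltaroBackup    true'

# Confirm the custom rule is present and enabled
echo "$rulesets" | awk '$1 == "AltaroBackup" && $2 == "true" {print "enabled"}'   # -> enabled
```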

 

Conclusion


Being in a position to customize ESXi images is important for a number of reasons. These range from ensuring that patches and updates persist after a stateless ESXi host is rebooted to being able to install ESXi on hardware for which no vendor-modified image exists. If you want to learn about the various ESXi 6.5 provisioning methods, do have a look at Deploying vSphere ESXi 6.5.

How to deploy Dell OpenManage server components for VMware https://www.altaro.com/vmware/dell-openmanage-server-components-vmware/ https://www.altaro.com/vmware/dell-openmanage-server-components-vmware/#comments Mon, 29 Jan 2018 15:45:12 +0000 https://www.altaro.com/vmware/?p=17983 The OMSA Web Server and OpenManage Integration for VMware vCenter appliance allows you to manage your Dell server via the vSphere Web client respectively


Welcome to the second part of the series covering Dell’s OpenManage suite for VMware. In How to install Dell OpenManage Server Administrator on ESXi, I showed you how to install the Dell OMSA VIB on ESXi instances running on Dell servers. The VIB alone achieves very little unless you are able to interact with it. That’s where the OpenManage Web server and the OpenManage Integration for VMware vCenter appliance come into play.

Without further ado, let’s start off by covering the process of installing the OMSA Web Server on a Windows box.

 

Deploying the OMSA Web Server


Download the latest edition from here. Once you do, run the downloaded executable. By default, it will extract the archive contents to c:\openmanage.

OMSA Web Server self-extracting installer

 

Now, navigate to C:\OpenManage\docs\install_guide\Windows\en and run index.html. The software’s documentation is displayed in a browser window. Pay some attention to the system requirements even though the installer will tell you if and where you fall short.

OMSA Web Server HTML-based documentation

 

Next, navigate to C:\OpenManage\windows and run setup.exe. The installer runs a check to verify that all the system requirements are met. Click on Install Server Administration to proceed or take remedial action by installing any missing requirements.

System requirements and dependency checker

 

Follow the self-explanatory wizard to finish installing the web server. Once the installation completes, navigate to https://<OMSA Web Server IP address>:1311.

On the login form, type the hostname or IP address of an ESXi instance running the OMSA VIB. Supply the root credentials or equivalent ones. Tick the Ignore certificate warnings option, since the login attempt may fail otherwise.

Use the ESXi host credentials to access OpenManage nodes, that is, hosts running the OpenManage VIB

 

As can be seen in the next video, a wealth of information about your Dell server is exposed via an intuitive UI which can also be used, to a lesser extent, to alter system settings.


 

Deploying the OpenManage Integration for VMware vCenter appliance


This appliance requires vCenter Server and must be deployed using the vSphere Web client. The functionality provided, such as the ability to update firmware, goes beyond what’s available in the OMSA Web Server. The installation is also more complex, as I discuss next.

First off, download the latest edition of the appliance from here. This version provides support for vSphere 6.5 U1 which is what I have installed for this post.

Next, using a Windows machine, extract the downloaded ZIP file and run Dell_OpenManage_Integration.exe as administrator. Run the wizard and follow the instructions as per the set of screenshots shown next. The sequence of clicks, all redundant if I may add, is Next -> Accept EULA -> Next -> Install -> Finish.

The self-extracting installer for OM Integration for VMware vCenter

 

The installer simply extracts the OVF contents to the parent folder. You will then deploy the Integration appliance to a vCenter Server as per standard procedure.

The 3 self-extracted components comprising the appliance

 

Using the vSphere Web client, right-click on a host you want the appliance deployed to and select Deploy OVF template.

Deploying the appliance using the vSphere Web client

 

Click on Browse and navigate to the folder containing the OVF contents. Select all 3 files – OVF, VMDK, and MF – and click on Next.

When deploying the appliance using the vSphere Web client, make sure to select all 3 files

 

Select a name for the OpenManage appliance and the location where you want it installed. Press Next.

Selecting a deployment location

 

Select the ESXi host you want the appliance installed on and press Next.

Selecting a target ESXi host

 

Review the appliance deployment details and click Next.

OVF deployment review details

 

Choose between Thin and Thick disk provisioning and select the appropriate storage policy if applicable. Press Next.

Selecting the appliance storage type and datastore

 

Select a network the appliance will connect to and press Next.

Selecting the network the appliance will connect to

 

Review the final settings and press Finish to start deploying the appliance.

Kicking off the OVF deployment process

 

The OVF deployment may take a while depending on the performance of your host. Regardless, wait for the Deploy OVF template and Import OVF package tasks to complete.

Appliance deployment in progress…

 

Configuring the appliance

Once the deployment tasks complete, proceed to power up the appliance’s VM. Initially, the VM will reboot at least a couple of times while it self-configures and creates an evaluation license. When the login prompt is displayed (assuming you are remote consoled to the VM), type admin and press Enter.

Logging in to the appliance using the remote console

 

Set a password for the admin user. The minimum password length is 8 characters, and complexity is also enforced.

Setting the appliance’s admin password

 

A dialog is then displayed from where you can set the appliance’s time, network configuration, and hostname. Start by configuring the appliance’s network settings: click on Network Configuration, highlight the network interface, and click on Edit.

Setting the appliance’s network configuration

 

Change to the IPv4 Settings tab. While setting up the network interface, you should preferably set the allocation method to Manual (2) rather than DHCP. Then click on Add and type in the Address, Netmask, and Gateway values for the interface. Optionally, set the DNS Servers and Search Domains values as shown next. IPv6 is disabled by default. Click on Apply when ready.

Configuring the network interface

 

It’s a good idea to set the appliance’s hostname. Before doing this, make sure you have created the relevant lookup entries on your DNS server so the hostname can be resolved.
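A quick way to confirm the name resolves before you commit it is a lookup from any Linux box (localhost stands in here for the appliance’s FQDN):

```shell
# Verify a name resolves before assigning it as the appliance hostname.
# localhost is a stand-in for the appliance FQDN.
name=localhost
if getent hosts "$name" > /dev/null; then
  echo "$name resolves"
else
  echo "$name does not resolve; fix DNS first"
fi
```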

Setting the appliance’s hostname after creating the relevant DNS entries

 

Click on Reboot Appliance on the main screen to ensure that the configuration settings are applied.

Reboot the appliance after you’ve finished setting it up

 

Note: Do not skip the reboot step. In my case, the DNS settings, for instance, were applied only after the appliance was rebooted. This is important, as the next step will fail if you’re registering vCenter using its FQDN.

 

Registering with vCenter Server

Once the OpenManage appliance is back online, you should be able to browse to it over HTTPS as shown next.

Accessing the Integration appliance via its web console

 

This next step will register OpenManage with vCenter Server. After you log in using the admin password previously supplied, click on Register New vCenter Server. The Register a new vCenter dialog is displayed.

Type in the hostname or IP address of your vCenter Server. Note the recommendation to use the vCenter Server’s FQDN rather than the IP address (2). Under vCenter User Account, supply a valid set of credentials – e.g. administrator@vsphere.local – to grant the appliance permission to register. Pressing Register completes the process.

Registering a vCenter Server instance

 

Assuming the network settings provided are correct, the appliance will register with vCenter as shown next. You can unregister at any time.

vCenter Server registration status

 

At this point, you have the option to deploy a standard license or stick with the evaluation license. The latter, as mentioned, gives you a 90-day trial. Have a look at the Appliance Management screen for a detailed view of the appliance’s status.

You can view and manage the appliance’s settings and health from the Appliance Management page

 

Running the Appliance for the first time

Log back in to vCenter Server with the vSphere Web Client. If you’re using a previously established session, log off first and log back in again. On the Home screen, you should now see the OpenManage Integration icon.

The OpenManage Integration icon is added to the vCenter Server Home screen in the vSphere Web client

 

Clicking on the OpenManage Integration icon kicks off the appliance configuration wizard. Press Next on the Welcome screen.

The appliance must be initialized the first time it is used in vCenter

 

The previously registered vCenter Server instance is displayed in the vCenters drop-down box along with any other vCenter Servers you may have registered. Press Next to continue or go back and re-register vCenter Server as previously explained.

Selecting the vCenter Server instance against which the appliance is first initialized

 

The next step creates a connection profile. In a nutshell, a profile is an authentication object used to connect and talk to the Dell servers running ESXi. If you have iDRACs (management cards) installed on your Dell servers, it’s best to have them readily configured. Press Next to continue.

Setting up connection profiles

 

On the next screen, type a value for the Profile Name. The Description is optional. Under iDRAC Credentials and Host Root, supply the required credentials allowing full access to the iDRAC and the ESXi host respectively. You can use Active Directory authentication if both components have been appropriately configured. Certificate checks should be skipped unless certificates have been appropriately configured. Press Next.

Populating a connection profile with ESXi and iDRAC credentials

 

On the Associated Hosts screen, select the ESXi hosts running the OpenManage VIB. Press Next to continue.

Selecting the ESXi host running the OMSA VIB

 

On the Inventory Schedule screen, tick the Enable Inventory Data Retrieval option and set the times at which you want inventory information collected from the hosts. Here, I’ve set it to run every other day at 2 AM. Press Next to continue.

Setting an inventory collection schedule

 

Similarly, and optionally, you can configure Warranty Data retrieval from the Dell Online website. This pulls warranty information matching the service tags of your Dell servers off the Dell site. Press Next to continue.

Setting a warranty retrieval schedule

 

Finally, you can enable Events and Alarms for better monitoring. Be extra careful when enabling this option since you can inadvertently put hosts into maintenance mode if your cluster is set to Proactive HA event triggering. Pressing Finish completes the configuration process.

Enabling alarms for critical and warning events

 

Using the OpenManage Integration for VMware vCenter features


Due to post length restrictions, I’ll try and tackle this subject in a future post as a tentative part 3 of this series. However, as a taster, here are some of the goodies you can expect.

Using the vSphere Web client, select one of the ESXi hosts running on a Dell server and click on the Summary tab. Depending on your screen layout, you should be able to see two new panes: Dell EMC Host Information and Dell EMC Host Health. From the first pane, you can run a number of tasks, which include updating hardware firmware on your Dell server. The second pane gives you a quick glance at your server’s health.

 

This last video, courtesy of Dell, provides a quick intro on the features and benefits of OpenManage Integration for VMware vCenter.

 

Conclusion


In this 2-part series, I covered how to install Dell’s OpenManage Server Administrator on ESXi hosts and subsequently, the OMSA Web Server and OpenManage Integration for VMware vCenter appliance, two components which allow you to manage your Dell servers using a dedicated web console and the vSphere Web client respectively. The appliance provides advanced functionality that includes scheduled inventory collection, firmware updates, and more. It can be freely evaluated for 90 days, after which you must purchase a standard license. One major benefit is that you can now manage your Dell-based ESXi hosts directly from the vSphere Web client. Try as I may, I could not get this to work with the latest vSphere (HTML5) client, so I will update accordingly if this changes anytime soon.

How to add ESXi hosts to a distributed switch in template mode https://www.altaro.com/vmware/esxi-distributed-switch-template-mode/ https://www.altaro.com/vmware/esxi-distributed-switch-template-mode/#respond Fri, 26 Jan 2018 09:15:26 +0000 https://www.altaro.com/vmware/?p=17666 Adding ESXi hosts to a distributed switch in template mode allows you to quickly replicate network settings across a number of hosts.


In vSphere Networking Basics – Part 2, I talked about the vSphere Distributed Switch (vDS) and how it can help you centralize network management across your vSphere environment. On re-reading my post, I realized I left out an interesting vDS feature which is the ability to clone, for lack of a better word, the networking configuration of an ESXi host over to other hosts when hooking them up to a vDS.

This feature, or option if you prefer, is called template mode, and it is only accessible when adding new or existing hosts to a distributed switch. To use this feature, we first set up the template host by creating the required VMkernels and configuring the services they’ll be running, such as vMotion and vSAN. When we get to the part of adding hosts to the switch, we just tick an option to have the template host’s network configuration copied over to the remaining hosts. While testing the feature, I came across one major drawback, if I can call it that. I’ll address this issue at the end of the post.

Notes:

  • My lab is currently running vSphere 6.5 U1 meaning that some of the steps may differ or will not be available in older versions of vSphere. I’ll also be using the vSphere Web Client since the vSphere client (HTML5), at the time of writing, does not expose the template mode option.
  • Remember that distributed switches are a vCenter Server feature and available only to Enterprise Plus licenses users.

Let’s start off by configuring the template host.

 

Setting up a template host


Setting up an ESXi host for networking involves a few basic items amongst which;

  • Create the VMkernels.
  • Configure TCP/IP information for every VMkernel.
  • Configure each VMkernel for the respective service it will be used for.

For this post, I’m using two ESXi 6.5 hosts. The first will act as the template host and the second will inherit the network settings from the first.

 

Create and configure your VMkernel adapters

To start with, your ESXi hosts must have at least one preconfigured VMkernel adapter if the plan is to join them to vCenter Server and hook them up to a vDS. In this example, I’m creating a second VMkernel on the template host in addition to the default vmk0.

To do this, select the ESXi host in Navigator (1) and click the Configure tab (2). Expand Networking and select VMkernel adapters (3). Click on Add Host Networking button to create a new adapter.

Adding a VMkernel adapter on ESXi

 

Select the first option – VMkernel Network Adapter – and click Next.

Adding a VMkernel adapter on ESXi

 

Leave the switch option as set. We will be migrating the host to a vDS at a later stage. vSwitch0 refers to the default standard switch created when ESXi is installed. Press Next.

Selecting network switch placement

 

On the next screen, type in a name for the adapter in the Network Label field (1). Choosing a meaningful name allows you to quickly identify an adapter and its purpose. In this case, I named it vMotion VMkernel since it will be used only for vMotion traffic.

If VLANs are used on your network, set the VLAN ID for the network earmarked for vMotion traffic. Select the IP version – 4, 6 or both – from the IP Settings drop-down box.

As per my vSphere Networking Basics – Part 2, you can change the TCP/IP stack for optimal performance. You can only do this for vMotion and Provisioning traffic unless you create a custom stack. If you do go for a stack other than the default one, keep in mind the following 2 caveats:

  1. The TCP/IP stack type cannot be changed once set. The only workaround is to delete the VMkernel adapter and recreate it from scratch.
  2. The template mode feature does not copy the TCP/IP stack type over from the template host. Instead, the value on the target hosts is always set to Default. I did not find any information on the matter, so I am not sure if this is a limitation, a bug, or by design. If anyone knows, please share via a comment in the box below.

In this example, I set the TCP/IP stack type to vMotion (2) to show, further down, that the value is indeed not copied over to the target host(s). Also note that vMotion is automatically selected as the only enabled service, with every other option grayed out, which is the expected behavior.
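For reference, the TCP/IP stacks defined on a host can also be inspected – and custom ones created – from an SSH session. A minimal sketch; the stack name myCustomStack is purely illustrative, and you would only add a stack when the built-in Default, vMotion, and Provisioning stacks don’t fit your needs:

```shell
# List the TCP/IP stacks currently defined on this host
esxcli network ip netstack list

# Add a custom TCP/IP stack (the name "myCustomStack" is hypothetical)
esxcli network ip netstack add -N "myCustomStack"
```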

Configuring a VMkernel adapter


 

Next, configure the IP settings for the adapter. It’s always a good idea to go for static settings instead of relying on DHCP-assigned addressing. Press Next.

Configuring IP addressing for a VMkernel adapter


 

Before pressing Finish, review the settings one more time.

VMkernel adapter configuration summary


 

The template host now has two VMkernel adapters which, at this point, are still connected to the default standard switch vSwitch0 present on each host. As a side note, you can use PowerCLI to quickly retrieve a list of VMkernel adapters on one or all hosts like so:

(Get-VMHost).NetworkInfo.VirtualNic | Format-Table VMHost, Name, IP
Using PowerCLI to retrieve VMkernel details


 

Looking at the command output, esx6-d.vsphere65.local is the designated template host with two VMkernel adapters, vmk0 and vmk1. The target host, esx6-e.vsphere65.local, is configured with just one adapter.
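To drill into a single host instead, PowerCLI’s Get-VMHostNetworkAdapter cmdlet accepts a -VMHost parameter. A minimal sketch, using the template host’s name from this lab:

```powershell
# VMkernel adapters on the template host only
Get-VMHostNetworkAdapter -VMHost esx6-d.vsphere65.local -VMKernel |
    Format-Table Name, IP, SubnetMask, PortGroupName
```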

 

Adding hosts to a distributed switch in template mode


This section outlines the process of adding ESXi hosts to a distributed switch using template mode. In vSphere Web Client, select the Networking tab (1) and either create a new vDS or select an existing one as I illustrate next. Right-click on the vDS name and select Add and Manage Hosts from the context menu.

Adding ESXi hosts to a distributed switch


 

Select Add hosts on the next screen and press Next.

Adding ESXi hosts to a distributed switch


 

A list of all the ESXi hosts managed by vCenter and not already connected to the vDS is displayed. Select the hosts you want to connect (1) and press Next.

Selecting which hosts to add to a vDS


 

In the next screenshot, notice the option (1) at the bottom of the screen, which is what enables template mode. Tick the option and press Next to continue.

Note: Clicking on the information icon next to the option gives you the following:

Use this option if you want to make the network configuration of the physical and VMkernel network adapters on this distributed switch identical on the selected hosts. Mark one of the hosts as a template. Use its existing network configuration or create a new one on the distributed switch, and apply it automatically on the other hosts. The configurations of the physical and VMkernel network adapters are applied separately. You can repeat this process as many times in the wizard as needed to achieve the desired network configuration. A host remains a template only during the lifetime of this instance of the wizard. The distributed switch is not aware of your choice and the network configuration will not be synchronized in the background if a network configuration change occurs on the template or any other host in the selection.

An important point is made here: the template host’s settings are applied once, and only once. New hosts added to the vDS at a later stage will not inherit settings from the host previously designated as the template, or from any other host.

Enabling the template mode option


 

On this next screen, make sure you select the template host – esx6-d in my case – from the list of hosts displayed.

Designating the template host


 

Since I’m adding newly installed hosts, the first two network adapter tasks will suffice. The first takes care of connecting the ESXi host’s physical NICs to the vDS uplinks. The second handles mapping VMkernel adapters to ports on the vDS.

Network adapter tasks


 

This is where we start applying the template host’s network configuration to the other hosts.

In the upper pane, you review the template host’s current uplink settings. Since we’re moving from a standard to a distributed switch, we must change the uplink to one on the vDS as per standard procedure.

The rest of the hosts are listed in the bottom pane. Click the Apply to all button (4) to copy the uplink settings from the template to the rest of the hosts. Click Next to continue.

Setting uplink ports in template mode


 

Next, we set the template host’s VMkernel adapters (1) to use the distributed port group on the vDS we’re connecting to, via the Assign port group button (2). Clicking Apply to all (3) copies the port group settings to the remaining hosts. Notice how the wizard creates an additional vmk1 VMkernel adapter marked (new), as shown next.

This basically answers a question a reader had brought up – Does template mode create new VMkernel adapters? – something I wasn’t sure about myself, hence this post!

Setting VMkernel adapters in template mode


 

Since it’s a very bad idea to have ESXi hosts and VMkernel adapters configured with identical IP addresses, you will be prompted to enter new IP addresses for all the target VMkernel adapters, both existing ones and any new ones being created. Since vmk0 on the target hosts is already configured – this is always the case, as the host could not have been added to vCenter otherwise – your only option is to type in its currently assigned IP address (1); the IPv4 address field cannot be left blank. Likewise, you must type in the IP address of any other VMkernel adapter being created.

Configuring IP addressing for the various VMkernel adapters


 

Press Next to skip the next screen.

 

Pressing Finish completes the template host network configuration cloning process.

Completing the process


 

When configuring multiple hosts, use the status or recent tasks window to monitor the progress. The next screenshot shows the addition and configuration of vmk1, the second VMkernel adapter created on esx6-e.vsphere65.local.

Monitoring network tasks in the recent tasks window


 

You can run the previous PowerCLI command to verify that the vMotion VMkernel has been created and assigned the correct IP address.
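As a quick sketch, the same one-liner can be broadened to include each adapter’s port group and vMotion service flag, which makes the newly created adapters easy to spot:

```powershell
# Show the port group and vMotion flag alongside the basics
(Get-VMHost).NetworkInfo.VirtualNic |
    Format-Table VMHost, Name, IP, PortGroupName, VMotionEnabled
```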

Using PowerCLI yet again to retrieve VMkernel information


 

Additionally, use the vSphere Web Client or the esxcli command to verify the same. As shown next, the correct configuration has been applied, save for the TCP/IP stack type, which is set to Default instead of vMotion as originally configured.

Inspecting VMkernel details in vSphere Web Client


 

Being a doubting Thomas, I had to rule out possible glitches with the Web Client’s UI. To do this, I ran the following esxcli command from an SSH session, which outputs information about the host’s VMkernel adapters. The Netstack Instance property is set to defaultTcpipStack, meaning there’s nothing wrong with the UI!

esxcli network ip interface list
Using the esxcli command for more detailed VMkernel info

 

Conclusion


Adding hosts to a vDS using template mode is a convenient way of ensuring a homogeneous network configuration across your ESXi hosts. The one drawback I came across is the inability to carry over the TCP/IP stack value from the template host’s configuration. Having to type in the IP addresses of preconfigured VMkernel adapters is also somewhat annoying, but something I can live with.

Besides Part 2 of the networking series, you might also want to give Part 1 a read to learn about standard switching in vSphere if this is all new to you.

Ask Me Anything!


If anything here is unclear or you have anything to add, please don’t hesitate to write to me in the comments section below. I’m always happy to help my fellow IT pros (and I learn from you guys too!)

The post How to add ESXi hosts to a distributed switch in template mode appeared first on Altaro DOJO | VMware.
