October 30, 2017

Using Packer to Automate vSphere Template Builds

This weekend I discovered Packer from HashiCorp.

And My World Was Changed…Just Way After Everyone Else

I had decided to add a few more catalog items to vRealize Automation and realized that I didn’t have any Ubuntu templates loaded in my environment. I figured I would load up a couple builds - specifically 16.04 and 17.04. I hopped on Google, started looking for best practices around configuring an Ubuntu template, and stumbled upon articles discussing automating template builds with something called Packer. I’ve worked with Vagrant many times in the past, and I just spent a ton of time with another colleague checking out some Terraform providers. Summed up - HashiCorp has nerd-street-cred on lock :)

Admittedly, it’s a little embarrassing to be so excited about what Packer is. When I dug around the interwebs, I realized I’m extremely late to the party. It’s like that guy who pops on Twitter and freaks out about Jon Snow spoilers. I digress.

Why Do I Think Packer Is So Cool?

I’m a firm believer that a fundamental principle moving forward in the IT field is “Infrastructure as Code”. There’s a growing focus across all types of businesses around “codifying” their infrastructure builds. A few of the key reasons are below:

  • Automation - Whether it’s vRA, Ansible, Puppet, or Chef, infrastructure and configuration as code enables automation platforms to consume the code and get stuff done.
  • Ease of Troubleshooting - All configuration options should be listed in the code. Parse through and change settings around. It’s easy to iterate new builds from those configurations and understand what changes result in success/failure.
  • Knowledge Transfer - Once your deployment is in code, it’s easy to step colleagues through it to teach them the whys and the hows of your configuration.
  • Version Control - It’s easy to see changes to the baseline template build. This helps with compliance - even just knowing what “Carl” is changing around in the latest template build. It doesn’t help when “Carl” doesn’t use good commit notes, but that’s addressable through other avenues :)

Packer is the embodiment of the infrastructure as code model. JSON files and configuration files interface with Packer’s own “Builders”, “Provisioners” and “Post-Processors” to instantiate a deployment. In our example, it will automatically pull down the Ubuntu ISO, check its hash to ensure it’s “intact”, use a preseed file to configure Ubuntu, issue boot commands, and much more!
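At a high level, the template that drives all of this is just a JSON document with those three sections. A minimal sketch of the shape (the values here are placeholders for illustration, not from my actual build, which comes later in this post):

```json
{
  "builders": [
    { "type": "vmware-iso", "iso_url": "http://example.com/os.iso" }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["echo provisioning steps run here"] }
  ],
  "post-processors": []
}
```

Builders create the machine, provisioners configure it once it's up, and post-processors act on the finished artifact.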

What you’re left with at the end is a powered-off VM that you can easily convert over to a template. The framework is there to automate that next step; however, that part is not functional currently (unless someone speaks up to tell me otherwise…) and I’ve included details about the PR submitted to resolve this at the end of this post.

Getting Started

The Packer model takes a standard “Pre”, “During” and “Post” approach to infrastructure builds. These are aptly titled “Builders”, “Provisioners” and “Post-Processors”. To those ends, Packer has deep integrations with many different platforms. For our example, we’re specifically concerned with the vSphere integration.

Before we can do anything, we need to install Packer. We can do this one of two ways…

  • By downloading the executable for our operating system (Windows in my case) directly from Packer.io
  • Using Chocolatey, the popular package manager for Windows.

I’m on a bit of a Chocolatey kick lately, so I used that for my installation. Getting started with Chocolatey is super easy. Using an administrative PowerShell prompt, simply run…

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

…and Chocolatey will be installed. From there, we can run choco install packer to install Packer. The benefit of going this route is that your path variables are automatically configured, so you don’t need to drop the Packer executable into the directory where your files are! With Packer installed, we’re ready to prep our VMware environment.

VMware Environment Prep

There are a couple of things we need to set up to leverage the “vmware-iso” “Builder”. It’s a good idea to check out HashiCorp’s documentation on the topic even though I cover parts of it down here. This guide focuses on building directly on a vSphere host, even though Packer can build on VMware Workstation or Fusion as well.

We have two things we need to complete to start the build:

  • Enable the advanced setting “GuestIPHack”, which allows the host to determine the guest’s IP address from its ARP packets
  • Enable ESXi Firewall Rules for Packer to communicate over VNC.

To enable the GuestIPHack, SSH to your ESXi host and run the following command

esxcli system settings advanced set -o /Net/GuestIPHack -i 1

Setting up the firewall rule takes a few more steps - refer to KB 2008226 for more details. Thanks to Nick Charlton for these steps on his blog here

Update the permissions on the firewall service XML file to allow us to upload our new firewall changes directly.

chmod 644 /etc/vmware/firewall/service.xml
chmod +t /etc/vmware/firewall/service.xml

Append the following rule (based on Nick’s example) to the end of the file, within the configuration section, to open the VNC port range (5900-6000) that Packer uses…

<service id="1000">
  <id>packer-vnc</id>
  <rule id="0000">
    <direction>inbound</direction>
    <protocol>tcp</protocol>
    <porttype>dst</porttype>
    <port>
      <begin>5900</begin>
      <end>6000</end>
    </port>
  </rule>
  <enabled>true</enabled>
  <required>false</required>
</service>

Restore file permissions and reload firewall…

chmod 444 /etc/vmware/firewall/service.xml
esxcli network firewall refresh

With our environment prepped, we’re ready to move forward with building the configuration files that Packer will use to build our template.

Packer Configuration Files

Nick also hosts a great set of Packer configurations that can be forked and customized for your own use. Ultimately, I used these to get started on my own build. You can find them here, and they are a great way to get building quickly.

There are a few files that we need to move forward:

  • Configuration JSON for Packer (Ubuntu-1604.json)
  • Secure Variables File for Configuration (variables.json; added to our .gitignore to prevent sending our credentials into GitHub)
  • Ubuntu Preseed (Ubuntu-1604-Preseed.cfg)
  • Open-VM-Tools Installation Script (open-vm-tools.sh)

Looking at our configuration JSON (again, based on Nick’s configuration with some slight tweaks)…


{
  "builders": [{
    "name": "template_ubuntu1604",
    "vm_name": "Template_Ubuntu1604",
    "type": "vmware-iso",
    "guest_os_type": "ubuntu-64",
    "tools_upload_flavor": "linux",
    "headless": false,

    "iso_url": "http://releases.ubuntu.com/16.04/ubuntu-16.04.3-server-amd64.iso",
    "iso_checksum": "1384ac8f2c2a6479ba2a9cbe90a585618834560c477a699a4a7ebe7b5345ddc1",
    "iso_checksum_type": "sha256",
    "vnc_disable_password": "True",

    "ssh_username": "humblelab",
    "ssh_password": "humblelab",
    "ssh_timeout": "15m",

    "disk_type_id": "thin",

    "floppy_files": [
      "Ubuntu-1604-Preseed.cfg"
    ],

    "boot_command": [
      "/install/vmlinuz noapic ",
      "preseed/file=/floppy/Ubuntu-1604-Preseed.cfg ",
      "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
      "hostname={{ .Name }} ",
      "fb=false debconf/frontend=noninteractive ",
      "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ",
      "keyboard-configuration/variant=USA console-setup/ask_detect=false ",
      "grub-installer/bootdev=/dev/sda ",
      "initrd=/install/initrd.gz -- <enter>"
    ],

    "shutdown_command": "echo 'shutdown -P now' > shutdown.sh; echo 'humblelab'|sudo -S sh 'shutdown.sh'",

    "remote_type": "esx5",
    "remote_host": "{{user `esxi_host`}}",
    "remote_datastore": "{{user `esxi_datastore`}}",
    "remote_username": "{{user `esxi_username`}}",
    "remote_password": "{{user `esxi_password`}}",
    "keep_registered": true,

    "vmx_data": {
      "ethernet0.networkName": "Common"
    }
  }],

  "provisioners": [{
    "type": "shell",
    "scripts": [
      "open-vm-tools.sh"
    ],
    "execute_command": "echo 'humblelab' | {{ .Vars }} sudo -E -S bash '{{ .Path }}'"
  }]
}
Note: I was running into errors regarding the automatic configuration of a VNC password, so I disabled it using "vnc_disable_password": "True". This was one of a few changes I made.

Major call-outs for this configuration file are below:

  • We configure VM properties; things like name, deployment name, what builder we’re using, and guest OS type to support the build we are targeting. We also set the disk type and the network that the machine should be deployed to. Remember to use a DHCP-enabled network, or configure static values within your preseed file and this configuration file if you want to go a different route.
  • We target the ISO build location URL as well as the sha256 hash for the file. These are readily available from Ubuntu within their repository.
  • We set the username and password that Packer will use for its SSH connection. This user is created in our Ubuntu-1604-Preseed.cfg file.
  • We target our preseed and issue a very specific boot command keyed to the exact keystrokes we need to use. This is tedious but absolutely necessary. Read the docs!
  • We configure specific shutdown commands, and ESXi host targets.
  • We configure a script to be run, which basically uses apt to install open-vm-tools.
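On the checksum point: the iso_checksum comparison Packer performs is a plain SHA-256 check, and you can reproduce it by hand before a build. The sketch below runs against a stand-in file so it’s self-contained; in practice, point ISO at your downloaded installer and paste the published hash into EXPECTED.

```shell
# Stand-in file so the snippet runs anywhere; use your downloaded ISO in practice.
ISO="stand-in.iso"
printf 'not a real ISO\n' > "$ISO"

# In real use, EXPECTED is the sha256 published alongside the ISO on releases.ubuntu.com.
EXPECTED=$(sha256sum "$ISO" | awk '{print $1}')

# This is the same comparison Packer makes before booting the installer.
ACTUAL=$(sha256sum "$ISO" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
  exit 1
fi
```

If the hashes differ, stop and re-download; Packer will refuse the ISO for the same reason.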

As you can see in the JSON, we’re using variables for the ESXi values. This allows us to keep our configurations in a separate file that we can apply better security to. Let’s check out how that file is structured next.


{
  "esxi_host": "hltenesxi01.humblelab.com",
  "esxi_datastore": "hl-block-ds01",
  "esxi_username": "root",
  "esxi_password": "VMware123!"
}

These values match up perfectly with what we called out in our configuration JSON. Easy stuff.

Next up, we need to setup our preseed file.


# Based upon Nick Charlton's Example at https://github.com/nickcharlton/packer-esxi
# https://nickcharlton.net/posts/using-packer-esxi-6.html

# localisation
d-i debian-installer/locale string en_US.utf8
d-i console-setup/ask_detect boolean false
d-i keyboard-configuration/layoutcode string us

# networking
d-i netcfg/choose_interface select auto
d-i netcfg/get_hostname string ubuntu16temp
d-i netcfg/get_domain string humblelab.com
d-i netcfg/wireless_wep string

# apt mirrors
d-i mirror/country string manual
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string

# clock and time zone
d-i clock-setup/utc boolean true
d-i time/zone string GMT
d-i clock-setup/ntp boolean true

# partitioning
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
# fix: http://serverfault.com/questions/189328/ubuntu-kickstart-installation-using-lvm-waits-for-input
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select atomic
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true

# users
d-i passwd/root-login boolean true
d-i passwd/root-password password VMware123!
d-i passwd/root-password-again password VMware123!
d-i passwd/user-fullname string humblelab template
d-i passwd/username string humblelab
d-i passwd/user-password password humblelab
d-i passwd/user-password-again password humblelab
d-i user-setup/allow-password-weak boolean true
d-i user-setup/encrypt-home boolean false

# packages
tasksel tasksel/first multiselect standard, ubuntu-server
d-i pkgsel/install-language-support boolean false
d-i pkgsel/include string openssh-server nfs-common curl git-core
d-i pkgsel/upgrade select full-upgrade
d-i pkgsel/update-policy select none
postfix postfix/main_mailer_type select No configuration

# boot loader
d-i grub-installer/only_debian boolean true

d-i preseed/late_command string \
    echo 'humblelab ALL=(ALL) NOPASSWD: ALL' > /target/etc/sudoers.d/humblelab ; \
    in-target chmod 440 /etc/sudoers.d/humblelab ;

# hide the shutdown notice
d-i finish-install/reboot_in_progress note

This file is massive. You’ll want to pay a lot of attention to what’s configured in here, because it can make your life A LOT easier, or A LOT worse. The configuration above is based on my environment and again borrows heavily from what Nick used in his. Take special note of the below chunk, which I added…

d-i preseed/late_command string \
    echo 'humblelab ALL=(ALL) NOPASSWD: ALL' > /target/etc/sudoers.d/humblelab ; \
    in-target chmod 440 /etc/sudoers.d/humblelab ;

I was running into permission problems in my build around root permissions; specifically in the 17.xx builds. Adding my user to sudoers helped resolve that.
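For reference, that late_command leaves this one-line file at /etc/sudoers.d/humblelab inside the installed system, which is what lets the Packer provisioner sudo without a password prompt:

humblelab ALL=(ALL) NOPASSWD: ALL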

The easiest file of them all is the open-vm-tools.sh file which simply has the following command in it:


apt-get install -qy open-vm-tools

With those files in place, we’re ready to run our actual Packer build!

Running Our Packer Build

With Packer installed via Chocolatey and all of our configuration files in the same directory, we can kick off the build. CD to the directory with your configuration files and run packer build -var-file variables.json Ubuntu-1604.json from the prompt.

If everything is configured correctly you should see some output similar to the below, and the build will start!

<img src="/images/2017-10-31-using-packer-to-automate-templates/packer-1.jpg#center" alt="Packer Build" style="width: 700px;"> Note: You’ll typically see the ISO download. Mine didn’t in this case because I already had it cached.

If we check out the VM inside of vCenter, or by connecting directly to the ESXi host, we can see the installation process moving forward. Below we can see the boot command being typed out at the bottom of the screen.

<img src="/images/2017-10-31-using-packer-to-automate-templates/ubuntu-install.JPG#center" alt="Packer Build" style="width: 700px;">

The IP is automatically detected from the ESXi host via ARP responses thanks to the GuestIPHack we enabled earlier. Once SSH is available, you’ll see the command window start running again and move forward with our script installations and then a graceful shutdown of the system.

<img src="/images/2017-10-31-using-packer-to-automate-templates/ssh-success.jpg#center" alt="SSH Packer Build" style="width: 700px;">

After all of our scripted installations are complete, we’ll see the Packer command line indicate that the build is complete, and the machine has been successfully shut down. All components are cleaned up, and the machine is left in a powered off state to be converted over to a template.

<img src="/images/2017-10-31-using-packer-to-automate-templates/packer-complete.jpg#center" alt="Packer Complete" style="width: 700px;">

Unfortunately, in its current configuration, Packer doesn’t appear to be able to take the handoff from this point and automatically convert the resulting VM into a template. There is currently a PR open on the Packer GitHub to address this - enable vsphere-template post processor to work with local builders. I’ll be keeping an eye out for this one!

From here, we can easily log into vCenter or use PowerCLI and convert to a template as normal!
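If you’d rather script that last step, a PowerCLI one-liner will do it - a sketch, assuming an existing Connect-VIServer session and the vm_name from the configuration JSON above:

```powershell
# Assumes you've already connected with Connect-VIServer
Get-VM -Name "Template_Ubuntu1604" | Set-VM -ToTemplate -Confirm:$false
```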


Once the process completes, we have an automated build of a vCenter template. We can take this bundle of files and commit it to a GitHub repository as a code representation of our gold image. In the short term, make sure you check out Nick’s repository for packer-esxi to get started quickly!

Looking at the files - it’s easy for us to make some simple changes to move to pulling down Ubuntu 17.10. There are tons of examples on the internet of additional Packer files for a variety of operating systems. On top of that, there is a mature plugin community built around Packer to extend the platform even further and enable new integrations!

HashiCorp is a pretty exciting company; and is actively developing a ton of products. It seems like everything they touch turns to gold these days and Packer is just another example of a useful tool from their toolbelt!

(c) 2021 Copyright TheHumbleLab