Automating Ubuntu 18.04 vSphere Templates with HashiCorp Packer


Building Templates Manually is Boring AF

Quite a while back I did a post on leveraging HashiCorp’s Packer product to build vSphere Templates. There were a couple of gaps coming out of that post…

  • I was wicked new to Packer and had much to learn
  • The Packer post-processor was immature when it came to vSphere Template conversion; meaning, it couldn’t do it at the time

Refresher on Packer

What is Packer? Packer is a tool for automating the build of images. These images can be any number of end-state artifacts, based on Packer “builders”. Out of the box, the Packer build engine supports common platforms like AWS EC2, Azure, GCE, vSphere, and many others, and this builder functionality can be extended or enhanced through a vast plugin community. Configurations for these items are stored in a build manifest that Packer reads to determine how to build. Once Packer completes its “builder” task, it moves into its provisioner tasks. These are centered on “in guest” operations: Ansible/Chef/Puppet configurations, shell executions, reboots, etc. Finally, once the provisioner tasks are completed, it moves into post-processing. This is most commonly things like uploading to endpoints, pushing code, and exporting artifacts. Within each phase you can chain a number of these steps together as well; you aren’t limited to just one path.
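Those three phases map directly onto the top-level keys of a build manifest. Here’s a minimal sketch (the builder details are omitted, and the provisioner and post-processor lines are illustrative placeholders, not part of the build we’ll do below):

```json
{
  "builders": [
    { "type": "vsphere-iso" }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["echo 'in-guest provisioning step'"] }
  ],
  "post-processors": [
    { "type": "shell-local", "inline": ["echo 'export/upload step'"] }
  ]
}
```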

To me, Packer is one of the truest forms of infrastructure as code. Your build is completely defined in the build manifest and supporting artifacts. It’s clear to see what you are “getting” out of the build process, and it’s easy to make changes and adjust for future builds. As a result of being a true Infrastructure as Code approach, your end result is consistent across all builds.

An additional benefit to Packer is that you can establish a consistent image across all clouds you are interacting with, public or private. You can even create Docker images with it!

As you can see, there are a TON of use cases that Packer could satisfy. Today however, we are going to focus primarily on building a new template for vSphere.

Getting Started

Recently, with my work on VMware Cloud Automation Services (go check it out and sign up for a 30-day trial!) I had a need to get an Ubuntu 18.04 template built. I saw this as an opportunity to revisit Packer and drop a little update on the topic.

While getting started on this, I found that better ways to accomplish these builds have emerged. I’m going to summarize them in this post. To get started, we’re going to need the following items…

I’m doing this from a Mac, but the process should be roughly the same on all platforms. Once you download the Packer binaries, extract the packer binary and place it somewhere on your PATH.

unzip packer_1.3.5_darwin_amd64.zip
sudo chmod +x packer && sudo mv packer /usr/local/bin/packer

We can also go ahead and clone down the GitHub repo for my content

git clone https://github.com/codyde/packer-vsphere-builds
cd packer-vsphere-builds/ubuntu-18/

Within this folder we have a few files…

  • ubuntu-18.json - This file is the builder manifest. It contains all the required configurations for Packer to build
  • variables.json.sample - This is a sample variables file for the previously mentioned manifest. Remove .sample from the end if you intend to use it, or update the ubuntu-18.json file with static values if you wish
  • preseed.cfg - This file is where the Ubuntu unattended install configurations live. I’d highly recommend giving the preseed documentation a read - https://help.ubuntu.com/lts/installation-guide/s390x/apb.html

Now, if we run a validate action from Packer via the CLI…

packer validate ubuntu-18.json

We will receive an error indicating that the vsphere-iso builder doesn’t exist. This is because we need the JetBrains plugin, which we can easily pull down directly from the JetBrains GitHub repository.

# Linux
wget https://github.com/jetbrains-infra/packer-builder-vsphere/releases/download/v2.3/packer-builder-vsphere-iso.linux
# MacOS
wget https://github.com/jetbrains-infra/packer-builder-vsphere/releases/download/v2.3/packer-builder-vsphere-iso.macos

Execute a chmod +x against whichever one you downloaded, and rerun our validate. It should validate successfully. Yay!
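On the Mac side, for example, the whole plugin step can be sketched like this. The touch line stands in for the wget download above so the snippet is self-contained, and ~/.packer.d/plugins is one of the directories Packer searches for plugins (it also checks the working directory):

```shell
# Stand-in for the wget download above, so this sketch runs on its own
touch packer-builder-vsphere-iso.macos
# Make the plugin executable
chmod +x packer-builder-vsphere-iso.macos
# Drop it into Packer's plugin directory; the file name must match the
# builder type, so we rename it to packer-builder-vsphere-iso
mkdir -p ~/.packer.d/plugins
mv packer-builder-vsphere-iso.macos ~/.packer.d/plugins/packer-builder-vsphere-iso
```

Keeping the plugin in ~/.packer.d/plugins means it’s picked up no matter which directory you run packer from.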

Before We Build

There are a few things we need to update. Let’s open our variables.json (assuming you are using it, like you should…)

{
    "vcenter_server":"vcenter",
    "username":"administrator@vsphere.local",
    "password":"VMware123!",
    "datastore":"vsanDatastore",
    "folder": "_Templates",
    "host":"esxihost",
    "cluster": "workercluster",
    "network": "VM Network",
    "ssh_username": "vm preseed user",
    "ssh_password": "vm preseed password"
}

These variables will be passed directly into the builder during execution. Update these with values from your environment. Once these are updated, save and close.
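One stray comma here will fail the build, so a quick structural check is cheap insurance. A minimal sketch using Python’s stdlib json.tool; the printf line writes a trimmed stand-in file so the snippet runs on its own — point the check at your real variables.json in practice:

```shell
# Stand-in for an edited variables.json, trimmed to two keys
printf '{"vcenter_server": "vcenter", "username": "administrator@vsphere.local"}' > /tmp/variables-demo.json
# json.tool exits non-zero on malformed JSON, so typos surface here
# instead of partway through a build
python3 -m json.tool /tmp/variables-demo.json > /dev/null && echo "valid JSON"
```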

Next let’s open and validate our preseed.cfg file…

d-i passwd/user-fullname string hladmin
d-i passwd/username string hladmin
d-i passwd/user-password password VMware123!
d-i passwd/user-password-again password VMware123!
d-i user-setup/allow-password-weak boolean true

d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string regular
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true

d-i passwd/root-login boolean true
d-i passwd/root-password password VMware123!
d-i passwd/root-password-again password VMware123!


d-i pkgsel/include string open-vm-tools openssh-server cloud-init

d-i grub-installer/only_debian boolean true

d-i preseed/late_command string \
    echo 'hladmin ALL=(ALL) NOPASSWD: ALL' > /target/etc/sudoers.d/hladmin ; \
    in-target chmod 440 /etc/sudoers.d/hladmin ;

d-i finish-install/reboot_in_progress note

There are a few things we’ll want to update in here as well. Note the username and password fields; make sure they match the SSH credentials you added to your variables file. Also update the late_command to match the username you are leveraging. Finally, add any additional packages you want installed during the build sequence to the pkgsel/include line.
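One hedge worth mentioning: the preseed above stores passwords in plaintext. Debian-style preseeds also accept a crypt(3) hash via the passwd/user-password-crypted key, and a SHA-512 hash can be generated with openssl (the -6 flag needs OpenSSL 1.1.1 or newer):

```shell
# Generate a SHA-512 crypt hash; paste the output into:
#   d-i passwd/user-password-crypted password <hash>
# in place of the two plaintext user-password lines above
openssl passwd -6 'VMware123!'
```

The salt is random, so the output differs on every run, but any of the generated hashes will work in the preseed.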

Finally, let’s open our ubuntu-18.json file and observe the configuration

{
    "builders": [
      {
        "type": "vsphere-iso",
  
        "vcenter_server":      "{{user `vcenter_server`}}",
        "username":            "{{user `username`}}",
        "password":            "{{user `password`}}",
        "insecure_connection": "true",
  
        "vm_name": "template_ubuntu18",
        "datastore": "{{user `datastore`}}",
        "folder": "{{user `folder`}}",
        "host":     "{{user `host`}}",
        "convert_to_template": "true",
        "cluster": "{{user `cluster`}}",
        "network": "{{user `network`}}",
        "boot_order": "disk,cdrom",
  
        "guest_os_type": "ubuntu64Guest",
  
        "ssh_username": "{{user `ssh_username`}}",
        "ssh_password": "{{user `ssh_password`}}",
  
        "CPUs":             2,
        "RAM":              2048,
        "RAM_reserve_all": true,
  
        "disk_controller_type":  "pvscsi",
        "disk_size":        32768,
        "disk_thin_provisioned": true,
  
        "network_card": "vmxnet3",
  
        "iso_urls": "http://cdimage.ubuntu.com/releases/18.04/release/ubuntu-18.04.2-server-amd64.iso",
        "iso_checksum": "a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5",
        "iso_checksum_type": "sha256",

        "floppy_files": [
          "./preseed.cfg"
        ],
        "boot_command": [
          "<enter><wait><f6><wait><esc><wait>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
          "<bs><bs><bs>",
          "/install/vmlinuz",
          " initrd=/install/initrd.gz",
          " priority=critical",
          " locale=en_US",
          " file=/media/preseed.cfg",
          "<enter>"
        ]
      }
    ],
  
    "provisioners": [
      {
        "type": "shell",
        "inline": ["echo 'template build complete'"]
      }
    ]
  }

You shouldn’t have to change much in here if you are just doing a simple build. As new versions of Ubuntu are released, you may need to update the iso_urls and iso_checksum fields, but this template should be sufficient.
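When a new point release lands, Ubuntu publishes a SHA256SUMS file alongside the ISOs (for 18.04 that’s http://cdimage.ubuntu.com/releases/18.04/release/SHA256SUMS), and the new iso_checksum value can be pulled straight out of it. The heredoc below is an offline stand-in for that file, carrying the checksum already pinned in ubuntu-18.json:

```shell
# Stand-in for the downloaded SHA256SUMS file (one line shown)
cat > /tmp/SHA256SUMS <<'EOF'
a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5 *ubuntu-18.04.2-server-amd64.iso
EOF
# Grab the hash for the server ISO - this is the value for iso_checksum
awk '/server-amd64.iso/ {print $1}' /tmp/SHA256SUMS
```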

Packer Build!

With these in place, we can go ahead and start our build process with the following command

sudo packer build -var-file variables.json ubuntu-18.json

When the build starts, you’ll see the Ubuntu 18 ISO start to download into the cache location…

Once that download completes, the real magic will start as Packer builds a VM on your host and starts the provisioning process!

If you launch the vSphere console and look at the VM, you can see it running through its build process using the preseed method

Conclusion

With any luck, after about 10-15 minutes, the build will complete and will automatically mark itself as a template!

In my lab, I’ve wired this up to VMware Code Stream in our new SaaS-based automation platform. You can see a screenshot of one of the dashboard tiles below showing the time to completion. 11 minutes and 12 seconds ain’t bad!

Code Stream is VMware’s pipelining and release management platform. Historically it’s been used for software releases (heavily within VMware, I might add…), but in Cloud Automation Services its capabilities have been drastically overhauled and refreshed in favor of not just software releases but also infrastructure pipelining. It’s got hooks into Kubernetes, various Git providers, Orchestrator, Cloud Assembly, and much more. It even powers updating this blog when I drop new hotness!

I’m working on a post for the VMware Cloud Management blog on how to build, test, and execute this pipeline. Expect it this week (hopefully!). In it, you’ll see how I created a Cloud Assembly blueprint that spins up an ephemeral build VM, builds the new Ubuntu 18 template, and pushes it into vSphere, along with a pipeline mapped to a GitHub repository to drive the execution. That pipeline will execute the build in Cloud Assembly, notify me on completion, and destroy the build VM after the fact. Stay tuned!