Deploying Kubernetes ClusterAPI on vSphere


The Next Generation of Deploying Kubernetes

The Cluster API Kubernetes SIG aims to drastically simplify Kubernetes cluster lifecycle management in a given cloud environment by providing a declarative API on top of the common functions you would normally perform against a cluster. In the case of today's vAdmin (today being a time before Project Pacific), we look to the Cluster API provider for vSphere (known as CAPV for short).

It's important to note that Cluster API is a huge part of Project Pacific. Project Pacific aims to make Kubernetes clusters and their resources first-class citizens inside of vCenter/vSphere, while Cluster API provides a standardized method of working with cluster lifecycle operations. For example, Guest Clusters in Project Pacific use Cluster API for their lifecycle management.

For more information about Project Pacific, I highly recommend reading the official announcement and deep-dive posts.

Back to Cluster API

I had tried to take this for a spin before, but due to shifting priorities - I never had the time to really debug some of the issues in my environment that were holding me back. With my transition over to the Cloud Native Apps group this week, I felt like it was a good time to dive in and figure out what was going on. I jumped into the Kubernetes Slack - and as luck would have it, my issue was at the forefront of the #cluster-api-vsphere channel - more on that later.

Myles Gray did a write-up of the setup process previously, and it's by far the best one I've seen so far. With the speed of the project (CCM and CSI support was added just this week!), some of the steps are slightly out of date.

I thought it would be a good idea to throw down my findings throughout the process.

Prerequisites

There are a few things we'll need to have in place before we get started. I'm not going to go through all of the steps for these, as they should be pretty straightforward (a quick install sketch for the workstation tooling follows the list):

  • Import the CAPV image into vCenter as a template - The images for this can be found on the main CAPV GitHub page here. There’s currently a bug with the CentOS 7 image, so I used the Ubuntu 18.04 one.
  • Install KIND on your workstation - Steps for installing can be found here
  • Install and configure clusterctl (also able to be found on the CAPV GitHub here)
  • Install kubectl - Steps documented here
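
If you're on a Mac like me with Homebrew available, the workstation tooling installs look roughly like the sketch below. This is just a starting point - the clusterctl binary ships with the CAPV releases on GitHub, and the exact download URL and version will change as the project moves, so check the linked pages for the current steps.

brew install kubernetes-cli   # kubectl
brew install kind             # Kubernetes-in-Docker
# clusterctl: download the binary for your platform from the CAPV releases
# page, then make it executable and drop it on your PATH
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl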

Getting Started

Once all of this is in place, we can get started with the actual CAPV installation. A lot of these steps are duplicated from the actual getting started guide - I’ll do my best to call out the gotchas I had along the way.

Gotcha 1 - Importing the Template

When I imported the template, I originally placed it on one of my iSCSI datastores. I recently added a second single-node cluster to my vSphere environment, and have a 1-node vSAN configuration running there. I imported the OVA as a template a second time, and named it the same. This caused issues. Ensure your templates are uniquely named (a best practice anyway).

Create Your Environment Variables

We'll need to create a text file to store the environment variables for our vCenter and CAPV installation. The below is taken straight from the Getting Started guide, updated with sample variables for my environment -

$ cat <<EOF >envvars.txt
# vCenter config/credentials
export VSPHERE_SERVER='hlcorevc01.humblelab.com'
export VSPHERE_USERNAME='administrator@vsphere.local'
export VSPHERE_PASSWORD='VMware123!'

# vSphere deployment configs
export VSPHERE_DATACENTER='SDDC-Datacenter'
export VSPHERE_DATASTORE='DefaultDatastore'
export VSPHERE_NETWORK='vm-network-1'
export VSPHERE_RESOURCE_POOL='ClusterAPI'
export VSPHERE_FOLDER='ClusterAPI Resources'
export VSPHERE_TEMPLATE='ubuntu-1804-kube-v1.15.3'
export VSPHERE_DISK_GIB='50'
export VSPHERE_NUM_CPUS='2'
export VSPHERE_MEM_MIB='2048'
export SSH_AUTHORIZED_KEY='ssh-rsa mysshkeyisreallylong'

# Kubernetes configs
export KUBERNETES_VERSION='1.15.3'
export SERVICE_CIDR='100.64.0.0/13'
export CLUSTER_CIDR='100.96.0.0/11'
export SERVICE_DOMAIN='cluster.local'
EOF

Running this command from a shell prompt (like on my Mac) will produce an envvars.txt file that holds all of these items.
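
A quick sanity check (entirely optional) before handing this file to the manifest generator is to dump it back out, minus the password line, and confirm the values match your environment:

grep -v PASSWORD ./envvars.txt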

Note: While in the Kubernetes CAPV Slack channel, I did see a few users talking about using their vCenter IP address; however, since the vCenter certificate didn't have the IP address specified as part of the cert, validation was failing. Best case - use a valid cert on your vCenter. That being said - we all homelab, so make sure you're using whatever is in your cert as the connection details :)
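
If you're not sure which names your vCenter certificate actually covers, one way to check is to pull the Subject Alternative Name entries off the cert with openssl (using my hostname from the envvars above as the example):

echo | openssl s_client -connect hlcorevc01.humblelab.com:443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"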

Building Your Environment

We need a set of manifest files for clusterctl to use to bootstrap our CAPV host in vCenter. These manifest files will hold all the configuration details for the varying hosts involved in the Cluster API process.

Gotcha 2 - Stale Manifest Image

These manifest files change as the various parts of Cluster API are updated (which happens quite often right now). I ran into an issue where, because I had tested CAPV previously, my local copy of the releases/manifests image was out of date. To fix this, you can either do a docker pull gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.2-beta.0 or simply remove your existing image using docker rmi gcr.io/cluster-api-provider-vsphere/release/manifests:latest to force it to re-download.
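
For copy/paste convenience, those two options look like this - either works, the goal is just to make sure the manifests image you run matches a current release:

# Pull the pinned release tag explicitly...
docker pull gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.2-beta.0

# ...or remove the stale "latest" image so the next run re-downloads it
docker rmi gcr.io/cluster-api-provider-vsphere/release/manifests:latest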

Issue the following command to generate the manifests that are needed. This command will mount your envvars.txt file into the manifest generator and output the necessary files into an ./out/name-of-cluster directory. In this case, we've used the -c flag to set the name of the cluster to clusterapi-mgt-humblelab. I've also specified the manifests image I wanted to use.

docker run --rm \
  -v "$(pwd)":/out \
  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
  gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.2-beta.0 \
  -c clusterapi-mgt-humblelab

You should be able to navigate to the ./out/clusterapi-mgt-humblelab directory to see the YAML files that have been generated (which were also output onto the screen).

Generated ./out/clusterapi-mgt-humblelab/addons.yaml
Generated ./out/clusterapi-mgt-humblelab/cluster.yaml
Generated ./out/clusterapi-mgt-humblelab/controlplane.yaml
Generated ./out/clusterapi-mgt-humblelab/machinedeployment.yaml
Generated /build/examples/default/provider-components/provider-components-cluster-api.yaml
Generated /build/examples/default/provider-components/provider-components-kubeadm.yaml
Generated /build/examples/default/provider-components/provider-components-vsphere.yaml
Generated ./out/clusterapi-mgt-humblelab/provider-components.yaml
WARNING: ./out/clusterapi-mgt-humblelab/provider-components.yaml includes vSphere credentials

We’ll be back to these files shortly.

Bootstrapping the Cluster

With the files in hand, we can initiate the bootstrap operation. I find this part especially cool, as it spins up a KIND cluster (Kubernetes-in-Docker) that acts as a temporary management plane to deploy out the Cluster API management plane to vCenter. Once that single-host management cluster comes up, the management plane is pivoted over, and the node living in your vCenter becomes the management host.

The bootstrapping of the management cluster is different from the creation of workload clusters as clusterctl will ingest our manifests as part of the create cluster command. What’s interesting though is that the same manifest files are used to create the management plane. One of the key drivers of the Cluster API project was to standardize the way clusters are created and managed. The fact that management cluster manifests and workload manifests are largely the same supports that concept.

We use clusterctl to complete this operation.

clusterctl create cluster \
  --bootstrap-type kind \
  --bootstrap-flags name=clusterapi-mgt-humblelab \
  --cluster ./out/clusterapi-mgt-humblelab/cluster.yaml \
  --machines ./out/clusterapi-mgt-humblelab/controlplane.yaml \
  --provider-components ./out/clusterapi-mgt-humblelab/provider-components.yaml \
  --addon-components ./out/clusterapi-mgt-humblelab/addons.yaml \
  --kubeconfig-out ./out/clusterapi-mgt-humblelab/kubeconfig

As mentioned above, this command creates a KIND cluster that holds the management components for Cluster API. You can actually use KIND to grab the kubeconfig for this bootstrap cluster and observe its contents as the bootstrap executes. This is useful for troubleshooting issues with CAPV and will ultimately need to be done if you need to open an issue on GitHub for any reason.
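
As a rough sketch of what that looks like while the bootstrap is running (the exact syntax depends on your kind version - newer releases use kind get kubeconfig, older ones used kind get kubeconfig-path):

kind get kubeconfig --name clusterapi-mgt-humblelab > ./bootstrap-kubeconfig
kubectl --kubeconfig ./bootstrap-kubeconfig get pods -A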

If you monitor vCenter during this build, you should see the image you imported earlier clone into a clusterapi-mgt-humblelab-controlplane-0 VM (or whatever you chose to name it).

Once completed, you'll receive a message indicating that your kubeconfig is stored at the --kubeconfig-out path specified in the earlier command. You can see a sample from my environment below. A lot more is displayed than this - but I'm showing the good stuff :)

I1003 22:27:29.165432   51442 applymachines.go:44] Creating machines in namespace "default"
I1003 22:33:59.192989   51442 clusterdeployer.go:110] Creating target cluster
I1003 22:33:59.267866   51442 applyaddons.go:25] Applying Addons
I1003 22:34:01.812666   51442 clusterdeployer.go:128] Pivoting Cluster API stack to target cluster
I1003 22:34:01.812743   51442 pivot.go:76] Applying Cluster API Provider Components to Target Cluster
I1003 22:34:05.464659   51442 pivot.go:81] Pivoting Cluster API objects from bootstrap to target cluster.
I1003 22:34:46.209632   51442 clusterdeployer.go:133] Saving provider components to the target cluster
I1003 22:34:47.728127   51442 clusterdeployer.go:155] Creating node machines in target cluster.
I1003 22:34:47.736108   51442 applymachines.go:44] Creating machines in namespace "default"
I1003 22:34:47.736128   51442 clusterdeployer.go:169] Done provisioning cluster. You can now access your cluster with kubectl --kubeconfig ./out/clusterapi-mgt-humblelab/kubeconfig
I1003 22:34:47.736744   51442 createbootstrapcluster.go:36] Cleaning up bootstrap cluster.

We're done with clusterctl now; we'll be using kubectl and the management node moving forward, the same as we would with any other Kubernetes cluster. We'll export our new KUBECONFIG and roll forward…

export KUBECONFIG=./out/clusterapi-mgt-humblelab/kubeconfig

Note that in my case I'm using a path relative to the directory I'm in. Keep that in mind when using copy and paste :)

If you run a kubectl --kubeconfig ./out/clusterapi-mgt-humblelab/kubeconfig get pods -A, you can see all of the pods that are running in this management cluster. Using kubectl logs name-of-pod -n namespace will allow you to see the logs for a given pod. Again, this is useful for troubleshooting any issues that might come up along the way.
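
For copy/paste convenience, those commands look like this (with KUBECONFIG exported above, the --kubeconfig flag is technically optional):

kubectl --kubeconfig ./out/clusterapi-mgt-humblelab/kubeconfig get pods -A
kubectl logs name-of-pod -n namespace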

Deploying Workload Clusters

At this point we have our management plane up and running and can begin deploying clusters.

Note: Today, CAPV doesn’t support HA masters. The load-balancing space in the provider is an evolving thing - so until that’s solved, we’ll be dealing with single-master configurations.

We’ll use our manifest generation command again to quickly get us a set of workload cluster manifests. In my environment, I’ll use the command below…

docker run --rm \
  -v "$(pwd)":/out \
  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
  gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.2-beta.0 \
  -c humblelab-k8s-workload-1

If all went as expected, you should see the below output (or something similar)

 codydearkland@MacBook-Pro> docker run --rm \
  -v "$(pwd)":/out \
  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
  gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.2-beta.0 \
  -c humblelab-k8s-workload-1
Unable to find image 'gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.2-beta.0' locally
v0.5.2-beta.0: Pulling from cluster-api-provider-vsphere/release/manifests
fc7181108d40: Already exists
869df5e15cd7: Pull complete
f9485c4e0620: Pull complete
71b0c8654d91: Pull complete
9e2edc2b6d62: Pull complete
46df2840d0d1: Pull complete
166680701bbc: Pull complete
b61497db71de: Pull complete
Digest: sha256:c6313f55137d9cc81cdfa5747a937a3d836ad961a065893babda2b09184a4fcf
Status: Downloaded newer image for gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.2-beta.0
Generated ./out/humblelab-k8s-workload-1/addons.yaml
Generated ./out/humblelab-k8s-workload-1/cluster.yaml
Generated ./out/humblelab-k8s-workload-1/controlplane.yaml
Generated ./out/humblelab-k8s-workload-1/machinedeployment.yaml
Generated /build/examples/default/provider-components/provider-components-cluster-api.yaml
Generated /build/examples/default/provider-components/provider-components-kubeadm.yaml
Generated /build/examples/default/provider-components/provider-components-vsphere.yaml
Generated ./out/humblelab-k8s-workload-1/provider-components.yaml
WARNING: ./out/humblelab-k8s-workload-1/provider-components.yaml includes vSphere credentials

The files we care the most about at this point are…

./out/humblelab-k8s-workload-1/cluster.yaml
./out/humblelab-k8s-workload-1/controlplane.yaml
./out/humblelab-k8s-workload-1/machinedeployment.yaml
./out/humblelab-k8s-workload-1/addons.yaml

Applied in this order, these will get us a workload cluster delivered by Cluster API. What's also great about this approach is that if we want to deploy another cluster - it's as simple as generating a new set of manifests and updating them with the values we want. Using the manifest generation command from earlier, we can pull in values that we set in our envvars.txt (or whatever you named the file). If you want to change those default values - it's as easy as editing the YAML files directly, saving them, and then applying them to the Cluster API management node.
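
As a sketch of the flow we'll walk through below - note that the addons step runs against the workload cluster's kubeconfig once we've retrieved it, not against the management cluster:

# Applied against the management cluster (KUBECONFIG exported earlier)
kubectl apply -f ./out/humblelab-k8s-workload-1/cluster.yaml
kubectl apply -f ./out/humblelab-k8s-workload-1/controlplane.yaml
kubectl apply -f ./out/humblelab-k8s-workload-1/machinedeployment.yaml

# Applied later, against the workload cluster itself
kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig apply -f ./out/humblelab-k8s-workload-1/addons.yaml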

There's A LOT of information in these files that allows Cluster API to do its magic. In the future, we can deep dive more into that (if Scott Lowe, who knows infinitely more than me about it, doesn't beat me to it first!) but for now, we'll do a high-level overview of the purpose of these files.

cluster.yaml

The cluster.yaml contains the resource definitions for the actual cluster. This file contains the secrets for authenticating to vSphere as well as the various provider images that will be attached. For example, the CSI driver definitions live in this file. We also define our networking parameters for the cluster here. Note - we do NOT apply our CNI plugin at this point (i.e. Calico). We can think of this file as the foundation needed to deploy the actual infrastructure to the environment for our workload cluster.

controlplane.yaml

The controlplane.yaml file contains the resource definitions for the control plane - the master node(s) of the cluster. As mentioned previously, Cluster API on vSphere doesn't currently support HA masters - but if we were to try and deploy multiple masters, they would be defined in this file. We can apply specific kubeadm parameters in this file to customize the characteristics of the cluster that's deployed, as well as the image repositories we want to use.

From an actual deployed-machine standpoint, we also configure the template to use, along with sizing, network, and SSH key information, in this file.

machinedeployment.yaml

While the controlplane manifest is specific to the control plane, the machinedeployment.yaml holds the configuration details for the worker nodes in the cluster. Here we set the number of nodes to deploy (the "replicas" key), their sizing profiles, network details, and authentication credentials. It's all the same as the controlplane.yaml; but again, instead of the masters - this deploys our worker nodes.

Something truly awesome about this file is what happens when you want to expand an existing workload cluster. In the Ansible (Wardroom) workflow I mention in the wrap-up below, I have to deploy out my workload machine, add it to the group in Ansible, and rerun the playbook. Not a massively labor-intensive task - but certainly not as simple as changing "replicas: 3" to "replicas: 4" and reapplying the manifest. In this example, Cluster API will see our desired state, automatically deploy the additional node, and join it to the workload cluster in question. In addition to that - it'll be deployed in a standardized way, the same way all other nodes were. Cluster lifecycle becomes as easy as updating desired state and applying the manifest.

# Below only represents a small portion of the machinedeployment.yaml file
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: workload-cluster-1
  name: workload-cluster-1-md-0
  namespace: default
spec:
  replicas: 4

addons.yaml

The addons.yaml file gives us a standardized way to deploy additional components to our clusters. In the generated manifest examples, the Calico CNI is applied. Without applying some form of CNI, many of the necessary pods won't start, and our worker nodes will never enter a Ready state.
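
If you want to confirm what the generated addons.yaml will install before applying it, a quick grep of the container images does the trick - with the default templates you should see the Calico images listed:

grep "image:" ./out/humblelab-k8s-workload-1/addons.yaml | sort -u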

Applying the Workload Cluster

With our manifests generated for our humblelab-k8s-workload-1 cluster, and our kubeconfig exported from earlier, we can start applying our manifests to create our workload cluster. We’ll start off by applying both our cluster.yaml and controlplane.yaml files.

codydearkland@Codys-MBP$ kubectl apply -f ./out/humblelab-k8s-workload-1/cluster.yaml
cluster.cluster.x-k8s.io/humblelab-k8s-workload-1 created
vspherecluster.infrastructure.cluster.x-k8s.io/humblelab-k8s-workload-1 created
codydearkland@Codys-MBP$ kubectl apply -f ./out/humblelab-k8s-workload-1/controlplane.yaml
kubeadmconfig.bootstrap.cluster.x-k8s.io/humblelab-k8s-workload-1-controlplane-0 created
machine.cluster.x-k8s.io/humblelab-k8s-workload-1-controlplane-0 created
vspheremachine.infrastructure.cluster.x-k8s.io/humblelab-k8s-workload-1-controlplane-0 created

Our first sets of resources have been created, and have come up in vCenter!

In the screenshot above we can see our management cluster we created earlier, as well as our new master node for our workload cluster. Let’s continue to apply…

By default, the replicas value is set to 1 in the machinedeployment.yaml. Let's set this to 2.

# Below only represents a small portion of the machinedeployment.yaml file
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: humblelab-k8s-workload-1
  name: humblelab-k8s-workload-1-md-0
  namespace: default
spec:
  replicas: 2

We’ll run our kubectl apply against the machinedeployment.yaml, and observe the results!

codydearkland@Codys-MBP$ kubectl apply -f ./out/humblelab-k8s-workload-1/machinedeployment.yaml
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/humblelab-k8s-workload-1-md-0 created
machinedeployment.cluster.x-k8s.io/humblelab-k8s-workload-1-md-0 created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/humblelab-k8s-workload-1-md-0 created

We can see the resources created, and if we look in vCenter, we’ll see our additional nodes coming up!

Now, you might be wondering at this point, "How do I access this cluster?". The cluster's kubeconfig is actually stored as a secret in the Cluster API management cluster. We can use the following command to dump our kubeconfig out to a file (again, taken from the CAPV getting started guide!)

kubectl get secret humblelab-k8s-workload-1-kubeconfig -o=jsonpath='{.data.value}' | \
{ base64 -d 2>/dev/null || base64 -D; } >./out/humblelab-k8s-workload-1/kubeconfig

It’s important to note that every time you create a cluster using Cluster API the kubeconfig will be stored as a secret in the management cluster.
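
The secrets follow a name-of-cluster-kubeconfig naming pattern, so listing them from the management cluster is a quick way to see which workload clusters you have kubeconfigs for:

kubectl get secrets | grep kubeconfig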

Now, if we issue the following command…

kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig get pods -A

You’ll notice several of our pods are in a pending state.

NAMESPACE     NAME                                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-mrzq2                                          0/1     Pending   0          33m
kube-system   coredns-5c98db65d4-pn2vg                                          0/1     Pending   0          33m
kube-system   etcd-humblelab-k8s-workload-1-controlplane-0                      1/1     Running   0          32m
kube-system   kube-apiserver-humblelab-k8s-workload-1-controlplane-0            1/1     Running   0          33m
kube-system   kube-controller-manager-humblelab-k8s-workload-1-controlplane-0   1/1     Running   0          33m
kube-system   kube-proxy-bw277                                                  1/1     Running   0          33m
kube-system   kube-proxy-q9mzn                                                  1/1     Running   0          11m
kube-system   kube-proxy-xk72n                                                  1/1     Running   0          11m
kube-system   kube-scheduler-humblelab-k8s-workload-1-controlplane-0            1/1     Running   0          32m
kube-system   vsphere-cloud-controller-manager-r22hw                            1/1     Running   0          30m
kube-system   vsphere-csi-controller-0                                          0/5     Pending   0          30m

And if we issue a get nodes, we'll see the nodes are in a NotReady state

codydearkland@Codys-MBP$ kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig get nodes
NAME                                      STATUS     ROLES    AGE   VERSION
humblelab-k8s-workload-1-controlplane-0   NotReady   master   35m   v1.15.3
humblelab-k8s-workload-1-md-0-bf99h       NotReady   <none>   13m   v1.15.3
humblelab-k8s-workload-1-md-0-j7ctq       NotReady   <none>   13m   v1.15.3

We can fix this by applying our addons.yaml. Up until now, all of the manifests we've applied have gone directly to the Cluster API management cluster. This one needs to be applied to the workload cluster we created.

kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig apply -f ./out/humblelab-k8s-workload-1/addons.yaml

Once we apply that, we'll see a number of messages scroll by indicating that all of the components that make Calico functional are being applied to our cluster.
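
If you want to watch this happen, keep an eye on the workload cluster's nodes - once Calico is up, they'll flip from NotReady to Ready:

kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig get nodes -w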

After a few minutes, we can run our get pods again and see much happier results…

NAMESPACE     NAME                                                              READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7587c6499f-bjmx5                          1/1     Running   0          79s
kube-system   calico-node-bq8xg                                                 1/1     Running   0          80s
kube-system   calico-node-dnbd2                                                 1/1     Running   0          80s
kube-system   calico-node-mv7gq                                                 1/1     Running   0          80s
kube-system   coredns-5c98db65d4-mrzq2                                          1/1     Running   0          42m
kube-system   coredns-5c98db65d4-pn2vg                                          1/1     Running   0          42m
kube-system   etcd-humblelab-k8s-workload-1-controlplane-0                      1/1     Running   0          41m
kube-system   kube-apiserver-humblelab-k8s-workload-1-controlplane-0            1/1     Running   0          41m
kube-system   kube-controller-manager-humblelab-k8s-workload-1-controlplane-0   1/1     Running   0          41m
kube-system   kube-proxy-bw277                                                  1/1     Running   0          42m
kube-system   kube-proxy-q9mzn                                                  1/1     Running   0          20m
kube-system   kube-proxy-xk72n                                                  1/1     Running   0          20m
kube-system   kube-scheduler-humblelab-k8s-workload-1-controlplane-0            1/1     Running   0          41m
kube-system   vsphere-cloud-controller-manager-r22hw                            1/1     Running   0          39m
kube-system   vsphere-csi-controller-0                                          5/5     Running   0          39m
kube-system   vsphere-csi-node-4jvjc                                            3/3     Running   0          65s
kube-system   vsphere-csi-node-6crdj                                            3/3     Running   0          50s
kube-system   vsphere-csi-node-867r8                                            3/3     Running   0          64s

Of specific excitement are the bottom five pods that have been deployed: the vSphere CCM and CSI drivers, which enable the cluster to directly leverage vSphere constructs for persistent storage. We can verify that this is actually working by issuing the following command…

kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig describe nodes | grep "ProviderID"

If all things are happy, you should see a return like the below, which indicates that the nodes have been registered with vSphere. Note that this functionality is present in vSphere 6.7 U3.

 codydearkland@Codys-MBP$ kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig describe nodes | grep "ProviderID"
ProviderID:                  vsphere://422ed7c5-d7e5-a039-ea86-8497ebb8d1be
ProviderID:                  vsphere://422e1b9c-0e98-62be-a0f1-61ac1645242b
ProviderID:                  vsphere://422ea47d-7158-9278-7204-c7a9c53db7bd

Scaling Our Workload Cluster

At this point, we have a fully functional cluster. We can apply workloads to it, load-balance it using MetalLB (Sam McGeown did a great post about configuring/using MetalLB), set up Contour, all that good stuff!

Before we do too much, however - earlier we talked about how easy it is to scale our cluster. We originally set our replicas value to 2 in the machinedeployment.yaml. Let's bump that to 4, reapply the manifest, and see what happens.

# Below only represents a small portion of the machinedeployment.yaml file
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: humblelab-k8s-workload-1
  name: humblelab-k8s-workload-1-md-0
  namespace: default
spec:
  replicas: 4
codydearkland@Codys-MBP$ kubectl apply -f ./out/humblelab-k8s-workload-1/machinedeployment.yaml
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/humblelab-k8s-workload-1-md-0 unchanged
machinedeployment.cluster.x-k8s.io/humblelab-k8s-workload-1-md-0 configured
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/humblelab-k8s-workload-1-md-0 unchanged

As you can see above, our MachineDeployment resource was reconfigured with the new value. If we switch into vCenter, we can see our 2 new nodes being provisioned.
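
You can also watch the scale-out from the Cluster API side by listing the Machine objects in the management cluster (our exported KUBECONFIG still points there) - a nice sanity check alongside vCenter:

kubectl get machinedeployments
kubectl get machines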

And finally, after a few moments, we can issue a get nodes command to see the new nodes joined to our cluster. We can also issue our describe nodes command and grep for ProviderID to see that these new nodes have been registered against the vSphere CCM.

codydearkland@Codys-MBP$ kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig get nodes
NAME                                      STATUS   ROLES    AGE   VERSION
humblelab-k8s-workload-1-controlplane-0   Ready    master   56m   v1.15.3
humblelab-k8s-workload-1-md-0-67f85       Ready    <none>   96s   v1.15.3
humblelab-k8s-workload-1-md-0-bf99h       Ready    <none>   33m   v1.15.3
humblelab-k8s-workload-1-md-0-j7ctq       Ready    <none>   33m   v1.15.3
humblelab-k8s-workload-1-md-0-szdqp       Ready    <none>   96s   v1.15.3
codydearkland@Codys-MBP$ kubectl --kubeconfig ./out/humblelab-k8s-workload-1/kubeconfig describe nodes | grep "ProviderID"
ProviderID:                  vsphere://422ed7c5-d7e5-a039-ea86-8497ebb8d1be
ProviderID:                  vsphere://422e1a8e-b770-73f9-1818-16144ca279bf
ProviderID:                  vsphere://422e1b9c-0e98-62be-a0f1-61ac1645242b
ProviderID:                  vsphere://422ea47d-7158-9278-7204-c7a9c53db7bd
ProviderID:                  vsphere://422ece5e-fce7-bc18-145c-7684536de4a6

Wrapping Up

I spent a lot of hours tweaking the Heptio Wardroom Ansible playbooks to work the way I wanted in my lab, including the configurations for the vSphere CCM and CSI plugins as well as the general updates to bump to newer versions of Kubernetes. Those playbooks still have a very real place, because they are designed to deploy a highly available Kubernetes cluster that is conformant to CNCF standards.

That being said - it’s incredibly exciting to see how easy it is to leverage Cluster API to deploy clusters into an environment, and to get a glimpse into the future of what standard Kubernetes deployments will look like. The infrastructure aspect becomes part of the declarative state of my environment. I tell Cluster API I want another Kubernetes cluster, and it decides what it needs to do to get me that cluster.

Cluster API is going to be powering a number of really important things in the VMware portfolio in the near future, and it’s pretty exciting to see how far it’s come along at this point. It’s also incredible to see the rate of change week over week and the new functionality coming through.

Thanks for taking the time to read this incredibly long post, and my first post in my new role with the Cloud Native Applications Business Unit!