KIND and Load Balancing with MetalLB on Mac


Introduction

Obviously with the new role - I’m doing a lot in the world of Kubernetes (I should hope so, otherwise I’m doing way the wrong job…). A lot of times, I’ll want to test something out against a simple cluster. Minikube is generally “OK” as far as generating a single node Kubernetes cluster locally - but I really like the ability to work against multiple nodes for some cases (i.e. I was doing some experimentation with daemonsets recently). Enter KIND! KIND (or Kubernetes-in-Docker) is one of the Kubernetes SIG (Special Interest Group) projects, and represents a tool for deploying Kubernetes clusters inside of Docker. Super useful for doing Kubernetes things locally!

A ways back, I had discovered MetalLB as a method for getting an easy load balancer on-premises for Kubernetes. In the public cloud world - getting LoadBalancer services exposed out of a cluster is pretty easy. Likewise, when using VMware PKS - it’s also easy. PKS requires NSX-T however, which at times can be a bit heavy (putting it mildly) for more lightweight clusters - and ultimately operating locally (i.e. on a MacBook) is just out of the question when it comes to PKS. Again, all of this brought me back to MetalLB as an option. To make things even more interesting - my good buddy Sam McGeown did a blog post around using MetalLB with Contour. I used that on several cluster builds in my lab so I figured that would translate over nicely!

The problem I found, however, is with how macOS handles Docker. Immediately upon starting to research, I found that the Great and Powerful Duffie Cooley had done a blog on just this topic, but from the Linux point of view. In the Linux world, the docker0 bridge network is directly connected - allowing you to interact from a network perspective seamlessly. On macOS, the Docker platform is actually a type of virtual machine living inside Hyperkit. This interface is exposed very differently and you can’t directly access it. In reality, on macOS you’re accessing Docker via an exposed socket.

Typically this isn’t a huge deal. We can (and let me be totally clear, absolutely should, if at all possible) use kubectl to proxy connections to specific ports. This is what it’s there for, and anything I’m going to talk about in this post is hacky at best. Be warned!
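For illustration - this is roughly what that looks like (the service name, namespace, and ports below are placeholders, not anything specific to this post):

# Forward a local port straight to a service inside the cluster
kubectl port-forward svc/my-service -n my-namespace 8080:80

# Or proxy the Kubernetes API itself and reach services through it
kubectl proxy --port=8001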

(Note: Another option is deploying a VPN into your Kubernetes environment and then connecting to that VPN to hit the Cluster networks.)

That being said, what happens when there are several services you want to expose? You can put them behind some sort of ingress and proxy that ingress address, but I can absolutely see times where you’d want multiple load balancers spun up against a KIND cluster. Fortunately, I wasn’t the only one looking at how to do this, and someone else far smarter solved it!

The Solution

Ultimately what was needed was a way to hit the docker0 bridge network. Hyperkit supports this functionality through a specific set of additional arguments used during the creation of the machine. This isn’t possible out of the box since it’s actually Docker that’s creating the machine, and the commands are hard-coded in that way. While digging - I discovered a GitHub project that was working on this specific use case for Docker - docker-tuntap-osx.

This shim allows a bridge network to be created between the host and the guest machine. Subsequently, a gateway address is created that can then be routed against to hit cluster services inside the Docker networks.

There are caveats however…

  • It’s hacky and unsupported, and you should use kubectl proxy if at all possible
  • Every time your machine restarts you’ll need to reapply the shim and restart docker
  • I found I had to remove the static route and re-add it after periods of non-use. The route would still be there, but it suddenly wouldn’t work.

Let’s dive in!

Getting Started

All in all, this is a pretty quick thing to pull off. In order to knock this out, we’re going to do the following:

  • Clone down the repo I covered above - AlmirKadric-Published/docker-tuntap-osx
  • As mentioned in the instructions within that GitHub repo, use brew to install tuntap (brew tap caskroom/cask followed by brew cask install tuntap). You may need a restart after this
  • Exit out of Docker for Mac
  • Once these 2 things are complete, we can execute the shell script, ./sbin/docker_tap_install.sh. It’s important to NOT execute this command with sudo. If you execute it with sudo, the interface will be created under the root user, and the functionality will not work.
  • Once the tap is installed, we will bring the interface up
  • We can assign a static route against the gateway on that interface to provide routing into our KIND environment, and ultimately MetalLB.
  • Finally - we’ll install/configure MetalLB into our Kubernetes cluster

As usual, you should always be wary about executing arbitrary scripts. I’d highly recommend reviewing the script to ensure you’re comfortable with what it’s doing.

Execute the ./sbin/docker_tap_install.sh script

./sbin/docker_tap_install.sh
Installation complete
Restarting Docker
Process restarting, ready to go

Once Docker finishes restarting, you can grep your interfaces looking for tap to see that the tap interface has been created.

ifconfig | grep "tap"
tap1: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500

With this in place, we can run the interface “up” script in order to bring the network up. Note: you’ll want to modify this script if you want to change the gateway address that comes up. You’ll be using this address to assign your static routes against.
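For reference, the host side of that “up” step boils down to something like this (a rough sketch based on the default 10.0.75.x addressing you’ll see below - the real script also configures the matching address on the Docker VM side, so edit the script itself rather than running this directly):

# Assign the default gateway address to the tap interface and bring it up
# (sketch of what docker_tap_up.sh does on the host side)
sudo ifconfig tap1 10.0.75.1 netmask 255.255.255.252 up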

Execute ./sbin/docker_tap_up.sh and run ifconfig. If we scroll to the last interface, it should be tap1, and you should see the network assigned

tap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    ether 12:68:9b:00:c2:22
    inet 10.0.75.1 netmask 0xfffffffc broadcast 10.0.75.3
    media: autoselect
    status: active
    open (pid 11096)

Finally, we’ll add our static route to what will eventually be our MetalLB network…

sudo route -v add -net 172.17.255.0 -netmask 255.255.255.0 10.0.75.2
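If you ever want to confirm the route is in place (remember the caveat above about routes going stale), you can check it with:

# Show the routing table entries for the MetalLB network
netstat -rn | grep 172.17

# Or query the specific route macOS would use for a load balancer address
route -n get 172.17.255.1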

With this configured, we should be ready to set up our cluster and MetalLB!

Deploying our Cluster with KIND

Eric Shanks (who is actually in the process of joining our Kubernetes architect team here at VMware) dropped a blog post around A Kind Way to Learn Kubernetes. It’s a great read on the ins and outs of getting KIND up and running. Knowing that that’s there to read - I’m going to be pretty brief in how to get our cluster up and running.

cat << EOF > config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
kind create cluster --config config.yaml

If all goes well, you should see results similar to below…

kind create cluster --config config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.15.3) 🖼
 ✓ Preparing nodes 📦📦📦📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info

And finally if we export our kubeconfig and run a get nodes we can see our cluster is up and running

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   2m30s   v1.15.3
kind-worker          Ready    <none>   116s    v1.15.3
kind-worker2         Ready    <none>   116s    v1.15.3
kind-worker3         Ready    <none>   116s    v1.15.3

Great, our cluster is up and running! Let’s get MetalLB set up!

Configuring MetalLB

MetalLB has a great set of documentation for getting started.

We’ll simply execute the following command (with our kubeconfig exported)

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml

This should ultimately create a number of resources within your cluster; you can run a get pods against the metallb-system namespace (kubectl get pods -n metallb-system) to see the resulting pods.
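Roughly speaking, you should end up with a single controller pod plus speaker pods (a DaemonSet, so one per node it can schedule to) - something along these lines, with names, hashes, and ages obviously differing on your cluster:

kubectl get pods -n metallb-system
NAME                READY   STATUS    RESTARTS   AGE
controller-<hash>   1/1     Running   0          60s
speaker-<hash>      1/1     Running   0          60s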

With these resources created, we’ll now need to set up the actual configuration by deploying a configmap. In MetalLB, we can deploy our load balancing configuration in either Layer 2 mode or BGP mode. Since we’re doing this all locally, it doesn’t really make sense for us to set up BGP peering. We’ll rock us some L2.

Earlier, when we defined our static route, you’ll notice I used the 172.17.255.0 network as our load balancer network. Our static route is set up to send any requests to that network through the tap interface we configured.

Create and apply the following configmap

cat << EOF > metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.17.255.1-172.17.255.200
EOF
kubectl create -f metallb-config.yaml

And our cluster should be ready!
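If you want a quick sanity check that MetalLB actually picked up the address pool, the controller logs are a reasonable place to look (the exact log lines vary by version, so treat this as a pointer rather than gospel):

# Check the MetalLB controller logs for the config being loaded
kubectl logs -n metallb-system deployment/controller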

Deploying an Application

I’ve got a really silly application I threw together when I was in CMBU (specifically, a VMworld 2019 branch) that uses a load balancer resource that we can use to test this out.

git clone https://github.com/codyde/cmbu-bootcamp-app -b vmworld2019
kubectl create namespace cody-demo
kubectl apply -f cmbu-bootcamp-app/kubernetes-demoapp.yaml

After a few moments, you should be able to run a kubectl get svc -n cody-demo which will list all exposed services in the cluster. If all things went well, you should see a deployed load balancer!

kubectl get svc -n cody-demo
NAME       TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
db         ClusterIP      10.108.75.155    <none>         5432/TCP       2m25s
frontend   LoadBalancer   10.107.228.233   172.17.255.1   80:30731/TCP   2m25s
pyapi      ClusterIP      10.97.8.23       <none>         80/TCP         2m25s

Observe our frontend service behind a load balancer. Finally, if we hit it in a browser, we should have our page return!
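If you’d rather check from the terminal first, a quick curl against that EXTERNAL-IP does the trick:

# Hit the frontend service through the MetalLB-assigned address
curl http://172.17.255.1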

Wrapping Up

Ultimately - using the kubectl proxy command would’ve been a much simpler way to get access to this application. In reality, we should have deployed an ingress controller like Contour, proxied into that ingress, and then hit our backend resources via an HTTPRoute.

That said - being able to show, and access, a dynamic load balancer resource on KIND absolutely has its uses. Hopefully this helps someone else along the way! Enjoy!