Deploying Kubernetes Dashboard in the Lab

Quickly Setting up the Kubernetes Dashboard

In the process of writing my post on setting up VMware Dispatch, I found myself doing a lot of digging around setting up the Kubernetes Dashboard. I felt this would make a great additional post for those of you who are starting to dive deeper into managing/configuring/deploying Kubernetes. The dashboard is a huge help in keeping an eye on your environment, and it’s pretty easy to set up. This guide is intended for a lab, so if you are taking this into production, I’d highly recommend following the formal documentation to make sure your access policies are set up correctly.

Getting Started

From our Kubernetes master (which is also our worker), we can run the command below to install the manifest directly from GitHub.

kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
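
If you want to confirm the Dashboard pod actually came up before moving on, a quick check like the following should do it (just an optional verification step; the pod name prefix matches the default manifest):

kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system | grep kubernetes-dashboard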

We’ll need to create the appropriate RBAC rules to grant us access to the cluster. We can do that with the following (assuming your admin.conf is in the default location):

cat <<EOF | kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
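
To sanity-check that the binding landed, you can list it by the name we gave it above:

kubectl --kubeconfig=/etc/kubernetes/admin.conf get clusterrolebinding kubernetes-dashboard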

With that in place, we can work on accessing the Dashboard. We’re not going to dig too deeply into Kubernetes architecture here, but for the sake of discussion, Kubernetes has a few different ways to expose services. Currently our Kubernetes Dashboard is exposed via a ClusterIP. We want to switch this to a NodePort so that a port is exposed on the worker node the dashboard is currently running on. In my case, for the lab, my master IS a worker, so it will just expose a port on that node. You can think of this a lot like (in fact just like…) exposing a port directly on a Docker host.
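
As an aside, if you’d rather skip the interactive edit shown next, a one-line patch should accomplish the same thing (a sketch, assuming the default service name and namespace from the manifest we applied):

kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'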

Run a kubectl edit service kubernetes-dashboard -n kube-system and we can edit our service inline. When we make changes here and save them, they will be applied to the cluster immediately. You’ll be presented with YAML that looks like the below:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: 2018-01-14T09:34:39Z
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "388"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 24857187-f90e-11e7-8221-005056b2a54b
spec:
  clusterIP: 10.109.43.190
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Scroll down towards the bottom of the spec section where it says type: ClusterIP and change it to NodePort. The YAML should look like the below:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: 2018-01-14T09:34:39Z
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "388"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 24857187-f90e-11e7-8221-005056b2a54b
spec:
  clusterIP: 10.109.43.190
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Do a :wq! to save, and the service should update. If you run kubectl get services kubernetes-dashboard -n kube-system you should see the output below, indicating the NodePort setup was successful. Take note of what is shown beside 443; that is the port you’ll be connecting to (31057 in my case).

(Screenshot: kubectl get services output showing the kubernetes-dashboard service as type NodePort, with 443 mapped to 31057)
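
If you’d rather pull the assigned port out programmatically, a jsonpath query like this should also work:

kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'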

Now we need to get our token so we can access the dashboard from a browser. We didn’t set up our certs properly (this being a lab and all…), so you’ll want to use something like Firefox where you can add an exception to allow the page to load anyway. Back on the master, run kubectl describe serviceaccount kubernetes-dashboard -n kube-system and copy the name of the token. For me, this token was kubernetes-dashboard-token-7z6vk.

(Screenshot: kubectl describe serviceaccount output showing the token name kubernetes-dashboard-token-7z6vk)

Next we’ll run kubectl describe secrets kubernetes-dashboard-token-7z6vk -n kube-system, which will dump the secret as seen below. Copy the whole token.

(Screenshot: kubectl describe secrets output showing the dashboard token value)
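
Since the token name has a random suffix, you can also grab the secret in one shot with something like this one-liner (a sketch; it assumes the default kubernetes-dashboard-token naming):

kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system describe secret $(kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')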

Launch Firefox and browse to https://ipaddressfornode:port and you should be greeted with the Kubernetes dashboard login after accepting the certificate warnings.
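
If you want to make sure the Dashboard is answering before you open the browser, a quick curl against the same address should return a response (the -k flag skips certificate verification, since our lab certs aren’t trusted):

curl -k https://ipaddressfornode:port/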

(Screenshot: the Kubernetes Dashboard login page)

Paste the token in, and you’ll be greeted with the Kubernetes Dashboard.

(Screenshot: the Kubernetes Dashboard overview after logging in)

Conclusion

It took me a while to find all the steps to get this lined up right. Having the Kubernetes Dashboard set up is a huge help if you don’t want to stick to pulling information about your deployment out of the kubectl command line tool. It’s nice having a central UI to manage the whole environment, even if it’s just a single cluster in my case :)