Deploying Function as a Service with OpenFaaS, Kubernetes and VMware API's · TheHumbleLab


Update 10/16/2017

The GitHub repository for the examples I share later in this article can be found at

An Introduction to “Function as a Service”

What is Function as a Service (FaaS)? It's a platform model that runs functions "on demand": spinning up resources as needed, executing the function, and destroying the unused infrastructure afterwards (for the most part; more on that later). For my efforts, I've been leveraging OpenFaaS. OpenFaaS falls into the current trend of "Serverless" technologies, which isn't meant to imply there are no servers involved at all. It's about leveraging resources on demand, so that what's happening at the server level becomes an afterthought. The platform is managed as code, and spins up and down dynamically as needed.

OpenFaaS has some very progressive features: auto-scaling, support for functions in multiple programming languages, a slick user interface, Docker integration, and built-in monitoring via Prometheus (which also makes it easy to bolt on Grafana for visualization), just to name a few! I've been primarily leveraging Python and JavaScript, but I'm excited to hear about some Golang work being done, as this is a space I really want to start developing in.

Tell Me More About OpenFaaS

OpenFaaS is an open source FaaS platform (go figure, being called OpenFaaS and all…). It currently has over 6,000 stars on GitHub and is rapidly growing in popularity. Essentially, the way OpenFaaS works (with lots of other black magic buried in…) is that a function is configured to run within a Docker container. This container is built using the OpenFaaS "faas-cli" commands, and a function moves through a build -> push -> deploy -> invoke lifecycle. The container scrapes the "response" from the function and returns it to the caller. What this means is that if you can build a function around something, you can build an "as a service" out of it. OpenFaaS runs on either Docker Swarm or Kubernetes. My implementation leverages Kubernetes, since I've been doing a lot of prep work for our internal Kubernetes initiatives - but Docker Swarm is the easier path to get started with.


Since i’m leveraging Kubernetes (the specific deployment is here; CFCN formerly known as Kubo) within my environment, this post is going to be focused on the Kubernetes path.

Getting Started

Alex and his merry band of contributors have done a great job documenting the various paths. The documentation is extremely solid, and I'm not going to go too deep into it here because it really is quite simple to get started. The links to the necessary repositories are below.

First, you’ll want to get FaaS CLI to manage the lifecycle of your functions

Then choose your path for installing openfaas

At a high level, the process for installation looks like this:

  • Clone faas-cli: git clone
  • Clone your path of OpenFaaS (faas-netes for me): git clone
  • Customize the YAML files if desired (more replicas? ingress controller? load balancer?). This is optional.
  • Apply the faas, monitoring, and RBAC YAML files: kubectl apply -f ./faas.yml,monitoring.yml,rbac.yml
  • Access the service through a web browser (using the NodePort by default) and via faas-cli

Installation with Helm

Another option is to leverage Helm to complete the installation. Helm is an up-and-coming package manager within the Kubernetes community. You can check out their GitHub here. The OpenFaaS team has been hard at work perfecting the Helm charts for the platform, which simplify the installation and configuration of OpenFaaS. The Helm chart includes functionality for setting up some of the Ingress Controller resources, as well as install switches for the different versions of OpenFaaS (async, ARM, etc…).

You can see the details for leveraging Helm with OpenFaaS here - faas-netes with Helm. I haven't worked with Helm a ton - but I expect its use to continue to grow, and likely become the common path to installation on Kubernetes, due to the level of configuration you can achieve on first install.

Pulling in VMware API Functionality

The beauty of OpenFaaS is that we can take some of our previous work with the vSphere and vRA APIs and easily consume it here too. To that end, I've made a boilerplate repository for some of my OpenFaaS functions.

vSphere and vRA Boilerplate GitHub

If you clone that repo down, you’ll notice you are provided the following

When we review the stack.yml file we see a lot of entries that look like the following

  provider:
    name: faas
    gateway: http://kubernetes-node-ip:31112

  functions:
    vra-token:
      lang: python
      handler: ./vra-token/
      image: yourdockerhub/vra-token
      environment:
        user: youradmin@vsphere.local
        pw: yourpass
        tenant: vsphere.local
These entries effectively correspond to Docker containers which are created and pushed up to Docker Hub when you build your functions. We are able to feed in environment variables here that will automatically be passed into our container as it's built. Additionally, Alex from OpenFaaS recommends (for good reason…) leveraging environment files where it makes sense. This is applicable to things like usernames and passwords. This allows us to include an environments.yml file that would be consumed to provide the data above. You can read more about that in the OpenFaaS documentation, under the secrets section.
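As a sketch of what that separation might look like (the key names and file layout here are illustrative; check the faas-cli documentation for the exact schema), the function definition references an external file, and the credentials live outside the stack file:

```yaml
# In stack.yml, under the function definition (illustrative):
    environment_file:
      - environments.yml
```

```yaml
# environments.yml - kept out of source control
environment:
  user: youradmin@vsphere.local
  pw: yourpass
```

This keeps secrets out of the stack file you commit, while the container still receives them as ordinary environment variables.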

Within this stack.yml file, we will need to update

  • The functions we are going to be using (if you are going to add any more; if not, don't worry about it)
  • Appropriate IP Address and port for our OpenFaaS gateway service in K8s
  • The path to your Docker Hub account and container name. (Yes, you’ll need a Docker Hub account. They are free. You’ll also need to make sure you do a “docker login” on your terminal and provide the credentials).
  • Your vCenter and vRA details

This file effectively manages the functions within your environment when interacting with faas-cli. If we do a faas-cli build -f stack.yml from within the faas-function directory, it will step through each of these containers and build them using a docker build command. Next, if we do a faas-cli push -f stack.yml, each image is pushed up to Docker Hub into the repository you set up. Finally, when we run faas-cli deploy -f stack.yml, faas-cli will deploy our functions into OpenFaaS, adding them to our gateway for consumption. If we log into our gateway service over the 31112 (default) NodePort (note: in my environment I have NSX load balancing my nodes, with a NAT in place that allows communication over port 8080 instead…technology is fun :)), you should see something similar to the picture below…

Gateway Deployed

If we click on one of the functions, such as vra-resources, we are presented with a screen that lets us run the function from within a GUI.

run VRA Function

Taking a step back to the terminal and listing out the directory, we see that it holds all of our functions (which are also stored on git here). We can do a cd vra-resources to get into the directory where the code lives. We can see that it includes a single Python handler, and when we cat it we can see it's the function that actually does the REST calls for both our authentication as well as the API endpoint for all vRA resources, as seen below…

VRA Function

For the sake of detail, the code within this function is below, but it can be viewed on GitHub here as well.

import requests
import json
import os
from requests.packages.urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings(InsecureRequestWarning)  # Disable SSL warning

def handle(req):
    """Builds an authentication token for the user. Takes input of the FQDN of vRA,
    username, password, and tenant via environment variables."""
    vrafqdn = os.environ['cloud_fqdn']
    user = os.environ['user']
    password = os.environ['pw']
    tenant = os.environ['tenant']
    url = "https://{}/identity/api/tokens".format(vrafqdn)
    payload = '{{"username":"{}","password":"{}","tenant":"{}"}}'.format(user, password, tenant)
    headers = {
        'accept': "application/json",
        'content-type': "application/json"
    }
    response = requests.request("POST", url, data=payload, headers=headers, verify=False)
    # The token comes back in the 'id' field; build a Bearer header from it
    j = response.json()['id']
    auth = "Bearer " + j
    vraheaders = {
        'accept': "application/json",
        'authorization': auth
    }
    vraApiUrl = "https://{}/catalog-service/api/consumer/resources".format(vrafqdn)
    reqs = requests.request("GET", vraApiUrl, headers=vraheaders, verify=False).json()['content']
    return reqs

The important callout here is that there isn't any major customization needed for OpenFaaS functions to be integrated; the only requirements are that the primary function being run is named "handle", and that it accepts some data passed into it (even if that data isn't used).
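As an illustration of that convention, here's a made-up minimal example (not one of the repo's functions) showing the shape every handler takes:

```python
def handle(req):
    """OpenFaaS entry point; req is whatever data was POSTed to the function."""
    # The parameter must be accepted even if the function never uses it
    return "received: " + str(req)

# Invoking with an empty body still works, since req is simply an empty string
print(handle(""))
```

Everything else - imports, helper functions, environment lookups - is up to you; OpenFaaS only cares that handle exists and takes the request data.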

This model for consumption of code allows you to do some pretty interesting integrations. For example, it's easy for us to leverage a project like Russell Pope's vRealize-PySDK to abstract a lot of these "raw" API calls into more Pythonic functions. It would be as simple as including those imports in the code you build, so they are baked into the container, and then those functions and methods would be exposed.

I used raw REST calls in my example to demonstrate interacting directly with the REST API from scratch - but ultimately in the “big picture” I’d want to leverage the SDK as it makes interacting MUCH easier. If you haven’t checked out the project; you need to!

Back to the GUI

Returning to the OpenFaaS GUI, if we select Invoke - and we configured our environment variables correctly - we should receive the following response.

Invoke vRA

Notice how we're getting "object" over and over again. This is because OpenFaaS simply returns what the Python function handed back, in text format. Since the Python function returns a JSON object, all it sees is the object itself. If we switch OpenFaaS to return JSON instead…we're presented with something quite different!

vRA JSON Response
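The difference can be illustrated with a tiny standalone sketch (the sample data here is made up): the response body is just a string until it's parsed as JSON, at which point it becomes structured data we can navigate.

```python
import json

# Made-up sample of the text a function might hand back
raw = '[{"name": "hlcentos19", "status": "ON"}]'

# As plain text, the response is just a string
print(type(raw).__name__)

# Parsed as JSON, it becomes a list of objects with fields we can pull out
parsed = json.loads(raw)
print(parsed[0]["name"])
```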

Very cool! It’ll actually parse out the JSON for us and show us the data. Another cool tip is using faas-cli and its invoke capability to run the function. Taking a look at an example, if we do a faas-cli list --gateway we can see all of our functions read out from the gateway we provided. Note, if we want to use a YAML file to return this data, you could use faas-cli list -f name.yml.

FaaS-CLI List

This provides us an easy way to check the number of invocations and replicas for each of our functions from a CLI interface. We can also use faas-cli to invoke our functions easily (as opposed to cURL, which we will demo shortly…). If we run faas-cli invoke -h we can see the help for the invoke command…

FaaS-CLI invoke help

Using the help file, it's very easy for us to pull together a simple invocation against the gateway using faas-cli invoke --gateway --name vra-resources, which returns…

FaaS-CLI invoke vra-resources

As mentioned earlier, if you have users that want to call the functions without "installing" faas-cli, you could use something like cURL to return the data. A simple curl -d '' can serve this purpose well. We can see the actual "return" will come out as a readable JSON dump.


This in itself isn't always going to be very useful; but consider that we can consume these URLs via other types of extensibility. Take my favorite tool from the toolbelt, Python, for example. It's very easy for us to run a POST against this URL with the standard requests library and then start to parse out the data we specifically want. Using the following sample code, we're able to return the name of the first object in the list.

import requests
r = requests.post('', data='')
print(r.json()[0]['name'])

Python Example

We can even take this a step further, and loop over the JSON objects that are returned, pulling data back out.

import requests
r = requests.post('', data='')
for i in r.json():
    print(i['name'])

Python Loop vRA JSON

An important thing to note is that all REST calls against OpenFaaS are actually POST requests. I made the mistake several times of falling into the traditional verb trap of "I'm getting data, so it has to be a GET". Wrong. We always need to send up data, even if it's blank.
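A quick standalone sketch of that point, using only the Python standard library (the gateway URL below is a placeholder): supplying a body - even an empty one - is what makes the request a POST.

```python
import urllib.request

# Placeholder gateway URL; OpenFaaS invocations go to /function/<name>
req = urllib.request.Request(
    "http://gateway.example:31112/function/vra-resources",
    data=b"",  # an empty body still counts as data, so this becomes a POST
)
print(req.get_method())
```

With requests, the equivalent habit is simply always using requests.post(url, data='') against function URLs, as in the earlier examples.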

Let’s take a look at another example, where we need to feed our function a value in order to return data to us.

Below, we have a JavaScript function written for NodeJS that will return the vSphere API data about a machine when you feed it the hostname. It's a fairly complex example in that there are multiple API calls happening within this single function…

  • Calling the vSphere REST API’s Authentication service to get a SID
  • Calling the Virtual Machine list API to return the object ID
  • Calling the Virtual Machine “single VM” API endpoint to return the individual machine in question, from the object-ID
"use strict";
var rp = require('request-promise');
var _ = require('lodash');

module.exports = (content, callback) => {
    var opts = {
        method: 'POST',
        uri: process.env.ENDPOINT + '/rest/com/vmware/cis/session',
        auth: {
            'username': process.env.USER,
            'password': process.env.PASS
        },
        headers: {
            'Content-Type': 'application/json',
            'Accept': 'application/json',
            'vmware-use-header-authn': 'test',
            'vmware-api-session-id': 'null'
        },
        json: true
    };
    rp(opts).then((res) => {
        var opts2 = {
            method: 'GET',
            uri: process.env.ENDPOINT + '/rest/vcenter/vm',
            resolveWithFullResponse: true,
            headers: {
                'accept': 'application/json',
                'vmware-api-session-id': res.value
            },
            json: true
        };
        rp(opts2).then((res2) => {
            // Find the VM whose name matches the hostname passed into the function
            var vmid = _.find(res2.body.value, function (vmobj) { return vmobj.name == content; }).vm;
            var opts3 = {
                method: 'GET',
                uri: process.env.ENDPOINT + '/rest/vcenter/vm/' + vmid,
                resolveWithFullResponse: true,
                headers: {
                    'accept': 'application/json',
                    'vmware-api-session-id': res.value
                },
                json: true
            };
            rp(opts3).then((res3) => {
                callback(null, res3.body.value);
            });
        });
    });
};

In OpenFaaS, we call the function's URL and feed it a VM's hostname. Let's use my favorite jumpbox, hlcentos19, as an example. Note that we tick the JSON box right away, because we know it's ultimately going to return a JSON object…

NodeJS VC VM Example

You can see we return data about this virtual machine from the vCenter REST API. There are a lot of practical uses for this kind of information. Think of the potential of integrating this call into a webpage that a team can feed a hostname into, to get a virtual machine specification back for reporting or troubleshooting. Or, think about crafting another function out of this that powers the machine off. Or one that deletes it entirely…well…maybe not that one. That one can be a resume-generating event :).


You can see, it’s very easy to get started with OpenFaaS and building functions. Using the boilerplate I’ve provided, you have an easy jumping off point to start building additional REST calls into vCenter or vRealize Automation to pull data out. I promise this isn’t the last you’ll hear about Serverless and OpenFaaS on my blog!