What's in the Lab: 2018 Edition


<img src="/images/2018-02-26-whats-in-the-lab-2018/01-homelab-summary.PNG#center" alt="vCenter Summary" style="width:700px;">


I've been a huge proponent of having a homelab in one way or another for the better part of the past 10 years. In my current role, I have access to a ton of different types of virtual environments, but there's no substitute for having an environment that you can tinker in and play with end to end. Furthermore, I'm a huge believer in having experience setting up and configuring the solutions from the ground up.

It's obviously changed a lot in complexity and solutions over the years, especially as I've changed jobs. I've gone from VMware Workstation based labs, to pure whitebox (custom host build) labs running on strictly consumer grade hardware, to server grade hardware and every step in between. I've had a lab that consumed 50W of power, and a lab that consumed 1000W of power.

Ultimately, in order to sustain an adequate balance of WAF (Wife Approval Factor) and technical capability, compromises have to be made. There are a number of angles I needed to look at for the lab in order to satisfy my functional goals as well as my learning goals.

  • Server Hardware
  • Storage
  • Networking
  • Software Solutions

Lab Overview - Server Hardware

Management Cluster

  • 1 Whitebox Server Build
  • Dual E5-2670 CPUs
  • 128GB of RAM
  • 8 Drive Bay comprised of 4x 3TB Drives and 3x 500GB Samsung EVO 850 SSDs
  • Xpenology VM for Storage (Soon to be replaced with physical Synology Array)
  • Docker running within Synology for various services <img src="/images/2018-02-26-whats-in-the-lab-2018/02-management-cluster.PNG#center" alt="Management Cluster" style="width:700px;">

Tenant Cluster

  • 2x Dell R710s
  • Dual L5630 CPUs in Each
  • 1 w/ 104GB RAM, 1 w/ 72GB RAM (Haven't gone in and balanced it out yet…)
  • Each server has an EMC LSI WarpDrive PCIe SSD (300GB) and a Samsung EVO 960 (1TB)
  • 10GbE NIC Direct-Connect Configuration <img src="/images/2018-02-26-whats-in-the-lab-2018/03-tenant-cluster.PNG#center" alt="Tenant Cluster" style="width:700px;">

I run two clusters in my environment: a Management cluster and a Tenant cluster.

The Management cluster has a storage VM running Xpenology, a community build of the popular Synology DSM software. I pass the host's storage controller through to the Xpenology VM, giving the VM direct access to the disks to manage. More on this later. This cluster only has a single host, so obviously I accept some level of risk there as a single point of failure. I keep what I consider “core” systems on this host, but am slowly moving those over to the 2-node Tenant cluster because of the redundancy.

The Tenant cluster is running a 2-node vSAN configuration. The 10GbE NICs are configured to carry the vSAN traffic, while the witness traffic heads out over the management ports. The bulk of my workloads run in this cluster since the majority of my memory capacity is here, and I have two hosts to move systems between. The performance is great, with most VMs seeing little to no storage latency.

<img src="/images/2018-02-26-whats-in-the-lab-2018/16-2node-vsan.PNG#center" alt="2-Node vSAN" style="width:700px;">

The Witness VM for vSAN is deployed in the Management cluster, but lives in a separate “Datacenter” construct named witness.
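If you're curious what the witness traffic separation looks like under the hood, it's just a matter of tagging the right vmkernel ports on each host with esxcli. A quick sketch (the vmk names here are assumptions; check yours with `esxcli vsan network list`):

```shell
# Tag the 10GbE direct-connect vmkernel port for vSAN data traffic
esxcli vsan network ip add -i vmk1

# Tag the management vmkernel port to carry witness traffic instead,
# so the witness appliance never needs to see the 10GbE network
esxcli vsan network ip add -i vmk0 -T=witness
```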

<img src="/images/2018-02-26-whats-in-the-lab-2018/04-environment-summary.PNG#center" alt="Environment Summary" style="width:700px;">

Lab Overview - Storage

We touched on this lightly, so I'll make this a quick summary.

I leverage Xpenology in my primary environment, with the storage controller on my Management host set to pass through to the Xpenology VM. I won't get into the details of how to set up or consume Xpenology because, while it's pretty awesome in my environment now, going that route isn't for the faint of heart.

<img src="/images/2018-02-26-whats-in-the-lab-2018/05-xpenology-storage.PNG#center" alt="Xpenology Summary" style="width:100px;">

I have a few iSCSI LUNs set up in Xpenology, an NFS share (4x 3TB drives) that stores all my home content, and an SSD LUN (3x 500GB Samsung EVO 850) for VMs that don't live on vSAN today. The disks are effectively all in RAID 5, which gives me a little bit of failure tolerance, which is comforting since I'm not using enterprise grade drives.
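Consuming that storage from the ESXi side is straightforward. A sketch of mounting the NFS export and pointing the software iSCSI adapter at the array (hostnames, share paths, and the vmhba number are all made-up examples for illustration):

```shell
# Mount the NFS export from the Xpenology VM as a datastore
esxcli storage nfs add --host=xpenology.lab.local \
  --share=/volume1/home-content --volume-name=syn-nfs

# Point the software iSCSI adapter at the Xpenology target portal
# and rescan so the LUNs show up
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.50:3260
esxcli storage core adapter rescan -A vmhba64
```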

From a vSAN perspective, all VMs on my Tenant cluster are exclusively using vSAN because, well, why not?

<img src="/images/2018-02-26-whats-in-the-lab-2018/06-vsan-capacity.PNG#center" alt="vSAN Summary" style="width:700px;">

Since I'm leveraging all-flash in my vSAN, I'm able to take advantage of deduplication and compression on the data, which saves me a decent little chunk of space.

Lab Overview - Physical Networking

Like many homelabbers, I've fully bought into the Ubiquiti Networks ecosystem. A summary of the solutions I leverage is below:

  • UniFi Security Gateway (AKA USG, essentially my core router/security appliance)
  • Unifi 8 Port PoE Switch (office switch, connected directly to the USG)
  • Unifi AC UAP - Office AP connected to the PoE ports on the Unifi Switch
  • Unifi AC UAP - Living Room AP
  • EdgeSwitch 24 - 24 Port Switch sitting out in the garage with the rest of the server hardware.

<img src="/images/2018-02-26-whats-in-the-lab-2018/09-ubnt-summary.PNG#center" alt="Unifi Summary" style="width:700px;">

From a topology perspective, I have three VLANs deployed on the USG, which automatically configures those VLANs on the Unifi Switch. I have a single “trunk” line that runs from my office out to the garage, connecting to my EdgeSwitch. All servers connect into the EdgeSwitch, where I also define the VLANs that were created on the USG as needed.

The 3 networks are:

  • Common Services Block - Network for general systems that I didn't want to run within NSX.
  • NSX VTEP VLAN - Contains all VTEPs for NSX
  • NSX Uplink VLAN - Network specifically for uplinks from my NSX Edge Services Gateway (ESG) to my physical network

From here, I also use the USG to do dynamic routing via BGP (it can do OSPF too) between the ESG and the physical network. Within NSX I have a few networks defined for demonstrating logical networking: DMZ, Web, App, and DB. NSX is a story for another time though!
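The USG runs EdgeOS under the hood, so the BGP peering looks roughly like the Vyatta-style config below. The ASNs, router ID, neighbor IP, and advertised network are all made-up examples, not my actual values:

```shell
# On the USG CLI (EdgeOS / Vyatta-style configuration)
configure
set protocols bgp 65001 parameters router-id 10.0.0.1
# Peer with the NSX ESG uplink interface
set protocols bgp 65001 neighbor 10.0.0.2 remote-as 65002
# Advertise the common services block into BGP
set protocols bgp 65001 network 192.168.10.0/24
commit
save
exit
```

One gotcha worth noting: the Unifi controller re-provisions the USG's config, so CLI changes like this don't survive on their own; they generally need to land in the controller's config.gateway.json to persist.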

I decided to go “all in” on the Unifi route because I really liked the traffic visualization provided by the controller. I run the controller in a Docker container on my Xpenology array, with the config directories set as persistent volumes backed by the NFS share.
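Running the controller in a container boils down to a single docker run with the data directory mapped out to persistent storage. A sketch, assuming the community jacobalberty/unifi image and an example volume path (both are assumptions, not necessarily what I run):

```shell
# Ports: 8080 = device inform, 8443 = web UI, 3478/udp = STUN
# /volume1/docker/unifi is an example path on the NFS-backed share
docker run -d --name unifi-controller \
  --restart unless-stopped \
  -p 8080:8080 -p 8443:8443 -p 3478:3478/udp \
  -v /volume1/docker/unifi:/unifi \
  jacobalberty/unifi:stable
```

Because all the controller state lives in that one volume, moving the controller to another host is as simple as re-running the container somewhere else against the same share.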

<img src="/images/2018-02-26-whats-in-the-lab-2018/07-unifi-controller.PNG#center" alt="Unifi Summary" style="width:700px;"> <img src="/images/2018-02-26-whats-in-the-lab-2018/08-unifi-controller-dpi.PNG#center" alt="DPI Summary" style="width:700px;">

Lab Overview - Software Solutions

As a Core Systems Engineer with VMware, I have this fundamental belief that if I'm going to show a product, I need to have installed and configured it before. To that point, I do ALL of my customer demos from my lab environment. If I don't have it in my lab, I'm not demoing it. Not everyone does things that way, and I'm not saying I'm “right” for it; it's just how I operate.

To that end, I'm running the following stacks in my lab currently:

  • vCenter 6.5 with an External PSC (duh)
  • vSphere Integrated Containers 1.3
  • vRealize Operations 6.6.1
  • 2-Node vSAN
  • vRealize Automation 7.3 (with the epically awesome SovLabs plugins, props for hooking up #vExperts)
  • vRealize Log Insight
  • Horizon View 7.4 leveraging USG and vIDM
  • NSX 6.4 for DFW and Logical Networking
  • vRealize Network Insight
  • AppVolumes
  • Several different Kubernetes Configurations
  • Several various Docker/Swarm hosts/clusters
  • Veeam for Data Protection (props to Veeam for hooking up #vExperts)
  • Infoblox for IPAM and DNS
  • F5 Big IP

There are a lot of details here that I could cover, so I'll be very brief on some of the ones that I haven't covered yet.

vSphere Integrated Containers 1.3

VIC 1.3 was a HUGE leap forward for the platform, and I love having it available in my lab. The new GUI makes deploying new VIC hosts incredibly quick and easy, and the way the new plugin functions really makes VIC feel like a first-class citizen in the vSphere platform. There are a ton of fun use cases for VIC - stay tuned, as I tend to go on “Let's hack at VIC” sprees quite often.

<img src="/images/2018-02-26-whats-in-the-lab-2018/15-vic13.PNG#center" alt="VIC 1.3" style="width:700px;">

vRealize Automation 7.3

I'm a huge vRA nerd; anyone who knows me knows this. I'm doing a ton of things with vRA at any given time, usually focused on XaaS and evangelizing the “it's not as hard as everyone makes it out to be” story. The SovLabs plugins are great and do a lot to simplify the environment. I highly recommend checking them out, as well as keeping an eye on my blog, because I have several half-written posts that I'm going to drop out here shortly. I also leverage the Infoblox integration for IPAM.

<img src="/images/2018-02-26-whats-in-the-lab-2018/10-vra-catalog.PNG#center" alt="vRA Catalog" style="width:700px;">

vRealize Log Insight

Log Insight is by far one of the most underrated products we have. I have all my syslogs going into Log Insight for analytics and log aggregation. It's so much more than a log search engine.
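Pointing an ESXi host's syslog at Log Insight only takes a couple of esxcli commands. A sketch (the Log Insight hostname is an example):

```shell
# Send this host's syslog to the Log Insight appliance over UDP 514
esxcli system syslog config set --loghost='udp://loginsight.lab.local:514'
esxcli system syslog reload

# Make sure the outbound syslog firewall ruleset is open
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```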

<img src="/images/2018-02-26-whats-in-the-lab-2018/11-loginsight.PNG#center" alt="Log Insight Summary" style="width:700px;">

vRealize Operations 6.6.1

No environment would be complete without a solid vROPS installation keeping an eye on things. vROPS has saved my butt in my lab more times than I can count, calling out a ton of little things that I'd normally miss. It's a pretty great tool, and the most recent release has really stepped up its usability.

<img src="/images/2018-02-26-whats-in-the-lab-2018/14-vrops.PNG#center" alt="vROPS Summary" style="width:700px;">

Horizon View 7.4

I'm using Horizon with non-persistent VDI, leveraging Instant Clones for Windows 10. I use AppVolumes in a very basic configuration currently. These all map back to my vSAN datastore. Unified Access Gateway lives on an NSX network that's isolated to prevent bad things from happening. I have a NAT set up from my UAG on the edge down into the USG, and then into vIDM. You can see it by hitting https://view.humblelab.com. This configuration gives me remote access to my lab at times when I can't hit the VPN provided by my USG.

<img src="/images/2018-02-26-whats-in-the-lab-2018/12-horizon.PNG#center" alt="Horizon Summary" style="width:700px;">

NSX 6.4

NSX is set up for logical networking and has the majority of my “VLANs” defined in it. My firewall is pretty loose currently, with the exception of the external connectivity via the UAG in my “DMZ”. I'll be leveraging vRealize Network Insight in the coming weeks to tighten this up as part of a “training” exercise for customers.

<img src="/images/2018-02-26-whats-in-the-lab-2018/13-nsx.PNG#center" alt="NSX Summary" style="width:700px;">


I've got a number of fun projects running on Kubernetes currently, specifically VMware Dispatch and OpenFaaS. I'm using these for function-as-a-service type automation, normally around interacting with APIs. I want to expand this into home automation as well.
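The OpenFaaS workflow for these little API-glue functions is pleasantly simple. A sketch of scaffolding and deploying one (the function name and gateway URL are made-up examples):

```shell
# Scaffold a new function from the Python template
faas-cli new api-hook --lang python

# Build the image and deploy it to the gateway
faas-cli build -f api-hook.yml
faas-cli deploy -f api-hook.yml --gateway http://openfaas.lab.local:8080

# Invoke it with a JSON payload
echo '{"room":"office"}' | faas-cli invoke api-hook --gateway http://openfaas.lab.local:8080
```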

As mentioned previously, I'm running my Unifi Controller in a container as well, which has been a huge help from a portability perspective.

What's Next?

No homelab should ever be “done.” There should always be a next project or next goal you're looking to dig into. For me, I've got a few things lined up…

Synology 1817+ Physical Array

<img src="/images/2018-02-26-whats-in-the-lab-2018/17-synology-physical.PNG#center" alt="Synology Physical" style="width:700px;">

I need to replace my Xpenology VM with a real physical Synology appliance. It's more stable, easier to patch, and takes the storage burden off of my management host. It's a steep price tag, but it's worth it.

vRealize Suite Lifecycle Manager

<img src="/images/2018-02-26-whats-in-the-lab-2018/18-vrslcm.PNG#center" alt="vRealize Suite Lifecycle Manager" style="width:700px;">

I have high hopes for the Lifecycle Manager product. It's pretty incredible to see the amount of attention that “better installs” are getting inside VMware these days - and vRSLCM (dat acronym though…) is pretty great evidence of that. Bringing my SDDC components under a central management interface is a very good thing.

Linux Instant Clones in Horizon

<img src="/images/2018-02-26-whats-in-the-lab-2018/19-linux-clones.PNG#center" alt="Linux Clones" style="width:700px;">

I need to create a new pool for Linux-based instant clones, as they are now supported in Horizon 7.4. I'm a huge fan of Linux as a daily driver OS, and having Linux VDI available on a whim is a good thing. #YearOfTheLinuxDesktop?

Deeper F5 Configurations

The F5 is an extremely powerful load balancer with a ton of options around it - mine is minimally configured currently. I need to start doing a lot more with it, especially around the reverse proxy capabilities.


My lab has changed a lot over the years; at one point I was running two 2-node clusters to do multi-site failover scenarios with SRM, NSX, and vSphere Replication. While the solution was incredibly cool, it was also very costly on the power meter and turned my garage into an oven. Sometimes running a simplified configuration is better.

As I mentioned previously, I extensively use my homelab for customer demonstrations. Thus far, my customers have really enjoyed digging into how it's set up and really enjoy the “live demo” experience of it all.

Stay tuned; in future posts we'll start digging deeper into “how” each component is configured - and some fun homelab hacks to pull out around them!