Evolution - Thinking About My Lab



!!Spoiler Alert!! - This is not an "I'm moving all the things to public cloud" blog post…

Added 5/7/2019: I've had a few discussions since posting this blog around the implication that I'm just moving all my "stuff" to AWS. Totally not the case by any means! Spoiler alert: this blog post was never really about me powering off my homelab. It was about the evolution of what resources I'm using to further my learning. Some of those resources are public cloud resources like AWS, some of it is VMware Cloud on AWS, and a lot of it is pure local resources. I'm definitely not in the "I'm moving all the things to public cloud" camp by any means. I've always approached my labbing as "What's the best tool for the job?" The moral of this story is that as your learning evolves, so should the tools you use to learn against. In my case, I don't need the servers in my garage for that learning anymore. I'm certain one day that station will come back around; but for now I'm just on a different journey.

“All My Life, all I ever wanted to be was a labber…”

I've been a huge advocate for homelabs for the majority of my IT career. I can honestly trace back every promotion I've ever received to activities that took place in my lab, including the role I'm in now. Independent learning has always been such an important concept to me - and the best way I had to achieve that was to "build it in my lab" first. Many of the workflows I built for vRealize Automation started in my lab at home! I advocated for including homelab work on my resume, and every time I was met with positivity. Employers always appreciated the lessons I learned from the lab because it was always positioned as exactly that - an extra way to learn.

I've had it a long time too. A very long time. My lab has followed me through the past 3 homes I've lived in. It's grown, shrunk, changed. I've added servers, removed servers. I've consolidated, changed components. I've changed networking. I've created ridiculous VLAN schemes. I've spent hours backing those schemes out. I did demos out of it both when I was in the field and now in Tech Marketing. It's been a big part of my life. In an emotional way, it's where I've gone when I've felt lost. It was a place that was totally mine - a place to build, grow, and find confidence in myself.

In a lot of ways, its existence has mirrored my own career. The lab changed as I changed.

What’s In My Lab Today?

Today, my lab is the following…

Hardware

  • A "Management" host. Dual E5-2670 CPUs with 128GB of RAM
  • 2 R710s with 2x L5630 CPUs. One with 104GB of RAM, the other with 72GB of RAM. This is running a 2 node vSAN configuration with 10GbE directly connected between them.
  • Ubiquiti networking stack (Unified Security Gateway, EdgeSwitch, UniFi Switch, UniFi Access Points)
  • Synology DS1817+ for storage. Mix of SSDs and spinning disk. This also has the Docker package installed, and I run a number of containers for various support tasks.

From a vCenter/vSphere perspective, my environment is broken into a single datacenter with 3 clusters:

  • Management - Single Host
  • Tenant - 2 R710s with vSAN
  • Witness - Single Witness host (appliance on Management host)

I broke them up this way in case I wanted to power off either cluster. The Management systems are the ones I do my best to keep up all the time. More on that in a few.

Software

This is an ever-evolving topic, so we're going to treat today as a snapshot of the current state.

In the Management Cluster

  • VMware Harbor - Docker Registry
  • Photon based Docker host
  • 2 Gitlab Servers
  • A Veeam Proxy
  • My vSAN Witness Host
  • My AD Domain Controller
  • A MS SQL Server (for vRA and other databasey things)
  • A Cloud Automation Services Stargate (Cloud Proxy)
  • Some components of PKS Enterprise
  • Cloud Automation Services vRealize Orchestrator Appliance

In the Tenant Cluster

  • vRealize Automation 7.5
  • Infoblox for IPAM and DNS
  • NSX-T
  • Remaining components of PKS Enterprise
  • vCenter
  • Dispatch standalone appliance
  • Puppet Master
  • Another Cloud Automation Services Stargate Appliance
  • Log Insight (God I love that platform)
  • Network Insight proxy appliance
  • vRealize Operations
  • Veeam for backup, pointed at the Synology
  • A few utility VMs with things like postfix and telegraf on them

Evolution: Consumption vs Building

Coming aboard at VMware has been amazing. I love my job, and I love the business that I represent. vRealize Automation literally changed my life, both for my family and for myself personally and professionally. A huge lesson I learned from the platform was the value of being able to easily consume resources. Building automation is FUN to me. As I look at the lessons we're learning as a company around multi-cloud in Cloud Automation Services, I'm finding the lessons that I'm learning are less about how to "deploy VMs" - and more about how to configure and deliver platforms. I'm building automation for the consumption of those environments through pipelines and other delivery mechanisms; I'm building workflows, pipelines, and actions that execute in a multi-cloud world. It's very different than racking lab servers.

When I look at what I’m spending a lot of my free time learning and playing with - it’s much less focused on building infrastructure. I still love vSphere; this is absolutely NOT a “I don’t care about vSphere” post. I absolutely do (more on that at the end…). I just find my needs align much closer with something like VMware Cloud on AWS and the consumption model it brings. Beyond that; I’m also more interested in being able to make my workloads portable through consumption based platforms. This might mean leveraging Docker, or scaling up with Kubernetes. It might mean dropping the workload into GCP, or Azure. I might want to play with how it looks behind an ALB in EC2.

Do I really want to manage my Puppet master on-premises, or would I rather consume it as a service in public cloud? Should I really keep my Harbor repository within my environment? Or would it be more beneficial to have it sitting in EC2, where I can easily hit it from anywhere without having to punch another reverse proxy into my lab?

Looking at my Lab Like an Enterprise

A lot of this realization came out of a debate I was having with Jon Schulman around some changes I was thinking of making in my lab. It's an interesting "lifecycle" time for my "environment". I've got a number of concerns coming up that are easily paralleled to issues customers are addressing today -

  • Heating/Cooling - Summer is coming. The temperatures in my area of California are already starting to heat up. Soon, those R710s are going to start screaming in the garage. With that heat increase comes…
  • Power Usage - I spend roughly $70-$100 a month powering the systems in my garage. Cooling is a concern. I've wasted so much money trying various cooling strategies. It never plays out well.
  • Lifecycle - I'm using 2 R710s and a whitebox build (E5-2670; it's safe for now). These R710s are dirt cheap, and totally capable systems - but I'm running out of road with CPU compatibility. ESXi already throws a fit during installs/upgrades. It's time to upgrade these systems to an R720 at minimum. Upgrading impacts the above 2 issues directly, however. Switching to newer, smaller systems that drink less power and require less cooling significantly increases my up-front cost. It becomes prohibitive.
  • Storage - I’ve been careless with my storage. I didn’t plan fantastically long term; and I’m starting to bump up against resource constraints. I’d need to beef up my storage to continue.
  • Licensing Renewals - This one is a bit tongue in cheek. Several of my platform licenses are about to expire. NFRs - perks of being in the "vendor" community. I'll need to get these refreshed in order to keep living in the lab I'm accustomed to at home :)

I’ve tried to solve these issues all in various different ways. I remember last year I spent $300 moving all my systems into a smaller rack, putting it into my office closet, and trying to run ducting to vent the heat out. It was successful, but at the cost of really annoying sounds in the office. All. The. Time.

The next question that comes up is use case. What am I actually building and labbing on these days?

Understanding my Use Cases

I'm a fiend for learning, and I love tinkering. It used to be that I'd stand up platforms in my environment just to see how they worked. I think I've taken almost every product we have here at VMware through its paces in my lab. Beyond VMware technology, I'm always playing with platforms. A lot of times it's simply for the feeling I get from "building" cool things. It gives me a level of peace.

My learning has become a lot more focused on specific goals as of late. I'm doing a lot more "real" coding vs generic scripting. I've really started to enjoy building web applications with Angular and ClarityUI and am becoming a lot more focused on UI/UX. This has made me start digging into NodeJS more, forcing me to learn more about asynchronous operations.

I've started toying with Golang quite a bit; structs and marshalling are still breaking my brain - but it's getting better. Python is always something I'm working on. So much of this is happening in containers on my local machine, or on Kubernetes in Cloud PKS (formerly VKE). The things that aren't container workloads are getting spun up in EC2, Azure, or GCP, or running as functions on a FaaS platform. When I need to deploy vSphere workloads, I'm using Cloud Automation Services to tag a blueprint and drop a workload into VMC or another private vSphere environment.
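
Since I mentioned structs and marshalling, here's a minimal sketch of the kind of Go I've been practicing with - just the standard encoding/json package and a made-up Workload struct purely for illustration (none of this is lifted from my lab or from any real API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Workload is a made-up example type. The json struct tags control how
// each field is named when the struct is marshalled to JSON.
type Workload struct {
	Name     string   `json:"name"`
	CPUs     int      `json:"cpus"`
	MemoryGB int      `json:"memoryGb"`
	Tags     []string `json:"tags,omitempty"`
}

func main() {
	w := Workload{
		Name:     "demo-vm",
		CPUs:     2,
		MemoryGB: 8,
		Tags:     []string{"env:lab"},
	}

	// Marshal the struct into JSON bytes.
	out, err := json.Marshal(w)
	if err != nil {
		fmt.Println("marshal error:", err)
		return
	}
	fmt.Println(string(out)) // {"name":"demo-vm","cpus":2,"memoryGb":8,"tags":["env:lab"]}

	// Unmarshal the JSON right back into a struct.
	var w2 Workload
	if err := json.Unmarshal(out, &w2); err != nil {
		fmt.Println("unmarshal error:", err)
		return
	}
	fmt.Printf("%+v\n", w2)
}
```

The same pattern applies whether the payload is a toy example like this or a real response body coming back from a REST API.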

Furthermore, the most popular topics I talk about in the community now aren’t “compute” related. People want to learn more effective ways to consume, automate, and operate “as code”. This is something that’s huge for me, because teaching people is one of the most rewarding things I’ve ever done. When was the last time someone asked me to teach them how to install vRealize Automation? It’s been a while.

When I look at the things I want to teach people about, it's more platform/application driven. It's how to play in the Kubernetes world, the "what is Docker" conversations. It's how to automate across environments and clouds. It's Puppet and Ansible. It's APIs. It's user interface/experience. So much of this can be done from my local environment. In Technical Marketing I have a number of internal resources I can use that will give me full control of an environment (as mentioned before).

The more I look at my lab, the more I find that the things that fulfill me - and the value I feel like I bring to our community - live outside of that environment in my garage. Maybe my world has just become a lot bigger than it was when I was trying to figure out how to get BGP to advertise simple routes in a small environment with my USG and NSX. That's not to say those concepts aren't valuable to me at all - I think I'm just on a different journey now than I was when that stuff was really important for me.

Closing, Mourning and Excitement

It's really sad to me, because I love my lab. It's taught me a lot, and I'm sure it'll teach the next person a lot too. I talked a lot about identity in my last blog post. My homelab is a huge part of my identity. I even named my blog after the concept of a "Humble Lab". What I've come to realize is that "TheHumbleLab" isn't about the servers it's running on. It's about the workloads going onto it. It's about the platforms I'm building. It's the lessons I'm teaching in the community.

I want to be crystal clear: vSphere is not dying. Its best years are right around the corner - I was on calls today learning about some of the awesome stuff coming in future versions of vSphere/vCenter. If I'm honest with myself, it's just that the trajectory I'm heading on isn't in line with that environment living in my garage anymore. Maybe I'll end up back there in the future - but for now, it's time to migrate the workloads I need to save out - and power it down.

I'm a tinkerer by nature. I can absolutely see myself picking up a few of the smaller ARM based systems to play with, just to satisfy that craving. I've been super intrigued by some of these devices hitting the marketplace, and I've had my eye on a few of the ODroid devices as something to potentially pick up.

What’s Next

As I start to migrate my workloads out and into the cloud, I fully intend on documenting aspects of the journey. I'm excited to take products like HCX and migrate aspects of my homelab into the VMC cluster that Technical Marketing has to use. I'm excited to see how I can use VMware Cloud Automation Services to create blueprints that I can use when I want to spin up infrastructure for testing the next cool thing.

Most of all, I'm excited to share the journey with all of you! I know that many of you will probably frown on this post - because we love our homelabs! My only comment back is that my homelab isn't going anywhere - the definition of that lab is changing, and the home for the technology I'm working with is just distributed in a much different way than what my garage can handle.

Plus - I won't have to deal with those R710 fans screaming in 106 degree heat this summer. Wife approval win.

Answering a Valid Question… "Would this be different if I didn't work at VMware?"

One of my brothers from down under, Matt Alford, asked me this valuable question tonight after reading this blog post - and I felt like it was a relevant enough answer to give on the blog, since I'm CERTAIN many people are having the same idea. "It's easy for you to say this, when you have near unlimited resources to replace it with. Big whoop. You're just moving data centers." It's a totally valid line of questioning. Let's unpack it a bit.

My homelab is for learning. When I was a "customer", I was building the things I would eventually build at work in my own environment first, to learn on them. After that - I would expand, and learn. It was all so "new" to me that I just kept finding new and interesting things to play with. Fortunately, coming into VMware afforded me a lot more opportunities to learn. That being said, I'm certain I don't even know half of "everything". It just happens that on the journey I developed new priorities, which fed new hungers for knowledge. The things I wanted to learn evolved into something else.

When I look at the VMs that I'm planning to migrate into VMC, the list is fairly small.

  • Infoblox, for IPAM integration testing against various platforms
  • vRealize Automation 7.x (inclusive of the appliances, the Windows boxes, eww, and the SQL Server)
  • Windows Domain Controller, again, for testing
  • CAS Proxy Appliance

You can see a common theme here: these are all core parts of my current role, and things that I need to be able to tinker with at times. If I wasn't at VMware, it's hard to say if those items would still be part of the equation. I might still need them in my new role. Or, not having access to them anymore, I might suddenly want to tinker again, in which case I would instantiate some form of a new lab. That being said, I'd likely be more inclined to rent a server from a provider at this point - because the details around hosting in the garage have brought complications that I'm not sure I want to deal with anymore. It would all depend on the situation I was in.

My lab environments will always be an extension of a combination of necessity and passion. Currently, the lab in my garage isn't a necessity, because the topics I'm digging into the most don't need it. I have other means of supplementing the value it brings. With that gap filled, I'm free to invest "TheHumbleLab" in things more aligned with my learning passions.

TL;DR Version

If I wasn't at VMware, or moved into a role where the need came back, I would fire a lab back up in a heartbeat. I love labbing - and I'll continue to lab in the way that makes the most sense for my current station. I'll pivot as that changes and I discover new passions! I love vSphere, and believe that it's the best at what it does. It'll be there when I come back around the Sun again!