I do a TON in my homelab. It’s funny; I’ve moved into a role where I have a ton of “company” resources that I can work with - On-Demand resources, a VMC cluster, 2 on-premises datacenters - but I still make a point to keep my homelab up and running. It gives me a place to “play”. In a weird sort of way, it’s a calming place for me.
Every week I knock out a few things, some weeks more things than others. I thought it would be a good idea to start documenting the things I do in the lab on a week-to-week basis. If anything it might generate some conversation!
This week was a fun week in the lab! I learned a bunch of new stuff! Major highlights include finally understanding NSX-T, deploying Harbor, and refactoring the demo application I’ve been working on for our Technical Marketing team to leverage the private Harbor repository instead. Let’s jump in!
NSX-T - Expanded to a “Real Cluster”, an Upgrade to 2.3, and a Romantic Evening With Burkey
I learned the majority of my NSX-V knowledge during my time at Pacific Gas & Electric. I was fortunate to have an incredible networking lead who spent a lot of time teaching me the ins and outs of “regular” networking, and a great set of Professional Services consultants who taught me NSX-V. I feel pretty solid/confident in it. And then NSX-T happened… For some reason, I just had a really hard time wrapping my head around the concepts in NSX-T. I managed to “hack and slash” my way into getting it working on a nested cluster thanks to the guides from Sam McGeown and William Lam - but I can’t say I fully “got it”. It’s kind of silly now when I look back at it - because after a bit of time with someone who really knew it, things aren’t THAT different, at least on the surface.
I had a power issue in my environment that sent my nested hosts sideways, and I decided to take that opportunity to rip out NSX-V from my lab and dive into NSX-T. Narrator: It didn’t go well.
I reached out to Anthony Burke, who knows a thing or 452 about NSX-T. Well, I reached out at 12am…and we may or may not have been on until 2am - this was cataloged extensively on Twitter. All jokes aside, Burkey is brilliant. Two hours of hitting him with a variety of questions really filled in the blanks for me.
When it was all said and done, I had a mostly functional 2.2 cluster. I was having some interesting issues with edges and transport nodes going down.
I experienced a few shaky bugs on 2.2, so it seemed like a great time to upgrade to 2.3. I upgraded, but then thought - it’s a better idea to rip and replace, and make sure I actually understand the concepts on my own.
So today (Sunday) I ripped the whole environment out and started from scratch. Successfully up and running on NSX-T 2.3 now! I also have it bound to a Cloud Assembly endpoint…
And was able to leverage it to deploy workloads both on an NSX-T logical switch as well as leveraging an NSX-T Load Balancer!
When we look in the NSX-T manager (such a better UI than V…) we can see the Load Balancer deployed successfully.
With this fully up and functional, I killed off the nested hosts I had running. I’ll likely bring some nested environments back soon on a dedicated host - I just needed the memory on my “core” environment for other fun and games.
Major Call-Outs From Late Nights With Burkey
- Differences with where to tag Uplink and Overlay networks; whether it be on the Portgroup vs Profile, etc…
- Specific BGP configurations based on what we’re attempting to redistribute
- Importance of upgrading to NSX-T 2.3
- Deeper understanding of how various profiles work
- Troubleshooting TEP/Edge connectivity via CLI
Harbor Deployed as a Container Repository
I’ve been doing an increasing amount of toying with various Kubernetes/Docker use cases - specifically around creating demo workflows with Code Stream in Cloud Automation Services. I’ve kept a close eye on VMware’s Harbor for quite a while, and have deployed it a few times - but usually only as a quick “play” before moving on. What is Harbor, you say? Harbor is an open source container repository, used to host Docker images. It’s fully featured - including private repositories, vulnerability scanning, LDAP integration, and much more.
I decided this week to get it set up as a fully functional repository for my environment and for my future projects. Furthermore, I set it up behind my Synology NAS Application Portal to have it reverse proxied into my environment. This lets me leverage it externally for my public-facing projects as well as personal projects!
- Super easy to get started with Harbor: a single harbor.cfg file to edit, and an easy install script fires off the build. Using the --with-clair switch enables the vulnerability scanning capabilities, which are absolutely mandatory IMO
- The Synology Application Portal continues to get a ton of use in my lab for reverse proxying access to things in my environment (Gitlab, Jira, Confluence, Harbor, Plex stuff)
- Another call-out for the Synology - being able to easily request SSL certificates via LetsEncrypt is pretty awesome. Makes it a snap to get that pretty green “everything’s ok” text
- I don’t have LDAP setup yet - but that’s purely because of time constraints. I’ll be getting that setup this week, so that you trustworthy souls on the interweb don’t find my instance (not that hard…) and brute force haxxor me.
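For reference, the install flow from the first bullet looks roughly like this. This is a sketch for a Harbor 1.x release; the hostname and version number below are examples, not my actual values:

```shell
# Unpack the Harbor offline installer (version shown is illustrative).
tar xzvf harbor-offline-installer-v1.6.0.tgz
cd harbor

# harbor.cfg is the single config file mentioned above; at minimum,
# set the hostname clients will use to reach the registry.
sed -i 's/^hostname = .*/hostname = harbor.lab.local/' harbor.cfg

# Fire off the build, with the Clair vulnerability scanner enabled.
sudo ./install.sh --with-clair
```

The installer stands everything up as a set of Docker containers, so the only real prerequisites on the box are Docker and Docker-Compose.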
3-Tier Container App Refactoring
This one wasn’t too crazy; but for the sake of “journaling” I’m going to include it. I’ve been working on a 3-Tier microservices application that I’m going to start using for demos. It’s an Angular frontend (on Nginx), pointed at a Flask API tier, backed by a Postgres DB tier - all in containers. I’ve been pushing the individual tiers to Dockerhub historically - but now that my Harbor instance is up and running, and is externally accessible - I’ve moved it all onto there.
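Moving the images over is really just a matter of re-tagging and pushing each tier; a quick sketch, with a made-up registry hostname and project name:

```shell
# Authenticate against the private Harbor registry (hostname is an example).
docker login harbor.lab.local

# Re-tag the existing frontend image under a Harbor project,
# then push it - repeat for the API and DB tiers.
docker tag demo-frontend:latest harbor.lab.local/demo/demo-frontend:latest
docker push harbor.lab.local/demo/demo-frontend:latest
```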
For those of you that haven’t seen it - I’ve included it below.
Typically I run it in “Dark Mode” - but I was playing with some controls around switching between the 2 ClarityUI themes. It’s really nothing that special - it’s very much a “Guestbook” style app where, when you post, everyone sees what you entered. I’m working on implementing websockets in the application so it’s not constantly polling. I plan on “open sourcing” this demo once the websockets are functional, to share with the whooooole world!
As far as the actual refactoring goes, this involved changes to the Kubernetes and Docker-Compose files I’ve got in the repo, as well as updating my Code Stream pipelines to point at the new repository instead.
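In the Compose file, the change boils down to pointing each service’s image at the Harbor project instead of Dockerhub. A trimmed-down fragment to show the shape of it (image paths are illustrative):

```yaml
version: "3"
services:
  frontend:                                      # Angular on Nginx
    image: harbor.lab.local/demo/frontend:latest
    ports:
      - "80:80"
  api:                                           # Flask API tier
    image: harbor.lab.local/demo/api:latest
  db:                                            # Postgres DB tier
    image: postgres:10
```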
Other Notable But Not That Relevant Call-Outs
- Got Lightbox functionality working on the blog; when you click pictures on my newer blog posts, it presents a bigger version of the image with the background darkened. Still a few bugs, but not bad for a “from scratch” implementation. I love me some Hugo
- Published a What’s New in vRealize Automation 7.5 blog post on VMware Main (a little more than a week ago, but still..)
- Published a Technical Overview of VMware’s Cloud Automation Service - Cloud Assembly
- Updated my lab instance of vRealize Automation from 7.4 to 7.5
- Updated vRealize Operations to 7.0 GA Build
- Updated Log Insight to 4.7
- Lifecycle Manager Updated to 2.0
See something you want more details around? Something you think would make a good blog post? Drop me a Tweet @CodyDeArkland and let me know!