I’ve always wanted this blog to be something from a “Customer” point of view – instead of being from a VAR, MSP, or the vendor directly. As customers we have a unique perspective on how to make these products fit with the goals we have as an enterprise (read: fitting square pegs into round holes). I wanted to build my lab to show, as closely as possible, how I’ve made that work. The downside was that I started homelabbing long before I was working within the “Private Cloud” space. For that reason, my environment was fairly hodge-podge and not uniformly configured. I did a lot of the common “stuff to make stuff work”.
Standard disclosure – my thoughts are my own. As I mentioned, I’m a customer. The discussions I have on this blog are my own. My company does not endorse them, and nothing I say on here should be taken to imply that I’m speaking for my company in any way. This is a personal blog, about technology that I work with. Key word is personal.
I decided that it’s best to completely greenfield my environment: blow away my existing configurations and start everything new. As of now, armed with my VMUG Advantage subscription, I’ve got vSphere 6.0 installed on all hosts, my initial VLANs all carved out, and vCenter 6.0 installed for management. I’ve got a long way to go, however. The ultimate goal is to make an environment that closely mirrors an enterprise environment. This is both for my own experimentation and learning, as well as sharing the knowledge I’ve picked up working in an enterprise environment with EMC’s FEHC stack.
Without anything further – Here’s the initial design plan!
Purchased and implemented components:
- Edgerouter Lite
- Edgeswitch Lite 24
- 2x Dell R710 (2x L5630 / 72GB RAM)
- 1x HP DL380 G6 (2x L5630 / 72GB RAM)
- 1x HP DL380 G6 (spare, same specs)
- 4x LSI 9240-8i crossflashed to 9211-8i IT mode
- Eaton 5PX 3000VA UPS (with newly installed L5-30R on a dedicated breaker)
- APC Server Rack
Let’s talk about a few of my product choices:
I went from using a virtual pfSense router to using an Edgerouter Lite. I love pfSense. It’s a great platform. I have no real beefs with it. That being said – I wanted something with more of an “enterprise-grade feature set”. Now, I know that many people use pfSense in enterprises; I’m not lobbing grenades at pfSense as a platform. I liked the Edgerouter Lite design, I liked the idea of moving directly to hardware instead of having a VM, and I loved the price point. Support for OSPF and other routing protocols was awesome (yes, I know pfSense has an awesome selection of packages). Also – I do my homelab to learn. Using the Edgerouter gave me something else to learn.
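For a taste of why the routing-protocol support appealed to me, here’s a minimal sketch of enabling OSPF on EdgeOS. The interface, addresses, and router ID are hypothetical placeholders for my lab VLANs, not my actual config – adapt to your own addressing.

```shell
configure

# Hypothetical lab-facing interface and subnets
set interfaces ethernet eth1 address 10.0.10.1/24

# Stable router ID (commonly an interface address)
set protocols ospf parameters router-id 10.0.10.1

# Advertise the lab subnets into the backbone area
set protocols ospf area 0.0.0.0 network 10.0.10.0/24
set protocols ospf area 0.0.0.0 network 10.0.20.0/24

commit
save
exit
```

After committing, `show ip ospf neighbor` confirms adjacencies came up. Try doing that on a consumer router.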
I was using a Cisco SG300-10. Awesome switch, absolutely loved it. With that said – I needed A LOT more ports. I considered just buying a second 10-port Cisco SG300 to give me switch redundancy, but there was something attractive to me about keeping my router and switch from the same company. Consistency and such. My OCD liked it. Plus the Edgeswitch fit perfectly in my rack.
R710s are a staple of homelabs these days. They are cheap, powerful, and community content for fixing them when stuff goes wrong is readily available. They aren’t terribly power hungry either. I had the DL380 G6’s already, so I decided to power one down and use it as a spare, and bring the other one into the cluster. Sadly, VMUG licensing only covers 6 sockets, and I didn’t want to go down “other paths” for licensing.
I decided to swap out the crappy onboard RAID cards with something tried and true from the community. I got a great deal on 4 IBM 9240-8i’s, so I grabbed those, crossflashed them to 9211-8i, and voila – had better storage availability.
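The crossflash roughly follows the well-worn community procedure: wipe the MegaRAID identity from the SAS2008 chip, then flash the 9211-8i IT firmware. A sketch of the DOS-boot session is below – firmware image filenames and the SAS address are placeholders that vary by where you source the files, so treat this as an outline rather than a recipe.

```shell
# Hypothetical DOS/USB-boot session; adjust controller index (0) and filenames.
# 0. Note the card's SAS address from the sticker, and back up the SBR first:
megarec -readsbr 0 backup.sbr

# 1. Wipe the MegaRAID identity so the card presents as a plain SAS2008 chip:
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
# (reboot here before continuing)

# 2. Flash the 9211-8i IT-mode firmware; skip the boot ROM for pure passthrough:
sas2flsh -o -f 2118it.bin

# 3. Restore the SAS address you noted earlier (placeholder shown):
sas2flsh -o -sasadd 500605bxxxxxxxxx
```

Skipping the BIOS ROM in step 2 also shaves a few seconds off POST, which is nice when you’re rebooting hosts constantly in a lab.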
The BIGGEST upgrade to my environment wasn’t even server hardware. A local government agency was cleaning out the datacenter and gave me, for free, an APC Server Cage with an Eaton UPS, which had a brand new battery in it! Way to class up the homelab! Gotta love freebies.
So What Now?
Now, I start building out the infrastructure. VMware’s NSX is absolutely critical to the implementation. I also need a storage platform capable of leveraging CoprHD (ViPR in FEHC, but CoprHD will suffice). In deployment order, I’m planning the following as phase 1:
- Implement NSX Software Defined Datacenter Platform – Use version 6.1.5 to demonstrate the process for upgrading to 6.2.1
- Implement ScaleIO Software Defined Storage Platform (free for home use, supports CoprHD)
- Implement vRealize 6.2.1 and 7.0 on NSX virtual wires
- Implement vRealize Operations Manager (huge gap for me)
The plan is to document this journey and show the pitfalls I run into, and real solutions for them.
Stay tuned for the next few weeks! There is going to be some pretty awesome content flowing down!