Update 4/26: I've revised this post so we no longer connect to the appliance via SSH to deploy the VIC host; instead, it now provides instructions for installing the Docker Toolbox on Windows 10 and leveraging it locally. This is the "right" way to do it. Using the appliance works, but it is not recommended. In the original version of this post, we SSH'd into the VIC appliance to perform tasks like deploying the VIC host and calling the Docker API. I originally did it that way because the post "flowed" a bit better using the appliance we had just deployed to do the work, and because I was having issues getting Docker running correctly on my Windows 10 laptop. After chatting a bit internally, though, it's really not best practice to leverage the appliance this way. In the lab it's fine, but let's coach to success here :)
I Can't Even Contain Myself…ha…ha…
Now that I've gotten a poor joke out of the way, let's jump into something pretty exciting. VMware's vSphere Integrated Containers version 1.1 went GA on 4/18 (see the blog post here). Containers are growing quickly in popularity and are starting to see some production use cases. I've been playing with them a little bit here and there in the lab, but never with any sort of major goals in mind. That being said - it's time to step the game up.
We are going to go on a magic carpet ride and deploy vSphere Integrated Containers in the Homelab today (well, tonight, because it's the only time I can do nerd things…#parentproblems).
How are vSphere Integrated Containers (VIC) different from Docker?
(hint: They aren't, really)
With a traditional Docker configuration, you have a system (physical or virtual) with the Docker binaries installed. You run your docker commands from the command line and access it directly. People run Docker on workstations, on physical servers, on virtual machines, all over the place. Containers are portable, so moving them between Docker installations is generally pretty straightforward and easy. This is one of the big things that makes developing with containers so attractive: develop on your desktop, push to a registry, pull it down somewhere else, voila.
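To make that loop concrete, here's a minimal sketch of the develop/push/pull cycle. The image name and tag (myrepo/myapp:1.0) are just placeholders for illustration; substitute your own registry and repository.

# Build an image from the Dockerfile in the current directory (placeholder name/tag)
docker build -t myrepo/myapp:1.0 .
# Push it up to your registry of choice
docker push myrepo/myapp:1.0
# On any other Docker host, pull it back down and run it
docker pull myrepo/myapp:1.0
docker run -d myrepo/myapp:1.0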
Thinking about the Docker "host" for a second, this is generally a Linux box of some flavor sitting in your datacenter (yes, I know Windows can do containers too now; speaking in generalities…). This box needs to be managed by administrators, patched, monitored, all that typical "stuff". It can potentially have hundreds of containers running on it. If it's a VM, you now have one VM hosting many different types of "services" underneath it, which brings up all sorts of architecture woes that need to be addressed.
vSphere Integrated Containers removes the need for a dedicated container host VM and instead lets you leverage your ESXi instance as a container host directly. Once you've deployed your VIC host, you're presented with a simple Docker API endpoint to run your Docker commands against. Containers are provisioned and treated as actual VMs by the infrastructure. This allows fun things like NSX and vRealize Operations to identify them and work their magic sauce against them. It lets us easily integrate containers into vRealize Automation. It opens up integration with vSAN, not to mention the wide range of administrative capabilities native to vCenter. And it gives your developers a direct API endpoint to work against remotely, allowing you to maintain some semblance of administrative control over your environment while still granting your developer community the agility they need to do their jobs.
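From the developer's side, talking to a VIC host looks just like talking to any other remote Docker API endpoint. As a quick sketch (the address below is a placeholder; we'll get the real endpoint when we deploy the VIC host later in this post):

# Point the Docker client at the VIC host's API endpoint (placeholder address, default port)
$env:DOCKER_HOST = "tcp://vch.example.com:2376"
docker --tls ps
# Or pass the endpoint per command instead of setting an environment variable
docker -H vch.example.com:2376 --tls ps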
Let's get started!
Prerequisites
We're going to need a couple of things to get rolling in this example:
- Docker Toolbox for Windows
- VIC bridge portgroup created
Since we're using a Windows 10 system, we're going to need to install the Docker Toolbox. Head over to Docker and download the installer. Run said installer and move through the screens. When you reach the screen to select the components to install, you can either keep the full Docker environment, including the VirtualBox installation, or uncheck it. I set mine as the following -
You can launch a command prompt and run “docker” to confirm it is actually installed.
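If you want a little more than the bare "docker" output, a couple of quick checks work without any Docker host to talk to yet:

# Report the client version to confirm the binary is on the PATH
docker --version
# "docker version" shows full client details; the server section will error until we have a host to point it at
docker version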
Once that's done, we'll switch over to PowerCLI and add a new VDS port group called "vic-bridge" to act as the container bridge network. We can use a few simple commands like the ones below (I only have one VDS in my environment, so the commands are easy):
# Connect to vCenter
$creds = Get-Credential
Connect-VIServer -Server hlsite1vc01.humblelab.com -Credential $creds
# Grab the (only) VDS in the environment and create the vic-bridge port group on it
$vds = Get-VDSwitch
New-VDPortgroup -Name vic-bridge -VDSwitch $vds
Again, you should see the following result after the command is run
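If you'd rather confirm it from the PowerCLI session itself instead of the UI, a quick Get-VDPortgroup call shows the new port group landed:

# Verify the new bridge port group exists on the VDS
Get-VDPortgroup -Name vic-bridge | Select-Object Name, VDSwitch, VlanConfiguration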
With these items complete, we're ready to move forward!
Deploying vSphere Integrated Containers 1.1 OVA
Starting with version 1.1, the initial deployment of VIC is handled through an OVA. This OVA has a number of functions -
- Hosts the plugin installation scripts for VIC
- Hosts the current GA release of the VIC Engine binaries
- Hosts VMware's container management interface (the Admiral product, also a part of vRealize Automation)
- Hosts VMware's enterprise-grade container registry (the Harbor product)
We'll navigate to the VIC homepage to get started (here), and follow the breadcrumbs to the downloader. It'll require you to sign in to download, but it doesn't cost you anything. Sign up, pull down the OVA.
With PowerCLI's recent move to the PowerShell Gallery, I saw this as a great opportunity to remove the legacy PowerCLI install and deploy the OVA from PowerShell like a boss. We're going to use the Import-VApp cmdlet to handle this, and I've posted the snippet I used below -
# Connect to vCenter
Connect-VIServer -Server 'hlsite1vc01.humblelab.com'

# Define the OVA source, its OVF configuration, and where it's going
$ova = "C:\users\codyd\downloads\vic-v1.1.0-bf760ea2.ova"
$ovacfg = Get-OvfConfiguration $ova
$pass = 'Pass@word1!'
$vmhost = Get-VMHost
$ds = Get-Datastore -Name 'hl-site1-ds01'

# Fill in the required OVA properties
$ovacfg.appliance.root_pwd.value = $pass
$ovacfg.appliance.permit_root_login.value = $true
$ovacfg.IpAssignment.IpProtocol.value = 'IPv4'
$ovacfg.management_portal.deploy.value = $false
$ovacfg.network.ip0.value = '192.168.1.30'
$ovacfg.network.DNS.value = '10.0.0.5'
$ovacfg.network.fqdn.value = 'vic.humblelab.com'
$ovacfg.network.gateway.value = '192.168.1.1'
$ovacfg.network.netmask0.value = '255.255.255.0'
$ovacfg.network.searchpath.value = 'humblelab.com'
$ovacfg.NetworkMapping.Network.value = 'LAN'
$ovacfg.registry.deploy.value = $false
$ovacfg.registry.admin_password.value = $pass
$ovacfg.registry.db_password.value = $pass

# Import the vApp and power it on
Import-VApp -Source $ova -OvfConfiguration $ovacfg -Name 'vic' -VMHost $vmhost -Datastore $ds -DiskStorageFormat Thin
Get-VM -Name vic | Start-VM
Let's talk a little bit about what's going on here:
- Connect to vCenter
- Define the OVA variables (where it is, which host, the OVA configuration object, the target storage, etc…)
- Fill in all the required OVA properties (there's a quick tip after this list on discovering them)
- Import the vApp
- Start the VM
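One quick tip on those OVA properties: if you want to see everything the OVA actually exposes rather than trusting my snippet, the OvfConfiguration object can dump them for you. A small sketch, using the same OVA path as above:

# Dump every user-configurable property the OVA exposes, along with its current value
$ovacfg = Get-OvfConfiguration "C:\users\codyd\downloads\vic-v1.1.0-bf760ea2.ova"
$ovacfg.ToHashTable()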
If all is successful, you should see a result similar to the below -
Give the appliance a bit to boot. Once it's started, we'll navigate to https://vic_appliance_address:9443 to verify that all is well. Assuming we didn't wreck the world with our simple OVA deployment, you should see something like…
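If the page doesn't come up right away, a quick Test-NetConnection from PowerShell will tell you whether the appliance's file server port is even listening yet (192.168.1.30 is the address we assigned in the OVA properties; swap in yours):

# Confirm the appliance is answering on the VIC file server port (9443)
Test-NetConnection -ComputerName 192.168.1.30 -Port 9443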
We're going to download the vic_1.1.0.tar.gz file to a directory of your choice. I recommend creating a directory specifically for VIC components; I'm using C:\VIC. Once it's downloaded, use any of a hundred different tools to untar it (I use 7-Zip Portable) into that directory, and you should see something similar to the following
If we launch a command prompt from this window (Shift + Right Click > Open command window here) and run vic-machine-windows.exe --help, we can see the executable is working as expected.
Opening ESXi Firewall for VIC Communication
We need to create firewall rules to allow the communication VIC requires to function. The VIC team has made this super easy and included an update firewall directive in the vic-machine binary for your platform (vic-machine-windows in our case) to configure the rules automatically.
We'll use vic-machine-windows with the “update firewall” directive to make all of the firewall changes. I issue the following command, and voila - firewalls are updated:
Note: You'll need your vCenter's certificate thumbprint. The first time you issue the command, it will likely fail, and in the failure message you'll see the thumbprint. Run the command again with the thumbprint appended to the end and you should be good.
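If you'd rather grab the thumbprint up front instead of relying on the failed run, a few lines of PowerShell can read it straight off vCenter's SSL endpoint. This is just a sketch; it assumes you're fine trusting whatever certificate vCenter presents for the purpose of reading it:

# Sketch: open an SSL connection to vCenter and read the certificate thumbprint it presents
$vcenter = 'hlsite1vc01.humblelab.com'
$tcp = New-Object System.Net.Sockets.TcpClient($vcenter, 443)
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, { $true })
$ssl.AuthenticateAsClient($vcenter)
# GetCertHashString returns the thumbprint without colons; add them back in
$ssl.RemoteCertificate.GetCertHashString() -replace '(..)(?!$)', '$1:'
$ssl.Dispose()
$tcp.Close()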
vic-machine-windows update firewall --target hlsite1vc01.humblelab.com --user administrator@vsphere.local --password L1k3Th1s1sR3@11y1T! --allow --thumbprint=4C:C5:34:7E:B6:2E:98:7F:7E:46:B1:C0:D9:BD:14:ED:A2:A0:BA:E9
You should be greeted with a screenshot similar to the one below -
Creating our VIC Host
Now that we have our environment totally staged, let's deploy our first VIC host!
In my lab, there were a few things I wanted to do:
- Deploy to a static address
- Use a different bridge network range, since a portion of my lab lives on 172.16.0.0, the default range for the bridge port group
Fortunately, the "create" command is pretty flexible in these areas. We run the following:
vic-machine-windows create ^
--target hlsite1vc01.humblelab.com ^
--user administrator@vsphere.local ^
--password L1k3Th1s1sR3@11y1T! ^
--name HL-VCH01 ^
--compute-resource Tenant-S1 ^
--image-store hl-site1-ds01 ^
--bridge-network vic-bridge ^
--bridge-network-range 192.168.100.0/16 ^
--public-network LAN ^
--public-network-ip 192.168.1.31/24 ^
--public-network-gateway 192.168.1.1 ^
--dns-server 10.0.0.5 ^
--no-tlsverify ^
--thumbprint=4C:C5:34:7E:B6:2E:98:7F:7E:46:B1:C0:D9:BD:14:ED:A2:A0:BA:E9
Now, there are a couple of big callouts in this command that I want to spend some time on…
- I apply a static range to the bridge network using the --bridge-network-range switch; otherwise I get ugly API communication failures, since part of my lab is on a "bad" subnet to use for labbing
- I need the vCenter's certificate thumbprint again for running the command
- If we get stuck, we can use "vic-machine-windows create --help" to get us through some of the complicated parts
Now, once this command starts running, it's going to build our VIC host. If all went well, you should see the following screen after a few minutes -
So now, we have our VIC host - and using the following information, we can connect directly to the API -
Docker environment variables:
DOCKER_HOST=192.168.1.31:2376
Environment saved in hlvch01/hlvch01.env
Connect to docker:
docker -H 192.168.1.31:2376 --tls info
From the Command Prompt or PowerShell, if we issue the "docker -H 192.168.1.31:2376 --tls info" command, we can see we're presented with our Docker API endpoint!
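One side note while we're here: vic-machine isn't just for creating hosts. It also ships ls, inspect, and delete directives for reviewing or tearing down a VCH later. Treat the exact switches below as an assumption and double-check them against the --help output before relying on them; something along these lines:

vic-machine-windows ls --target hlsite1vc01.humblelab.com --user administrator@vsphere.local --password L1k3Th1s1sR3@11y1T! --thumbprint=4C:C5:34:7E:B6:2E:98:7F:7E:46:B1:C0:D9:BD:14:ED:A2:A0:BA:E9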
Running our First Container
It's all downhill from here! We have our VIC host running. We have our API to connect to and issue our "run" commands against. Let's give it a simple test spin!
docker -H 192.168.1.31:2376 --tls run -it ubuntu
This tells our VIC host to pull down the ubuntu image and drops us into the container in interactive mode so we can actually run commands. The Docker API endpoint pulls down the various OS layers for the Ubuntu container VM.
If we exit and do a "docker -H 192.168.1.31:2376 --tls ps -a", we can see the container still exists (although it is shut down, since we just exited it)
And if we look in vCenter, we can see the actual container represented as a VM under the VIC host we created earlier!
We can also restart the container and attach to it through the API (along with all of the other typical Docker commands!)
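For example, the normal lifecycle commands work just like they would against any other Docker endpoint. A quick sketch (substitute the container name or ID that your "docker ps -a" output shows):

# Start the stopped container back up and attach to it
docker -H 192.168.1.31:2376 --tls start my_container
docker -H 192.168.1.31:2376 --tls attach my_container
# And when you're done with it, stop and remove it
docker -H 192.168.1.31:2376 --tls stop my_container
docker -H 192.168.1.31:2376 --tls rm my_container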
Conclusion
This was a lot of words to get us somewhere pretty easily. It's extremely easy to get started leveraging containers in your vSphere environment using the VIC platform. Furthermore, it allows you to leverage your existing environment to begin consuming the next platform of computing.
What's next?
This post was a foundation post, the start of a multi-part series I want to do on containers in the VMware suite of products. As I move forward, keep an eye out for some of the following topics -
- Integration with Harbor and Admiral
- Integration with vRealize Automation Blueprints
- How to leverage NSX with Containers, and Container specific networks
- Working with Photon Platform, Kubernetes, and other "Cloud Native" platforms
As always, thanks for coming by. I hope you learned something! It's now 2:00am, and I'm going to bed!