Getting Started With AWS Elastic Container Service (ECS) and Fargate


I mentioned previously the idea of moving a number of my lab components out of the physical Homelab and into other environments. As I shift to a more consumption-driven model, I don’t want to have to worry about “how” the infrastructure is built. I want to declare what I consume, and let the platform manage it as needed. In this post I want to focus on offloading my container workloads into AWS Elastic Container Service and Fargate. In the future, I’m excited to check out how EKS works with Fargate as well; but for now - let’s start simple!

Containers vs Kubernetes (lolwut?)

I get this question often… “Do you go with a container strategy or a Kubernetes strategy?”. It’s important as we get started that we understand the terms.

A container, by definition, is a grouping of all the software, code, and components required to run an application in a single “unit” (it’s much easier to just say, “It’s a container!”). By far the most popular containers right now are Docker-based containers, but there are many other flavors of containers out in the wild. In the Docker world, which I’m going to focus on today, we build these containers by leveraging a Dockerfile.

You can think of a Dockerfile as a recipe: a collection of required components and the order in which they are executed in order to build that single unit of software.
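To make that concrete, here’s a minimal sketch of what that recipe might look like for a small web app. Everything here - the base images, paths, and image name - is illustrative, not my actual application:

```bash
# A minimal two-stage Dockerfile sketch: build the app, then serve the
# compiled output from a lightweight nginx image (illustrative only)
cat > Dockerfile <<'EOF'
FROM node:lts AS build
WORKDIR /app
COPY . .
RUN npm install && npm run build

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
EOF

# Build the image (our "single unit" of software) and run it locally
docker build -t my-app:latest .
docker run -d -p 80:80 my-app:latest
```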

Kubernetes, on the other hand, is a scheduler for containerized workloads. Kubernetes manages the lifecycle, state, networking, and access of Kubernetes pods in a declarative, infrastructure-as-code fashion. These pods most typically run Docker containers. Kubernetes understands how to recycle problematic/unhealthy pods, as well as how to control access across multiple internal (to Kubernetes) networks. It provides a highly resilient API and is extensible through a number of different plugins and overlays. It really automates the consumption of the containerized application layer across the typical constructs (compute, storage, networking).

So the conversation is less about Containers vs Kubernetes, and more about what type of container strategy you are executing.

In my case, I have uses for both! In this article, we’re going to be focused on the Docker image use case leveraging AWS Elastic Container Service (ECS) and Fargate.

How Does ECS + Fargate Work?

I found a really good diagram on the Fargate website that summarizes things far better than I ever could…

The key takeaway is that Fargate fits my use case of creating a consumable platform that requires minimal time building the supporting infrastructure. I could stand up EC2 instances and add them to an ECS cluster for consumption - but then ultimately I’m still managing those EC2 instances (well… as much as one would manage an EC2 instance - it’s not a lot - but I’m a man of principle). Fargate allows me to establish a control plane that manages the underlying cluster resources on its own. Fargate will expand as needed to support the inbound requests and scale requirements.

Fargate is going to cost a little bit more than doing all of this on an EC2 instance would - but it’s worth it for the reduction in overhead, as well as the ability to scale up my deployments as needed with minimal effort. Finally, and arguably most important - going the Fargate route will teach me something new, which I always put a high value on!

Let’s Take It For a Spin

First we’ll head into AWS and hit our Services drop-down to select ECS. One thing I appreciate about AWS search is that it’s somewhat context aware. There isn’t a direct “Fargate” service, but if I type it in, ECS (Elastic Container Service) resolves.

We’ll select ECS and move forward. As a new user, we’ll drop right onto a landing page that talks about ECS. We’ll click “Get Started”.

Inside of ECS, there are a few configuration items we need to work through. The getting started guide walks us through all of these steps.

  • Define a Container and our Task
  • Create a Service
  • Configure our Cluster (Fargate, in our case)

By default, the container definition gives you some rough parameters that can be used. In my case, I’m going to be deploying a custom container that runs an Angular application. I’ll select the “Custom” option, and configure it.

There are a ton of options that come up when you hit “Configure”. I’m going to cover only a few of them here - I highly recommend you dig into many of these settings on your own!

  • Image - I’m going to use a Docker image that I wrote, hosted in my VMware Harbor repository. In this case, I use the Harbor URL.
  • Private Repository - Not applicable in this case; I’m using a public repository.
  • Memory Limits - I’m going to keep this low since I want to keep cost down, and I know my application is lightweight.
  • Port Mappings - My container is running on port 80, so I’m going to map port 80 inbound.

Note the “Advanced Container Configuration” at the bottom. This exposes a number of other advanced configurations for your container - things like health checks and environment configuration. Most importantly, this is where you can configure environment variables for your container’s environment. In my case, I add my AWS keys as environment variables so my application can “pull back” AWS content, since this application specifically interacts with AWS. Pay close attention to this space, as a lot of the configuration items here could be important (i.e. networking!)
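As a side note, you can sanity check the same environment-variable pattern locally before ever pushing the image - it’s roughly what the Advanced Container Configuration does for you on the ECS side (the variable values below are placeholders, obviously):

```bash
# Run the container locally with the same environment variables that
# ECS would inject (placeholder values - never put real keys in docs!)
docker run -d -p 80:80 \
  -e AWS_ACCESS_KEY_ID=AKIAEXAMPLE \
  -e AWS_SECRET_ACCESS_KEY=examplesecret \
  my-app:latest
```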

The next section is our Task Definition. You can think of a Task Definition as the execution parameters for a container definition. This area holds the name of the Task Definition, the networking mode being used (awsvpc in this case), the Task Execution Role (an AWS IAM role), and the launch type compatibility - Fargate by default in our case, because we want Fargate to provide the management plane. We also configure the total memory and CPU available to the task. These are important for our cost management.
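For the CLI-curious: under the covers, the wizard is assembling a task definition JSON and registering it for you. A rough, hedged sketch of an equivalent Fargate task definition - the family name, role ARN, and image URL below are all placeholders - might look like this:

```bash
# Write an example Fargate task definition (names/ARNs are placeholders)
cat > taskdef.json <<'EOF'
{
  "family": "my-app-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "custom",
      "image": "harbor.example.com/library/my-app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
EOF

# Register it with ECS
aws ecs register-task-definition --cli-input-json file://taskdef.json
```

Note how the small cpu/memory values line up with the cost-management point above.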

On the next screen, we configure how this set of tasks is exposed as a service. If we wanted to run multiple replicas of the same container, we would define more tasks. In my case I’ll stick with 1, but if I wanted to run multiple, I could leverage an Application Load Balancer to balance traffic across those containers. In reality, I could use an ALB even with just one application to get some better application-level context - but it’s not needed at this point.
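If you wanted to script this step instead of clicking through it, the rough CLI equivalent is aws ecs create-service. A sketch, with placeholder subnet and security group IDs and the same desired count of 1 that I chose in the console:

```bash
# Create a Fargate service running one copy of the task
# (subnet and security group IDs are placeholders)
aws ecs create-service \
  --cluster default \
  --service-name my-app-service \
  --task-definition my-app-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"
```

Bumping --desired-count is how you’d get those multiple replicas behind an ALB.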

On our next screen, we’ll configure our actual cluster settings. Fairly straightforward: our name, the VPC we want to use, and the subnet within that VPC. I’m universally choosing “create new” in this case.

The final screen of our configuration is an overall summary of the settings for this deployment. Hit the Create button at the bottom and your cluster will start to spin up!

The resulting screen shows us all of the tasks currently being executed against this cluster. Our Fargate cluster is creating - namely the necessary ECS components (security groups, VPCs, subnets) as well as the Fargate management plane. We can actually watch this happen by observing a few key components in AWS. For example, if we head to our VPCs, we can see that ECS created a new VPC for us automatically with a 10.0.0.0/16 CIDR block…
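If you’d rather poke at this from a terminal, one quick way (assuming your credentials and region are already set) is to filter your VPCs by that CIDR:

```bash
# Find the VPC that the ECS wizard created, by its primary CIDR block
aws ec2 describe-vpcs --filters Name=cidr,Values=10.0.0.0/16
```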

Likewise, we could go look at the security groups and other components of our ECS/Fargate environment that have been created along the way. What we don’t see are any EC2 hosts, because Fargate is managing this capability behind the scenes.

When we go back into ECS, we can see our default cluster has been created, with 1 service and 1 task running.

If we click into the cluster, we can browse each of the tabs to get important information about it. Navigate to the Tasks tab and click on our task. This shows us the current running state of the task, including the connectivity details (both the internal IP and external IP), the container name, and the specific image it’s running. If we expand our container (custom, in this case), we can see all of the advanced configurations we put in place previously.
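The same details are available from the CLI for the terminal-inclined; this sketch assumes the console-created cluster is named “default”:

```bash
# List the running tasks in the cluster, then inspect one of them
aws ecs list-tasks --cluster default
aws ecs describe-tasks --cluster default --tasks <task-id-from-list-tasks>
```

The describe-tasks output includes the network attachments, which is where those IP details live.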

If I launch the public IP, I can see my application is actually running.
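A quick smoke test from a terminal works just as well as a browser - substitute your task’s actual public IP:

```bash
# Expect an HTTP 200 from the container listening on port 80
curl -I http://<task-public-ip>/
```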

I could take the public IP and bind it to an entry in Route53 to give it a friendlier DNS entry if I wanted to - but more on that another time! Returning to ECS, I can select the Logs tab to see the standard output (STDOUT) from the container - the same output I would normally see from a docker logs command run locally on a host.
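The CLI equivalent of that Logs tab is CloudWatch Logs. When the wizard wires up logging, it typically uses a log group along the lines of /ecs/<task-family> - treat the group name below as an assumption and check your task definition for the real one:

```bash
# Tail the container's STDOUT from CloudWatch Logs (AWS CLI v2)
aws logs tail /ecs/my-app-task --follow
```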

Updating a Task

In order to update a running task, I can navigate into Task Definitions on the left and select my Task Definition. Inside this menu, I can see all of the currently defined revisions of the task. I can check the box and select “Create a New Revision” to load my new version of the definition. If I’ve used the latest tag on my Docker image, it will pull down my most recent container push.

Within this menu, I can change any of the original settings I configured against my cluster. Once I apply these settings and hit Create, my revision is automatically bumped - to version “4”, in my case.
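From the CLI, a new revision is simply another register-task-definition call against the same family (reusing the sketch file from earlier) - ECS bumps the revision number for you:

```bash
# Re-registering under the same family creates the next revision
aws ecs register-task-definition --cli-input-json file://taskdef.json

# Confirm which revision is now the latest
aws ecs describe-task-definition --task-definition my-app-task \
  --query 'taskDefinition.revision'
```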

From here, I can select the Actions drop-down within the Task Definition and select Run Task. This brings up a screen where we can choose how we want the Task Definition to execute. In my case, I’m going to use my existing VPC again, and one of the ECS-created subnets.
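The scripted version of this step is aws ecs run-task - again, the subnet and security group IDs below are placeholders for the ECS-created ones:

```bash
# Launch a one-off copy of the latest task definition revision on Fargate
aws ecs run-task \
  --cluster default \
  --launch-type FARGATE \
  --task-definition my-app-task \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"
```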

After selecting Run Task we’ll be magically transported back to the Clusters view where we can see our new task provisioning.

In a few moments, our second task will be running, and we can see the new IP address for this task. The existing task can then be stopped and removed (see the sketch below). We have successfully performed a lifecycle operation against our previous image, replacing it with a new version!
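You just need the old task’s ID (or full ARN) from list-tasks:

```bash
# Stop the old task once the new one is up and healthy
aws ecs stop-task --cluster default --task <old-task-id>
```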

And with that, we wrap up our first steps into Fargate… for now!

Wrapping Up

I dig ECS and Fargate because they truly focus on giving the end user an experience centered on consuming Docker as a service with minimal effort. It was very straightforward to get started with - and it provided quick value. I’m certain I’m only scratching the surface. Next time we revisit this topic, we’ll do it from the CLI and look at how to provision docker-compose files onto ECS! Stay tuned!