Fargate in AWS ECS with Terraform

This post describes how to provision a container in AWS ECS from Terraform. The container's image is fetched from Docker Hub. For demonstration purposes, the nginx image will be used.

The ECS service launch type described here is Fargate. This launch type keeps things simple, since AWS manages the underlying infrastructure and no EC2 instances need to be maintained.

The environment for provisioning with Terraform is a Docker container. More on that here. For this to work, AWS user credentials have to be generated, as described in the Administration section.

Administration

Create a user in AWS IAM and generate an access key for the user. Store the access key ID and secret access key somewhere safe, since they will be used in Terraform (a provider sketch follows at the end of this section).

Add the following policies to the user:

  • AmazonVPCFullAccess
  • AmazonECS_FullAccess

You can fine-tune the policies as you wish; for demo purposes this is acceptable.
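
The stored keys are what the Terraform AWS provider authenticates with. A minimal sketch of the provider configuration, assuming the keys are passed in as input variables (the region and variable names are assumptions; exporting them as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables works just as well):

variable "access_key" {
  type      = string
  sensitive = true
}

variable "secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "eu-central-1"  # the region to deploy to
  access_key = var.access_key  # the access key ID created in IAM
  secret_key = var.secret_key  # the secret access key created in IAM
}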

VPC

Preparing the VPC and security groups is a must, so the minimum needed to get the container running is described here.

This Terraform file creates a VPC, an Internet Gateway, a Route, a Subnet and a Security Group, all of which are needed to reach the published container from the outside world. Fine-tuning of the VPC services is skipped for simplicity's sake. Port 80 is opened to the world so the container can be tested.
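
The linked file is not reproduced here, but a minimal sketch of what it needs to contain could look like this. The names sg and subnet match the references made from the ECS service further down; the remaining resource names and the CIDR ranges are assumptions:

resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "gateway" {
  vpc_id = aws_vpc.vpc.id
}

# Route all outbound traffic through the Internet Gateway.
resource "aws_route_table" "route_table" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gateway.id
  }
}

resource "aws_subnet" "subnet" {
  vpc_id     = aws_vpc.vpc.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_route_table_association" "association" {
  subnet_id      = aws_subnet.subnet.id
  route_table_id = aws_route_table.route_table.id
}

# Allow HTTP in from anywhere, and any traffic out (needed to pull the image from Docker Hub).
resource "aws_security_group" "sg" {
  vpc_id = aws_vpc.vpc.id

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}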

ECS

Once the VPC is in place, the rest is quite simple. The file ecs.tf shows how to get everything working.

Create cluster

Create the ECS cluster. This step is independent of the launch type.

resource "aws_ecs_cluster" "ping" {
  name = "ping"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

Define task

Define what the container should look like: the resources needed, the container image, ports, and so on.

resource "aws_ecs_task_definition" "task" {
  family                        = "service"
  network_mode                  = "awsvpc"
  requires_compatibilities      = ["FARGATE", "EC2"]
  cpu                           = 512
  memory                        = 2048
  container_definitions         = jsonencode([
    {
      name      = "nginx-app"
      image     = "nginx:latest"
      cpu       = 512
      memory    = 2048
      essential = true  # if an essential container stops or fails, the whole task stops; each task needs at least one essential container
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ])
}

The container_definitions argument can also use the Terraform function file(). This makes the code easier to read. Here is an example of a Terraform file using the function, and here is the JSON file the function uses as the argument.
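
Neither linked file is reproduced here, but the idea is roughly the following sketch. The file name task-definition.json is an assumption, and the JSON file holds the same array that jsonencode builds inline above:

resource "aws_ecs_task_definition" "task" {
  family                   = "service"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE", "EC2"]
  cpu                      = 512
  memory                   = 2048

  # The JSON file contains the same container definition array as the inline example.
  container_definitions = file("task-definition.json")
}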

Service

Now we can finally deploy the service: create the container and put it to use.

resource "aws_ecs_service" "service" {
  name              = "service"
  cluster           = aws_ecs_cluster.ping.id
  task_definition   = aws_ecs_task_definition.task.id
  desired_count     = 1
  launch_type       = "FARGATE"
  platform_version  = "LATEST"

  network_configuration {
    assign_public_ip  = true
    security_groups   = [aws_security_group.sg.id]
    subnets           = [aws_subnet.subnet.id]
  }
  lifecycle {
    ignore_changes = [task_definition]
  }
}

The service is attached to a specific cluster and a specific task definition. The launch type is FARGATE. A public IP will be assigned, and the service will run in a specific subnet, secured by a specific security group.

Once everything is provisioned, we can check the result:

Go into the AWS Console and open the ECS service. Make sure you are in the right region. Click on Clusters, find the cluster and click on it. Under Tasks you should see the provisioned container.

Clicking on the task ID should give you the task details. Under Network is the public IP. Copy it and open it in a browser. Nginx should welcome you.

Docker, AWS, Python3 and boto3

The idea behind it is to have an independent environment for integrating Amazon Web Services' objects and services with Python applications.

The GitHub repository with the example can be found here. The README.md will probably serve you better than this blog post if you just want to get started.

The environment is offered in the form of a Docker container, which I am running on Windows 10. The above repository has a Dockerfile available, so the container can be built anywhere.

Python 3 is the language of choice for working against AWS, and for that the boto3 library is needed. It is the AWS SDK for Python and is used to integrate Python applications with AWS services.

Bare minimum

To get started, all that is needed is an access key and a secret key (which requires an IAM user with assigned policies), Python, and boto3 installed.
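
With those pieces in place, a bare-minimum script can talk to AWS directly. A minimal sketch, with placeholder keys and an assumed region (passing keys in code is only for illustration; environment variables or a credentials file are the usual choice):

import boto3

# Placeholder credentials only; never hard-code real keys in source code.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",
    aws_secret_access_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    region_name="eu-central-1",
)
# The client is now ready for calls such as s3.list_buckets().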

The policies assigned to the user are reflected in what the Python code is allowed to do. It can be frustrating at the beginning to pick exactly the right policies, so for testing purposes you might give the user full rights to a service and narrow them down later.

Where to begin

The best service to begin with is the object storage service AWS S3, where you can manipulate buckets (folders) and objects (files). You also see immediate results in the AWS console. Costs are minimal, and there are no services running “under” S3 that need attention first. My repository has a simple Python package which lists all available buckets.
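
The package itself is not reproduced here, but its core boils down to a sketch like this (it assumes credentials are already configured, for example through the credentials file described in the next section):

import boto3

# Print the name of every bucket visible to the configured credentials.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])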

Credentials and sessions

To integrate a Python application with AWS services, an IAM user is needed, together with the user's access key and secret access key. They can be provided in different ways; in this case I have used sessions, which allow the user (dev, test, prod…) to be switched at runtime. This example of a credentials file with sessions gives the general idea of how to create multiple sessions.

The Python test file shows how to initialize a session.
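
The test file is not reproduced here; a minimal sketch of initializing a session from a named profile looks like this (the profile name dev is an assumption and has to match a section in the credentials file):

import boto3

# Switch between users (dev, test, prod, ...) by picking a different profile at runtime.
session = boto3.Session(profile_name="dev")
s3 = session.client("s3")  # clients created from the session use that profile's credentials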

Exception handling

Handling exceptions in Python 3 and with boto3 is demonstrated in the test package. Note that the exception being caught is a boto3 exception.
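
The test package is not reproduced here; a typical pattern is to catch botocore's ClientError, the exception boto3 raises when an AWS call fails (the bucket name below is just a placeholder):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    s3.head_bucket(Bucket="a-bucket-that-does-not-exist")
except ClientError as error:
    # The error code and message come straight from the AWS response.
    print(error.response["Error"]["Code"], error.response["Error"]["Message"])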

Further work

The environment is set up: PyCharm can be used for software development, while Docker executes the tests.

There is nothing stopping you from developing a Python application.

After gaining some confidence, it would be smart to review the policies and create policies that allow a user or group exactly what they need to be allowed.

Dilemma

How far will boto3 take an organization? Is it smart to consider using, for example, Terraform when building VPCs and launching EC2 instances?

It is worth making that decision and using an Infrastructure-as-Code tool at a higher level to automate faster, and perhaps using boto3 for more granular work like manipulating objects in S3 or dealing with users and policies.