Azure development environment in Docker

As a Windows user I find it difficult to work with Kubernetes in PowerShell, so I have built a Docker image that moves execution to Linux while the coding stays in an IDE on Windows.

The image described in this post has the Azure CLI, kubectl and Helm installed.

A Service Principal is used to authenticate from the container. Here is a post that explains how this is done.
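
If you do not yet have a service principal with a certificate, the Azure CLI can create one; a minimal sketch, where the name is only an example, the appId in the output goes into AZURE_SERVICE_PRINCIPAL_ID and the generated .pem (private key plus certificate in one file) is what gets mounted into the container later:

az ad sp create-for-rbac --name azure-dev-env-sp --create-cert
# note the appId, tenant and the path of the generated certificate file in the output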

Build the image

The Dockerfile looks like this:

FROM ubuntu:latest

RUN apt update && \
    apt upgrade -y && \
    apt install curl libicu-dev -y

# azure cli
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash && \
    az upgrade --yes

# kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && \
    install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# helm
RUN curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

RUN cat >/login.sh <<EOL
#!/bin/bash
az login --service-principal -u \$AZURE_SERVICE_PRINCIPAL_ID -p \$AZURE_KEY_PATH --tenant \$AZURE_TENANT
EOL

# remove \r and make it executable
RUN sed -i 's/\r$//g' login.sh && \
    chmod 711 login.sh

Installing kubectl and Helm is only needed for working with Kubernetes and AKS. Even though the base image is light – around 70 MB – the final image ends up at roughly 1 GB, thanks to the 700 MB that the Azure CLI brings to the table.

The login.sh script created above is used to log in to Azure from the container.

Now the image can be built from the folder where the Dockerfile is saved:

docker build . --tag=markokole/azure-dev-env

Create and run the container

The environment file envs holds values for the variables AZURE_SERVICE_PRINCIPAL_ID, AZURE_TENANT and AZURE_KEY_PATH.

The value of AZURE_KEY_PATH has to match the path the key is mounted to inside the container.

AZURE_SERVICE_PRINCIPAL_ID=
AZURE_TENANT=
AZURE_KEY_PATH=/.azure/key

Run the container:

docker run -itd --rm --name azure-local `
           --env-file envs `
           -v C:\marko\git\azure\bicep:/local-git `
           -v C:\marko\keys\developerCli.pem:/.azure/key:ro `
           markokole/azure-dev-env

Two volumes are mapped into the container:

  • The work folder, so scripts can be written in a Windows IDE and executed from the container.
  • The key and certificate (in one file), mapped to a file inside the container so that logging in to Azure is possible.

The container is entered with the following command:

docker exec -it azure-local /bin/bash

Once inside the container, the login script can be executed:

./login.sh

The output, if the login is successful, is a JSON object, as described in the post about logging in to Azure services with a service principal.
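
Once logged in, the Kubernetes tooling baked into the image can be pointed at an AKS cluster; a quick sanity check, where the resource group and cluster name are placeholders for your own:

az aks get-credentials --resource-group <resource-group> --name <aks-cluster>
kubectl get nodes
helm list --all-namespaces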

Make the image available on Docker Hub

Log in to Docker Hub and push the image, in case anyone is interested in using it:

docker image push markokole/azure-dev-env

Fargate in AWS ECS with Terraform

This post describes how to provision a container in AWS ECS from Terraform. The container’s image is fetched from Docker Hub; for demonstration purposes, the nginx image is used.

The ECS service launch type described here is Fargate. This type keeps things simple, since no EC2 instances have to be provisioned or managed.

The environment for provisioning with Terraform is a Docker container. More on that here. In order for this to work, AWS user credentials have to be generated as mentioned in the Administration section.

Administration

Create a user in AWS IAM and create an access key for that user. Store the access key ID and secret access key somewhere safe, since they will be used by Terraform.

Add the following policies to the user:

  • AmazonVPCFullAccess
  • AmazonECS_FullAccess

You can fine-tune the policies as you wish; for the purpose of this demo they are acceptable.
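
Terraform picks the keys up through its AWS provider; a minimal sketch, assuming the keys are exported as the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables and that the region is only an example:

# provider.tf - credentials come from the environment, so nothing secret is committed
provider "aws" {
  region = "eu-west-1"
}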

VPC

Preparing the VPC and the security around it is a must, so the minimum needed to get the container running is described here.

This Terraform file creates a VPC, an Internet Gateway, a Route, a Subnet and a Security Group, which are all needed to reach the published container from the outside world. Fine-tuning of the VPC services is ignored for simplicity's sake. Port 80 is opened to the world to be able to test the container.
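
A condensed sketch of what such a file can contain (the resource names are chosen to match the references in ecs.tf below, and the CIDR ranges are only examples):

resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id
}

# route all outbound traffic from the VPC's main route table through the gateway
resource "aws_route" "internet_access" {
  route_table_id         = aws_vpc.vpc.main_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}

resource "aws_subnet" "subnet" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# port 80 open to the world, all outbound traffic allowed
resource "aws_security_group" "sg" {
  name   = "allow-http"
  vpc_id = aws_vpc.vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}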

ECS

Once the VPC is in place, the rest is quite simple. The ecs.tf shows how to get everything working.

Create cluster

Create the ECS cluster. This step is launch-type independent.

resource "aws_ecs_cluster" "ping" {
  name = "ping"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

Define task

Define what the container should look like: the resources needed, the container image, the ports, and so on.

resource "aws_ecs_task_definition" "task" {
  family                        = "service"
  network_mode                  = "awsvpc"
  requires_compatibilities      = ["FARGATE", "EC2"]
  cpu                           = 512
  memory                        = 2048
  container_definitions         = jsonencode([
    {
      name      = "nginx-app"
      image     = "nginx:latest"
      cpu       = 512
      memory    = 2048
      essential = true  # if an essential container fails, the whole task stops; every task needs at least one essential container
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ])
}

The container_definitions argument can also be fed with the Terraform file function, which makes the code easier to read. Here is an example of a Terraform file using the function, and here is the JSON file the function reads.
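
As a sketch, with the JSON file name being only an example, the argument then becomes a one-liner:

# instead of the inline jsonencode(...) shown above
container_definitions = file("${path.module}/container-definitions.json")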

Service

Now we can finally deploy the service, that is, create the container and put it to use:

resource "aws_ecs_service" "service" {
  name              = "service"
  cluster           = aws_ecs_cluster.ping.id
  task_definition   = aws_ecs_task_definition.task.id
  desired_count     = 1
  launch_type       = "FARGATE"
  platform_version  = "LATEST"

  network_configuration {
    assign_public_ip  = true
    security_groups   = [aws_security_group.sg.id]
    subnets           = [aws_subnet.subnet.id]
  }
  lifecycle {
    ignore_changes = [task_definition]
  }
}

The service is attached to a specific cluster and a specific task definition. The launch type is FARGATE. A public IP will be assigned, and the service will run in a specific subnet, secured by a specific security group.
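
Provisioning itself is the usual Terraform workflow, run from the folder holding the .tf files:

terraform init
terraform plan
terraform apply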

Once everything is provisioned, we can check the result.

Go into the AWS Console and open the ECS service. Make sure you are in the right region. Click on Clusters, find the cluster and click on it. Under Tasks you should see the provisioned task.

Clicking on the task ID gives you the task details. Under Network is the public IP. Copy it, open it in a browser, and nginx should welcome you.

Docker, AWS, Python3 and boto3

The idea behind this setup is to have an independent environment for integrating Amazon Web Services’ objects and services with Python applications.

The GitHub repository with example can be found here. The README.md will probably serve you better than this blog post if you just want to get started.

The environment is offered in the form of a Docker container, which I am running on Windows 10. The repository above has a Dockerfile available, so the container can be built anywhere.

Python 3 is the language of choice for working against AWS, and for that the boto3 library is needed. It is the AWS SDK for Python and is used to integrate Python applications with AWS services.

Bare minimum

To get started, all that is needed is an access key and a secret key (which requires an IAM user with assigned policies), Python and an installed boto3.

The policies the user gets assigned are going to be reflected in the Python code. It can be frustrating at the beginning to assign exactly the right policies, so for testing purposes it may be easier to give the user full rights to a service and narrow them down later.

Where to begin

The best service to begin with is the object storage service AWS S3, where you can manipulate buckets (folders) and objects (files). You also see immediate results in the AWS Console, the costs are minimal, and there are no services running “under” S3 that need attention first. My repository has a simple Python package which lists all available buckets.
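
Listing buckets needs only a few lines; a minimal sketch (not the package from the repository), assuming the default credential chain, for example environment variables or ~/.aws/credentials, is already set up:

import boto3

# the client picks up credentials from the default chain (env vars, ~/.aws/credentials, ...)
s3 = boto3.client("s3")

# list_buckets returns a dict; the bucket metadata sits under the "Buckets" key
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])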

Credentials and sessions

To integrate a Python application with AWS services, an IAM user is needed, along with the user's access key and secret key. They can be provided in different ways; in this case I have used sessions, which allow the user (dev, test, prod, …) to be changed at runtime. This example of a credentials file with profiles gives the general idea of how to set up multiple sessions.

The Python test file shows how to initialize a session.
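
In boto3 terms that looks roughly like this; the profile name is an assumption and must match an entry in the credentials file:

import boto3

# "dev" must be a profile defined in ~/.aws/credentials (e.g. a [dev] section with its own keys)
session = boto3.Session(profile_name="dev")

# clients and resources created from the session inherit its credentials
s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])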

Exception handling

Handling exceptions with Python 3 and boto3 is demonstrated in the test package. Note that the exception being caught is a boto3 exception.
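
The pattern, sketched here against S3 and not copied from the test package, is to catch the SDK's ClientError and inspect the error code:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # head_bucket raises ClientError if the bucket is missing or access is denied
    s3.head_bucket(Bucket="some-bucket-that-may-not-exist")
except ClientError as err:
    # the error code ("404", "403", "NoSuchBucket", ...) tells you what went wrong
    print("S3 call failed:", err.response["Error"]["Code"])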

Further work

The environment is set up: PyCharm can be used for software development, while Docker executes the tests.

There is nothing stopping you from developing a Python application.

After gaining some confidence, it would be smart to review the policies and create policies that allow a user or group exactly what they need to be allowed.

Dilemma

How far will boto3 take an organization? Is it smarter to use, for example, Terraform when building VPCs and launching EC2 instances?

It is worth making that decision and using an Infrastructure-as-Code tool at a higher level to automate faster, and perhaps using boto3 for more granular work like manipulating objects in S3 or dealing with users and policies.