Continuous Deployment Pt. 2 - Deploying Docker Containers with Ansible from GitLab

Let us assume you have GitLab running somewhere and another server you want to deploy Docker images to. There are a few things that need to be taken care of: maybe you want to deploy the same software in parallel (think of test systems, for example), maybe you need to orchestrate not one but many Docker images into one working system. Also, the target system is probably somewhere "remote" and you want to use a secure channel to deploy your software.


In this article, we are going to show how this can be done using GitLab, Ansible, SSH, Docker and a Docker registry.

Basic knowledge of GitLab, Docker, Docker-Compose and Ansible will be helpful to follow this article. We assume you have GitLab and a Docker registry at your disposal.

To enable GitLab to dockerize an application, push images to a registry and finally deploy images to a remote system, some preparation is required. First, we will lay out the puzzle pieces and in the end combine them all in the .gitlab-ci.yml file.

Setting up GitLab

Let's start with variables and secrets which we do not want to store in the Git repository. For these kinds of secrets, GitLab provides variables, in essence a simple key-value store. You find the variables under Settings / CI / CD of your project. Each variable consists of five properties:

  • Type: Either variable or file. For variables, GitLab will assign the value directly to the key, while for a file, GitLab will write the value to a file and assign the path to that file to the key. This sounds a bit tricky, but it's actually quite nifty.
  • Key: The identifier to access variables by.
  • Value: The value of the variable, for example a private key or password.
  • Protected: Toggles whether the variable is only exposed to pipelines running on protected branches or tags.
  • Masked: Single-line variables can be masked in the log output. This is perfect for passwords.

Among the secrets we want to put in the variables section are the Docker registry username (DOCKER_REGISTRY_USER) and password (DOCKER_REGISTRY_PW) and the SSH private key (ANSIBLE_KEY), which we will use to create a tunnel to the remote system. We also decided to put the SSH configuration (SSH_CFG) into the GitLab variables section, giving us an externalized, reusable "stub".

Gitlab Variables

All but one of these values should be obvious. The remaining one is the SSH configuration. Basically, it is the same as the config file in the .ssh directory.

Host *
    StrictHostKeyChecking no

Host ourTestSystem
    HostName <IP-Address>
    IdentityFile ./keys/keyfile
    User <User>

We decided to turn off strict host key checking. In doing so, SSH will not care whether the fingerprint of the host has changed or is unknown. We did this because we recreate our machines once in a while and have not yet incorporated a step to distribute the key fingerprints into the process.

If you wanted to enable host key checking, you could, for example, add another GitLab variable and write it to the known_hosts file. We're skipping this step.
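As a sketch of that approach: assuming a file-type GitLab variable named SSH_KNOWN_HOSTS (a hypothetical name; it would hold the target's host keys, e.g. the output of ssh-keyscan), the deploy job could start like this, and you could then drop StrictHostKeyChecking from the SSH config:

```yaml
# Hypothetical: SSH_KNOWN_HOSTS is a file-type GitLab variable
# containing the target host's public keys (e.g. from ssh-keyscan).
script:
  - mkdir -p ~/.ssh
  - cat "${SSH_KNOWN_HOSTS}" >> ~/.ssh/known_hosts
  - chmod 600 ~/.ssh/known_hosts
```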

Since this is a full fledged SSH config file, you can use ProxyJump and other commands, too.
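For example, a jump-host setup (host names and addresses here are placeholders) could look like this:

```
Host jumpHost
    HostName <Jump-IP-Address>
    User <User>

Host ourTestSystem
    HostName <IP-Address>
    ProxyJump jumpHost
    IdentityFile ./keys/keyfile
    User <User>
```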

Ansible Playbook

The Ansible playbook describes which steps to carry out on the target machine.

This article builds on top of Part 1 of the series. In the previous part, we described how to use Traefik to make Docker containers available automatically. Thus, we will not explain the details of the Traefik labels here. They are not essential for this example either, so if you are not using Traefik, just leave them out.

In (1) we create a Docker network; the name (network_name) and subnet (subnet) are read from the configuration provided by the inventory file. This way, the following Docker containers can communicate on their own network, isolated from other containers. Sometimes it is good to know in advance which resources will be used, so here you can exert control over which network addresses shall be used.

In (2), we first log in to the Docker registry, so Ansible is able to pull the image from our private Docker registry. For details of the docker_login command, please see the docker_login documentation. In our case, we simply set the registry URL and provide a username and password. Finally, we instruct the command to refresh existing authentications, in case the username or password changes.

The most interesting part is the one which resembles a Docker-Compose file. Here it is the docker_container Ansible command; its syntax is slightly different, but recognizable. For details, have a look at the docker_container documentation. Notice that we specify a Docker registry from which the Docker image shall be pulled. Also, we add the traefik_proxy network and the network specified in (1). The traefik_proxy network is for working with Traefik; if you are not using it, skip it.

The last part (3) takes care of cleanup work: removing exited containers, untagged Docker images and dangling volumes. If you don't want the script to throw out all your old containers and unused images, you need to adjust this section.

---
- hosts: dockerAppNet
  tasks:                                                     # (1)
  - name: Create docker network
    docker_network:
      name: "{{ network_name }}"
      ipam_config:
        - subnet: "{{ subnet }}.0/24"
          gateway: "{{ subnet }}.1"

- hosts: dockerApp
  tasks:                                                     # (2)
  - name: Log into docker registry and force re-authorization
    docker_login:
      registry: "{{ registry_url }}"
      username: "{{ registry_user }}"
      password: "{{ registry_password }}"
      reauthorize: yes
  - name: Create website container
    docker_container:
      name: colamda-website
      image: "{{ registry_url }}/colamda-website:latest"
      pull: yes
      restart_policy: always
      hostname: colamda-website
      exposed_ports:
        - "80"
      networks:
        - name: "traefik_proxy"
        - name: "{{ network_name }}"
          ipv4_address: "{{ subnet }}.2"
      purge_networks: yes
      labels:
        traefik.enable: "true"
        traefik.backend: "colamda-website"
        traefik.frontend.rule: "Host:www.colamda.de"
        traefik.port: "80"
        traefik.docker.network: "traefik_proxy"

- hosts: dockerAppNetClean
  tasks:                                                     # (3)
  - name: Removing exited containers
    shell: docker ps -a -q -f status=exited | xargs --no-run-if-empty docker rm --volumes
  - name: Removing untagged images
    shell: docker images | awk '/^<none>/ { print $3 }' | xargs --no-run-if-empty docker rmi -f
  - name: Removing volume directories
    shell: docker volume ls -q --filter="dangling=true" | xargs --no-run-if-empty docker volume rm

If you want to deploy a system consisting of multiple containers, you can do this as well. Just specify more containers and join them in the same network.
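For instance (the container name and image below are made up for illustration), a second container could be added to the same play and joined to the private network, where it is reachable by the website container under its hostname:

```yaml
  # Hypothetical second container joining the same private network;
  # other containers on that network can reach it as "colamda-db".
  - name: Create database container
    docker_container:
      name: colamda-db
      image: "{{ registry_url }}/colamda-db:latest"
      pull: yes
      restart_policy: always
      hostname: colamda-db
      networks:
        - name: "{{ network_name }}"
          ipv4_address: "{{ subnet }}.3"
```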

Inventory File

The Ansible inventory file is used to provide parameters for the Ansible playbook. Only a few parameters are set, to show how the process works in general. As you can see, for the subnet (2), only the first three octets are given; the last octet and the network mask are specified in the playbook above. The network name is specified in (3) and the Docker registry in (4). Also, we're passing the GitLab variables for the Docker registry username (5) and password (6) on to Ansible, so it can log in. Notice that you can use the same mechanism to convey the GitLab build number (or other values) to your Ansible script to deploy a specific Docker image.

In (1), notice how the value given for ansible_ssh_host corresponds to the name given in the SSH config above (ourTestSystem). This is the SSH host name and determines which SSH config entry is going to be used to connect to the target system.

[dockerApp]
dockerApp ansible_ssh_host=ourTestSystem ansible_python_interpreter=/usr/bin/python3  # (1) 

[dockerApp:vars]
subnet=10.42.7                                               # (2)
network_name=test-net                                        # (3)
registry_url=registry.your.server                            # (4)
registry_user={{ lookup('env','DOCKER_REGISTRY_USER') }}     # (5)
registry_password={{ lookup('env','DOCKER_REGISTRY_PW') }}   # (6)

[dockerAppNet:children]
dockerApp

[dockerAppNetClean:children]
dockerApp
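As a sketch of conveying the build number mentioned above (the variable name image_tag is made up for this example), you could add one more inventory variable that reads GitLab's predefined pipeline number from the environment, and reference it in the playbook instead of the fixed latest tag:

```
[dockerApp:vars]
image_tag={{ lookup('env','CI_PIPELINE_IID') }}
```

In the playbook, the image line would then read image: "{{ registry_url }}/colamda-website:{{ image_tag }}".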

.gitlab-ci.yml

Finally, it's time to fit all the pieces together. In a Docker-in-Docker setup, the following .gitlab-ci.yml file will do the following things:

  1. Build a Docker image and push the image to a registry.
  2. Run the Ansible playbook.

To keep things simple, in this example we assume everything has been built (as in compiled, not dockerized) already, or there is nothing to be built at all.

1) Building and Pushing the Docker image

The first part of the gitlab-ci file is not that exciting. First (1), we build the Docker image. In your environment, you may want to tag your Docker images with a more meaningful tag instead of just latest. The GitLab documentation on GitLab variables and predefined variables will provide further information. Depending on how you build, you could set a version number, build number or other values as the tag.

The second step is to log in to our Docker registry. This step is required only if you have a private registry. In our case, the registry is protected by BasicAuth, and in (2) the docker login command is used with the username and password from the GitLab variables. Since we marked DOCKER_REGISTRY_PW as masked, GitLab will mask the password in the log output.

The last command (3) now pushes the newly created Docker image to the registry.

image: docker:latest

stages:
  - dockerize
  - deploy

dockerize:
  stage: dockerize
  image: docker:stable
  script:
    - docker build -t registry.your.server/your-image:latest .   # (1)
    - docker login https://registry.your.server --username ${DOCKER_REGISTRY_USER} --password ${DOCKER_REGISTRY_PW} # (2)
    - docker push registry.your.server/your-image:latest         # (3)
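As a sketch of the more meaningful tagging mentioned above, using GitLab's predefined variable CI_PIPELINE_IID (the per-project pipeline number; any other predefined variable would work the same way), the build step could look like this:

```yaml
# Tag the image with the pipeline number instead of "latest",
# so every deployment is traceable to a specific build.
script:
  - docker build -t registry.your.server/your-image:${CI_PIPELINE_IID} .
  - docker login https://registry.your.server --username ${DOCKER_REGISTRY_USER} --password ${DOCKER_REGISTRY_PW}
  - docker push registry.your.server/your-image:${CI_PIPELINE_IID}
```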

2) Running the Ansible Playbook

Once the Docker image has been pushed to the registry, we can execute the Ansible script deploying the image to the specified machine and start it.

We're using our own Ansible-in-Docker image (1) to run Ansible playbooks. The image we are using is based on a Dockerfile we found on the internet (see here for the original). As we are using it ourselves, we are going to continue to update it. The image is publicly available on Docker Hub, so GitLab will download it automatically. Visit the Docker Hub page for further details: cbhek/ansible-worker.

Way above, we specified SSH_CFG to be a file in the GitLab variables section. This means the variable SSH_CFG contains the path to the file with the content. With command (2), the config file is created in the project directory.

(3) creates a directory for the SSH private key to be copied into (4). The private key is a file (in the sense of a GitLab variable) too, so this is basically the same as in (2). Next, the permissions are updated (5). This seems like a negligible step, but remember setting up SSH on your local machine: SSH will not accept keys whose permissions are too open. Thus we strip the permissions for everyone except the current user.

Finally, we run the Ansible playbook (6). We switch to the project directory, provide the inventory file and set the SSH parameters. The SSH parameters could be moved into an ansible.cfg file, too. However, Ansible will refuse to read its config from a world-writable location. Thus, we save ourselves the trouble and put them on the command line. Notice how we provide the path to the SSH config file (ssh.cfg, which contains the contents of the GitLab variable SSH_CFG).

After the playbook has run, we delete the private key and ssh.cfg (7).

deploy-prod:
  when: manual
  stage: deploy
  image: cbhek/ansible-worker:1.0.0                              # (1)
  script:
    - cat ${SSH_CFG} > "$CI_PROJECT_DIR/ssh.cfg"                 # (2)
    - mkdir -p "$CI_PROJECT_DIR/keys"                            # (3)
    - cat ${ANSIBLE_KEY} > "$CI_PROJECT_DIR/keys/keyfile"        # (4)
    - chmod og-rwx "$CI_PROJECT_DIR/keys/keyfile"                # (5)
    - cd $CI_PROJECT_DIR && ansible-playbook -i inventory --ssh-extra-args="-F $CI_PROJECT_DIR/ssh.cfg -o ControlMaster=auto -o ControlPersist=30m" playbook.yml                             # (6)
  after_script:
    - rm -r "$CI_PROJECT_DIR/keys"                               # (7)
    - rm "$CI_PROJECT_DIR/ssh.cfg"

Execute the pipeline and watch how the built image is pushed to the registry and Ansible kicks into action, logging into the remote server, pulling the Docker image and starting the container. That's it. We're done.

Further Reading