Blog
3 September 2019
Ruslan Baimuhametov, software engineer

Running JUnit tests with GitLab CI for Kubernetes-hosted apps

Everyone knows how important and essential software testing is — and I believe many readers already do it all the time. Surprisingly, though, it is not that easy to find a good example of combining two well-known tools for this kind of CI/CD task: our favorite GitLab and JUnit. Let’s fill this gap!

Background

First, I’ll define the full context:

  • Since all our applications are running in Kubernetes, I’ll cover testing in the related infrastructure only.
  • We build and deploy images using werf (such an approach also means that Helm is naturally involved in the pipeline).
  • I won’t go into the details of testing itself: in our case, testing is implemented on the customer’s side; we only ensure it runs properly (and that the corresponding report appears in the merge request).

Here is the general order of actions concerning our example:

  1. Building the application — we will omit the description of this step.
  2. Deploying the application to a separate namespace of the Kubernetes cluster and running the tests.
  3. Retrieving the artifacts and parsing the JUnit report in GitLab.
  4. Deleting the previously created namespace.

Now let’s move on to the implementation!

Getting started

GitLab CI

We’ll start with the part of .gitlab-ci.yml describing the deployment of the application and the running of the tests. Since the listing is rather long, I have inserted detailed comments into it:

variables:
# declare the version of werf we are going to use
  WERF_VERSION: "1.0 beta"

.base_deploy: &base_deploy
  script:
# create the namespace in K8s if it isn’t there
    - kubectl --context="${WERF_KUBE_CONTEXT}" get ns ${CI_ENVIRONMENT_SLUG} || kubectl --context="${WERF_KUBE_CONTEXT}" create ns ${CI_ENVIRONMENT_SLUG}
# load werf and deploy — please check docs for details
# (https://werf.io/how_to/gitlab_ci_cd_integration.html#deploy-stage)
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf deploy --stages-storage :local
      --namespace ${CI_ENVIRONMENT_SLUG}
      --set "global.commit_ref_slug=${CI_COMMIT_REF_SLUG:-''}"
# pass the `run_tests` variable
# it will be used during rendering of Helm release  
      --set "global.run_tests=${RUN_TESTS:-no}"
      --set "global.env=${CI_ENVIRONMENT_SLUG}"
# set the timeout (some tests are rather long)
# and pass it to the release
      --set "global.ci_timeout=${CI_TIMEOUT:-900}"
      --timeout ${CI_TIMEOUT:-900}
  dependencies:
    - Build

.test-base: &test-base
  extends: .base_deploy
  before_script:
# create the directory for the coming report
# using $CI_COMMIT_REF_SLUG
    - mkdir /mnt/tests/${CI_COMMIT_REF_SLUG} || true
# a forced workaround, because GitLab requires artifacts
# to reside in its build directory
    - mkdir ./tests || true
    - ln -s /mnt/tests/${CI_COMMIT_REF_SLUG} ./tests/${CI_COMMIT_REF_SLUG}
  after_script:
# delete the release with a Job (and possibly its infrastructure) 
# after the tests are finished
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf dismiss --namespace ${CI_ENVIRONMENT_SLUG} --with-namespace
# we allow failures to happen, but you can decide otherwise
  allow_failure: true
  variables:
    RUN_TESTS: 'yes'
# set the context in werf
# (https://werf.io/how_to/gitlab_ci_cd_integration.html#infrastructure)
    WERF_KUBE_CONTEXT: 'admin@stage-cluster'
  tags:
# using runner with the `werf-runner` tag
    - werf-runner
  artifacts:
# the artifact has to be declared here for it to appear in the pipeline
# and be available for download (e.g. for a closer look)
    paths:
      - ./tests/${CI_COMMIT_REF_SLUG}/*
# artifacts older than one week will be deleted
    expire_in: 7 days
# note: these lines are responsible for parsing the report by GitLab
    reports:
      junit: ./tests/${CI_COMMIT_REF_SLUG}/report.xml

# to make it simple, only two stages are shown here
# you will have more of them in real life,
# at least because of deploying
stages:
  - build
  - tests

build:
  stage: build
  script:
# build stage - check details in the werf docs:
# (https://werf.io/how_to/gitlab_ci_cd_integration.html#build-stage)
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf build-and-publish --stages-storage :local
  tags:
    - werf-runner
  except:
    - schedules

run tests:
  <<: *test-base
  environment:
# this is where the namespace gets its name
# (https://docs.gitlab.com/ce/ci/variables/predefined_variables.html)
    name: tests-${CI_COMMIT_REF_SLUG}
  stage: tests
  except:
    - schedules
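
For GitLab to parse the test results, the report referenced by the junit key above must be in the standard JUnit XML format. As a purely illustrative sketch (the suite and test names here are made up), the report.xml produced by the tests might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="app" tests="3" failures="1" errors="0" time="12.5">
    <testcase classname="api.users" name="creates a user" time="0.42"/>
    <testcase classname="api.users" name="rejects a duplicate email" time="0.38">
      <failure message="expected 409, got 200">AssertionError: status mismatch</failure>
    </testcase>
    <testcase classname="api.auth" name="issues a token" time="0.11"/>
  </testsuite>
</testsuites>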

Kubernetes

Now it is time to create a YAML file (tests-job.yaml) in the .helm/templates directory describing the Job and all the necessary Kubernetes resources. The explanation follows the listing:

{{- if eq .Values.global.run_tests "yes" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tests-script
data:
  tests.sh: |
    echo "======================"
    echo "${APP_NAME} TESTS"
    echo "======================"

    cd /app
    npm run test:ci
    cp report.xml /app/test_results/${CI_COMMIT_REF_SLUG}/

    echo ""
    echo ""
    echo ""

    chown -R 999:999 /app/test_results/${CI_COMMIT_REF_SLUG}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Chart.Name }}-test
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "2"
    "werf/watch-logs": "true"
spec:
  activeDeadlineSeconds: {{ .Values.global.ci_timeout }}
  backoffLimit: 1
  template:
    metadata:
      name: {{ .Chart.Name }}-test
    spec:
      containers:
      - name: test
        command: ['bash', '-c', '/app/tests.sh']
{{ tuple "application" . | include "werf_container_image" | indent 8 }}
        env:
        - name: env
          value: {{ .Values.global.env }}
        - name: CI_COMMIT_REF_SLUG
          value: {{ .Values.global.commit_ref_slug }}
        - name: APP_NAME
          value: {{ .Chart.Name }}
{{ tuple "application" . | include "werf_container_env" | indent 8 }}
        volumeMounts:
        - mountPath: /app/test_results/
          name: data
        - mountPath: /app/tests.sh
          name: tests-script
          subPath: tests.sh
      tolerations:
      - key: dedicated
        operator: Exists
      - key: node-role.kubernetes.io/master
        operator: Exists
      restartPolicy: OnFailure
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: {{ .Chart.Name }}-pvc
      - name: tests-script
        configMap:
          name: tests-script
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Chart.Name }}-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}
  volumeName: {{ .Values.global.commit_ref_slug }}

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.global.commit_ref_slug }}
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  local:
    path: /mnt/tests/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube-master
  persistentVolumeReclaimPolicy: Delete
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}
{{- end }}

What resources does this configuration describe? During deployment, we create a unique namespace for the application (as defined in .gitlab-ci.yml: tests-${CI_COMMIT_REF_SLUG}) and deploy several components there:

  1. A ConfigMap with the test script (a possible npm-side implementation of that script is sketched after this list);
  2. A Job with a description of the pod and the command that actually runs the tests;
  3. A PV and a PVC for storing the test data.
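
By the way, the listing does not show what hides behind npm run test:ci — that part is implemented on the customer’s side. As a purely hypothetical sketch, assuming the tests use Jest with the jest-junit reporter, the relevant part of package.json could look like this:

{
  "scripts": {
    "test:ci": "jest --ci --reporters=default --reporters=jest-junit"
  },
  "jest-junit": {
    "outputDirectory": ".",
    "outputName": "report.xml"
  }
}

Since the script does cd /app before running the tests, report.xml ends up in /app, where the subsequent cp picks it up.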

Note the if statement at the beginning of the manifest. To prevent the other YAML files of the application’s Helm chart from being deployed along with the tests, you have to insert the reverse condition into them:

{{- if ne .Values.global.run_tests "yes" }}
---
Hey, I'm another YAML
{{- end }}

However, if some tests require additional infrastructure (such as Redis, RabbitMQ, Mongo, PostgreSQL…), you can leave the corresponding YAML files enabled and deploy them into the testing environment as well (feel free to modify them as you see fit) — see the sketch below.
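
For example, a Redis instance needed only for tests could live in its own template guarded by the same flag. A minimal sketch (the image and names are illustrative, not part of the original setup):

{{- if eq .Values.global.run_tests "yes" }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}-test-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-redis
  template:
    metadata:
      labels:
        app: test-redis
    spec:
      containers:
      - name: redis
        image: redis:5-alpine
---
apiVersion: v1
kind: Service
metadata:
  name: test-redis
spec:
  selector:
    app: test-redis
  ports:
  - port: 6379
{{- end }}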

Final touch

Currently, werf supports building and deploying via a build server (with gitlab-runner) only, while the pod with the tests runs on the master node. Given these circumstances, you have to create the /mnt/tests directory on the master node and share it with the runner, e.g. via NFS. A detailed example is available in the Kubernetes docs.

We’ll get the following result:

user@kube-master:~$ cat /etc/exports | grep tests
/mnt/tests    IP_gitlab-builder/32(rw,nohide,insecure,no_subtree_check,sync,all_squash,anonuid=999,anongid=998)

user@gitlab-runner:~$ cat /etc/fstab | grep tests
IP_kube-master:/mnt/tests    /mnt/tests   nfs4    _netdev,auto  0       0

Another possibility is to create an NFS share directly on the gitlab-runner node and then mount it into the pods — a rough sketch of such a PersistentVolume follows.
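
In that case, the local volume in the PersistentVolume above could be replaced with an NFS one. A rough sketch (the server address is a placeholder, just like the IPs above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.global.commit_ref_slug }}
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  nfs:
    server: IP_gitlab-runner
    path: /mnt/tests
  persistentVolumeReclaimPolicy: Retain
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}

Note that the nodeAffinity section is no longer needed here, since the volume is not tied to the master node anymore.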

Explanatory note

You may ask: what is the point of creating a Job if you could simply run the test script right in the shell runner? The answer is quite obvious:

Some tests require infrastructure (MongoDB, RabbitMQ, PostgreSQL, and so on) to verify the functionality. Our approach is a unified solution that makes integrating such additional instances easy. As a bonus, we get a standard deployment approach (even if it involves NFS and the extra mounting of directories).

Result

What would be the result of applying the prepared configuration?

The merge request will show a summary of the tests executed in its latest pipeline:

Click on the error to get more info:

NB: The attentive reader will notice that we are testing a Node.js application while the screenshots show a .NET one. Don’t be surprised: we found no issues in the original application while writing this article, but some were revealed in another one.

Conclusion

As you can see, it’s quite easy!

If you already have a working shell builder and don’t need Kubernetes, wiring testing into it is even easier than described here (a minimal sketch follows). You can find examples for Ruby, Go, Gradle, Maven, and other tools in the GitLab CI documentation.
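
As a purely illustrative sketch for a plain shell runner (the test command and report path are assumptions specific to your project):

test:
  stage: test
  script:
    # any command that produces a JUnit XML report
    - npm run test:ci
  artifacts:
    reports:
      junit: report.xml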

Afterword

This article was originally posted on Medium. New texts from our engineers are published here, on blog.flant.com. Please follow our Twitter or subscribe below to get the latest updates!