3 September 2019
Ruslan Baimuhametov, software engineer

Running JUnit tests with GitLab CI for Kubernetes-hosted apps

Everyone knows how important and essential software testing is; I believe many readers already test all the time. Surprisingly, it is not that easy to find a good example of making two well-known CI/CD companions, our favorite GitLab and JUnit, work together. Let’s fill this gap!


First, I’ll define the full context:

  • Since all our applications run in Kubernetes, I’ll cover testing in the related infrastructure only.
  • We build and deploy images using werf (such an approach also means that Helm is naturally involved in the pipeline); a minimal sketch of the image config follows this list.
  • I won’t go into details of testing itself: in our case, testing is implemented on the customer’s side, and we only ensure that it runs properly (and that the corresponding report shows up in the merge request).
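For reference, here is what the werf config behind the image named application (the Job template later in this article relies on that name) might look like. This is a hypothetical minimal sketch for a Node.js app; the project name, base image, and build commands are assumptions, not the customer’s actual config:

project: my-app
configVersion: 1
---
image: application
from: node:12
git:
# import the repository sources into the image
- add: /
  to: /app
shell:
  install:
# install dependencies during the werf install stage
  - cd /app && npm ci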

Here is the general order of actions concerning our example:

  1. Building an application — we will omit the description of this step.
  2. Deploying the application to a separate namespace of the Kubernetes cluster and running the tests there.
  3. Retrieving artifacts and parsing JUnit report via GitLab.
  4. Deleting a previously created namespace.

Now let’s move on to the implementation!

Getting started

GitLab CI

We’ll start with the part of .gitlab-ci.yaml describing the deployment of the application and running the tests. The code is rather long, therefore I inserted detailed comments into it:

variables:
# declare the version of werf we are going to use
  WERF_VERSION: "1.0 beta"

.base_deploy: &base_deploy
  script:
# create the namespace in K8s if it isn’t there
    - kubectl --context="${WERF_KUBE_CONTEXT}" get ns ${CI_ENVIRONMENT_SLUG} || kubectl create ns ${CI_ENVIRONMENT_SLUG}
# load werf and deploy (please check the werf docs for details)
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf deploy --stages-storage :local
      --namespace ${CI_ENVIRONMENT_SLUG}
      --set "global.commit_ref_slug=${CI_COMMIT_REF_SLUG:-''}"
# pass the `run_tests` variable;
# it will be used during the rendering of the Helm release
      --set "global.run_tests=${RUN_TESTS:-no}"
      --set "global.env=${CI_ENVIRONMENT_SLUG}"
# set the timeout (some tests are rather long)
# and pass it to the release
      --set "global.ci_timeout=${CI_TIMEOUT:-900}"
      --timeout ${CI_TIMEOUT:-900}
  dependencies:
    - Build

.test-base: &test-base
  extends: .base_deploy
  before_script:
# create the directory for the coming report
    - mkdir /mnt/tests/${CI_COMMIT_REF_SLUG} || true
# a forced workaround, because GitLab requires artifacts
# to reside in its build directory
    - mkdir ./tests || true
    - ln -s /mnt/tests/${CI_COMMIT_REF_SLUG} ./tests/${CI_COMMIT_REF_SLUG}
  after_script:
# delete the release with the Job (and possibly its infrastructure)
# after the tests are finished
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf dismiss --namespace ${CI_ENVIRONMENT_SLUG} --with-namespace
# we allow failures to happen, but you can decide otherwise
  allow_failure: true
  variables:
    RUN_TESTS: 'yes'
# set the kube context to be used by werf
    WERF_KUBE_CONTEXT: 'admin@stage-cluster'
  tags:
# using a runner with the `werf-runner` tag
    - werf-runner
  artifacts:
# you first have to create an artifact to see it in the pipeline
# and download it (e.g. for a more thoughtful study)
    paths:
      - ./tests/${CI_COMMIT_REF_SLUG}/*
# artifacts older than one week will be deleted
    expire_in: 7 day
# note: these lines are responsible for parsing the report by GitLab
    reports:
      junit: ./tests/${CI_COMMIT_REF_SLUG}/report.xml

# to make it simple, only two stages are shown here;
# you will have more of them in real life,
# at least because of deploying
stages:
  - build
  - tests

Build:
  stage: build
  script:
# the build stage (check the details in the werf docs)
    - type multiwerf && source <(multiwerf use ${WERF_VERSION})
    - werf version
    - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
    - werf build-and-publish --stages-storage :local
  tags:
    - werf-runner
  except:
    - schedules

run tests:
  <<: *test-base
  environment:
# the point of naming the namespace this way is explained below
    name: tests-${CI_COMMIT_REF_SLUG}
  stage: tests
  except:
    - schedules


Now it is time to create a YAML file with the description of a Job (tests-job.yaml) and all the necessary Kubernetes resources in the .helm/templates directory. See the explanation below:

{{- if eq .Values.global.run_tests "yes" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tests-script
data:
  tests.sh: |
    echo "======================"
    echo "${APP_NAME} TESTS"
    echo "======================"

    cd /app
    npm run test:ci
    cp report.xml /app/test_results/${CI_COMMIT_REF_SLUG}/

    echo ""
    echo ""
    echo ""

    # make the report readable for the runner user (cf. anonuid=999 in the NFS exports below)
    chown -R 999:999 /app/test_results/${CI_COMMIT_REF_SLUG}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Chart.Name }}-test
  annotations:
    # run the Job as a Helm hook after the release is installed/upgraded
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "2"
    # make werf stream the Job logs into the CI output
    "werf/watch-logs": "true"
spec:
  # kill the Job when the CI timeout is exceeded
  activeDeadlineSeconds: {{ .Values.global.ci_timeout }}
  backoffLimit: 1
  template:
    metadata:
      name: {{ .Chart.Name }}-test
    spec:
      containers:
      - name: test
        command: ['bash', '-c', '/app/tests.sh']
{{ tuple "application" . | include "werf_container_image" | indent 8 }}
        env:
        - name: env
          value: {{ .Values.global.env }}
        - name: CI_COMMIT_REF_SLUG
          value: {{ .Values.global.commit_ref_slug }}
        - name: APP_NAME
          value: {{ .Chart.Name }}
{{ tuple "application" . | include "werf_container_env" | indent 8 }}
        volumeMounts:
        - mountPath: /app/test_results/
          name: data
        - mountPath: /app/tests.sh
          subPath: tests.sh
          name: tests-script
      tolerations:
      - key: dedicated
        operator: Exists
      - key: node-role.kubernetes.io/master
        operator: Exists
      restartPolicy: OnFailure
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: {{ .Chart.Name }}-pvc
      - name: tests-script
        configMap:
          name: tests-script
          # the script has to be executable to be run via `bash -c`
          defaultMode: 0755
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Chart.Name }}-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}
  volumeName: {{ .Values.global.commit_ref_slug }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.global.commit_ref_slug }}
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  # a local volume pinned to the master node, where /mnt/tests lives
  local:
    path: /mnt/tests/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube-master
  persistentVolumeReclaimPolicy: Delete
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}
{{- end }}

What resources does this configuration describe? During deployment, we create a unique namespace for the application (named tests-${CI_COMMIT_REF_SLUG}, as stated in .gitlab-ci.yaml) and deploy several components into it:

  1. ConfigMap with a test script (more on that script right after this list);
  2. A Job with a description of the pod and the command that actually runs the tests;
  3. PV and PVC where the test data will be stored.
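The test script in the ConfigMap calls npm run test:ci and expects it to leave a JUnit-style report.xml behind. The wiring of that npm script lives on the customer’s side, but hypothetically (the reporter and its options below are an assumption, not the actual setup) it could look like this in package.json:

{
  "scripts": {
    "test:ci": "mocha --reporter mocha-junit-reporter --reporter-options mochaFile=report.xml"
  }
}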

Note the if statement at the beginning of the manifest. To prevent the other YAML files of the Helm chart (those describing the application itself) from being deployed along with the tests, you have to wrap them in the reverse condition:

{{- if ne .Values.global.run_tests "yes" }}
Hey, I'm another YAML
{{- end }}

Yet if some tests require additional infrastructure (like Redis, RabbitMQ, Mongo, PostgreSQL…), you can leave the corresponding YAML files enabled so that they are deployed into the testing environment as well (of course, feel free to modify them as you see fit).
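For example, a hypothetical .helm/templates/redis.yaml left outside both conditions would be rendered in every environment, including the test namespace, so the Job could reach Redis at redis:6379 (the names and image below are illustrative, not taken from the real project):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Chart.Name }}-redis
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}-redis
    spec:
      containers:
      - name: redis
        image: redis:5-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: {{ .Chart.Name }}-redis
  ports:
  - port: 6379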

Final touch

Currently, building and deploying with werf are supported from a build server (with gitlab-runner) only, while the testing pod runs on the master node. Given these circumstances, you have to create the /mnt/tests directory on the master node and share it with the runner, e.g. via NFS. A detailed example is available in the Kubernetes docs.

We’ll get the following result:

user@kube-master:~$ cat /etc/exports | grep tests
/mnt/tests    IP_gitlab-builder/32(rw,nohide,insecure,no_subtree_check,sync,all_squash,anonuid=999,anongid=998)

user@gitlab-runner:~$ cat /etc/fstab | grep tests
IP_kube-master:/mnt/tests    /mnt/tests   nfs4    _netdev,auto  0       0

The other possibility is to create a shared NFS directory directly on the gitlab-runner and then mount it to pods.
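In that case, the PersistentVolume from tests-job.yaml would switch from a local volume to an NFS one, and the master-node affinity and tolerations would no longer be needed, since any node can mount the share. A sketch of such a PV (IP_gitlab-runner is a placeholder, just like the addresses above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.global.commit_ref_slug }}
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  # mount the share exported by the gitlab-runner host
  nfs:
    server: IP_gitlab-runner
    path: /mnt/tests
  persistentVolumeReclaimPolicy: Delete
  storageClassName: {{ .Chart.Name }}-{{ .Values.global.commit_ref_slug }}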

Explanatory note

You may ask, what is the point of creating a Job if you can simply run the test script right in the shell runner? The answer is quite obvious:

Some tests require infrastructure (such as MongoDB, RabbitMQ, PostgreSQL, and so on) to verify the functionality. Our approach is a unified solution that makes it easy to integrate such additional instances. As a bonus, we get a standard deployment approach (even if it involves NFS and extra mounting of directories).


What would be the result of applying the prepared configuration?

The merge request will show a summary of the tests executed in its latest pipeline:

Click on the error to get more info:

NB: The attentive reader will notice that we are testing a Node.js application while the screenshots show a .NET one. Don’t be surprised: no issues have been found in the original application (at the moment of writing this article), but some have been revealed in another one.
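What GitLab actually parses here is the report.xml artifact declared in the junit section of .gitlab-ci.yaml. For reference, a minimal JUnit-style report looks roughly like this (a hand-written illustration; real files are produced by the test runner):

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="app" tests="2" failures="1" time="0.42">
    <testcase classname="auth" name="logs the user in" time="0.21"/>
    <testcase classname="auth" name="rejects a bad password" time="0.21">
      <failure message="expected 401, got 200">assertion details go here</failure>
    </testcase>
  </testsuite>
</testsuites>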


As you can see, it’s quite easy!

If you already have a working shell runner and don’t need Kubernetes, you can add such testing to it even more effortlessly than described here. There are examples for Ruby, Go, Gradle, Maven, and some other languages and tools in the GitLab CI documentation.
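For example, a plain shell-runner job for a Go project might boil down to something like this (a sketch in the spirit of the GitLab docs; it assumes go-junit-report is installable and $GOPATH/bin is on the PATH):

go tests:
  stage: tests
  script:
    - go get -u github.com/jstemmer/go-junit-report
# convert the verbose `go test` output into a JUnit report
    - go test -v ./... 2>&1 | go-junit-report > report.xml
  artifacts:
    when: always
    reports:
      junit: report.xml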


This article was originally posted on Medium. New texts from our engineers are published here, on our blog. Follow our Twitter to get the latest updates!