23 August 2019
Timofey Kirillov, software developer

Improve your CI/CD experience with werf and existing Dockerfiles

Better late than never. The story of how we almost made a major mistake by not implementing support for building images using regular Dockerfiles.

werf is a GitOps tool that integrates nicely into any CI/CD system and provides complete application lifecycle management, allowing you to:

  • build and publish images,
  • deploy an application into Kubernetes,
  • clean up unused images according to policies.
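In a CI/CD system, these three functions typically map onto a few pipeline jobs. Here is a hypothetical .gitlab-ci.yml fragment (the job names and stages are illustrative, and the exact werf flags vary between versions):

```yaml
stages:
- build
- deploy
- cleanup

build:
  stage: build
  script:
  - werf build       # build the images described in werf.yaml
  - werf publish     # push the built images to the Docker Registry

deploy:
  stage: deploy
  script:
  - werf deploy      # deploy the application to Kubernetes

cleanup:
  stage: cleanup
  script:
  - werf cleanup     # delete outdated images according to policies
```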

Here is the philosophy of our tool: combine low-level tools into a single unified system that lets DevOps engineers control applications. Existing ready-to-use tools (like Helm and Docker) should be employed where possible. But what if there is no suitable solution for a task? The answer is simple: write and maintain your own tool to get things done.

Background: custom image builder

The same story happened with the werf image builder: Dockerfile — the de facto standard for describing the process of building images — proved too restrictive for our needs. This issue became critical in the early stages of our project. While developing a tool for dockerizing applications, we quickly realized that Dockerfile isn’t suitable for such specific tasks as:

  1. Following the standard workflow for building typical small web applications: a) install system-wide application dependencies, b) install the bundle of application-specific libraries, c) build assets, d) and, most importantly, update the code in the image quickly and efficiently.
  2. Creating a new layer by applying a patch to modified files when changes are made.
  3. Rebuilding a dependent stage when certain files have been modified.

So, that was the list of our initial requirements. Today our builder has many additional neat features.

All in all, it didn’t take long for us to start developing a custom DSL in our preferred programming language (see below). It had to meet the defined objectives: describe the building process by stages and determine the dependencies of stages on files. It was complemented by a builder that turned the DSL into the final objective — a ready-to-use Docker image. We first implemented the DSL in Ruby and later reworked it into a YAML file when we switched to Go.

Ruby config for werf — the old one (the project itself was known as dapp in those times)
YAML config for werf — the current one

The concept of the builder has changed over time. In the beginning, we simply generated a temporary Dockerfile “on the fly” from our configuration. Then we moved to running build instructions in temporary containers and committing the result.

NB: By now, our Stapel builder that uses the YAML configuration (demonstrated above) has turned into a fairly powerful tool in its own right. While its detailed description deserves an article of its own, you can find more details in the docs for now.
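To give a flavor of it, a minimal Stapel-style werf.yaml might look like the sketch below (the directive names follow the werf docs, while the application and file names are hypothetical):

```yaml
project: myapp
configVersion: 1
---
image: app
from: ubuntu:18.04
git:
- add: /
  to: /app
  stageDependencies:
    # rebuild the install stage only when the dependency manifest changes
    install:
    - package.json
shell:
  install:
  - apt-get update && apt-get install -y nodejs npm
  - cd /app && npm ci
  setup:
  - cd /app && npm run build
```

Note how the stageDependencies directive expresses requirement #3 from the list above: the install stage is rebuilt only when package.json changes, while ordinary code changes are applied as a quick patch.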

Wait a minute!

A little while later, we realized we had made a serious mistake by not adding the ability to build images from standard Dockerfiles and to integrate them into the established infrastructure for full application management (i.e. for building, deploying and deleting images). The question “How could we possibly build a tool for deploying images to Kubernetes without supporting Dockerfile — the prevailing way of describing images for most projects?” still bothers us…

Instead of answering this question, we propose a solution to it. What if you already have a Dockerfile (or a set of Dockerfiles) and want to use werf?

NB: By the way, why would you want to use werf at all? For a start, it offers a variety of nice features that enhance and glue together your CI/CD processes, such as:

  • a complete application management cycle, including the deletion of images;
  • the ability to build several images from a single config;
  • an improved process for deploying Helm-compatible charts.

A full list of features is available on the project page.

So, until recently we would kindly ask you to port your Dockerfiles to our config format if you were interested in using werf. But now we are happy to tell you, “Let werf build your Dockerfiles!”


The first full implementation of this feature has been introduced in werf v1.0.3-beta.1.

The general routine is quite simple: the user specifies the path to an existing Dockerfile in the werf config and then runs the werf build command. That is all: werf will build the image.

Here is a hypothetical example. Let’s define the following Dockerfile in the application’s root:

FROM ubuntu:18.04
RUN echo Building ...

Then we define a werf.yaml which uses that Dockerfile:

configVersion: 1
project: dockerfile-example
---
image: ~
dockerfile: ./Dockerfile

That’s it! Now we can execute werf build, and werf will build the image.

By the way, you can also define the following werf.yaml for the simultaneous building of images using various Dockerfiles:

configVersion: 1
project: dockerfile-example
---
image: backend
dockerfile: ./dockerfiles/Dockerfile-backend
---
image: frontend
dockerfile: ./dockerfiles/Dockerfile-frontend

Passing additional build parameters, such as --build-arg and --add-host, is also supported via the werf config. A full description of the Dockerfile image configuration is available in the docs.
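For instance, the following sketch passes a build argument and an extra host entry (the directive names follow the werf docs; the values here are hypothetical):

```yaml
configVersion: 1
project: dockerfile-example
---
image: backend
dockerfile: ./dockerfiles/Dockerfile-backend
args:
  APP_VERSION: "1.2.3"          # equivalent to --build-arg APP_VERSION=1.2.3
addHost:
- "registry.local:10.0.0.5"     # equivalent to --add-host registry.local:10.0.0.5
```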

How it works

The common Docker caching of local layers is active during image building. What’s important is that werf also integrates the Dockerfile config into its infrastructure. What does that mean?

  1. Well, every image built from a Dockerfile consists of one specific stage named dockerfile (you can read more about stages in werf here).
  2. For this stage, werf calculates a signature that depends on the contents of the Dockerfile configuration. Changes in the Dockerfile lead to a change in the signature of the stage; in that case, werf rebuilds the stage using the new Dockerfile config. If the signature stays the same, werf uses the cached image.
  3. You can publish the built images with werf publish (or werf build-and-publish) and use them for deploying to Kubernetes. Published images in the Docker Registry will be cleaned up via the common werf mechanisms: old images (older than N days) and images associated with non-existent Git branches will be deleted automatically, and other policies may be applied as well.
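The signature mechanism from point 2 can be illustrated with a toy sketch. This is not werf’s actual algorithm, just a model of the idea: hash everything the stage depends on, and rebuild whenever the hash changes:

```python
import hashlib

def dockerfile_stage_signature(dockerfile_text: str, build_args: dict) -> str:
    # Toy model: the stage signature is a digest of everything the stage
    # depends on -- here, the Dockerfile text and the build arguments.
    h = hashlib.sha256()
    h.update(dockerfile_text.encode())
    for name in sorted(build_args):
        h.update(f"{name}={build_args[name]}".encode())
    return h.hexdigest()

sig1 = dockerfile_stage_signature("FROM ubuntu:18.04\nRUN echo Building ...", {})
sig2 = dockerfile_stage_signature("FROM ubuntu:18.04\nRUN echo Changed ...", {})

# A changed Dockerfile yields a new signature, so the stage is rebuilt;
# an unchanged one yields the same signature, so the cached image is reused.
assert sig1 != sig2
```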

You can learn more about these peculiarities of werf in the corresponding docs.

Tips and caveats

1. ADD does not support external URLs

Currently, the ADD instruction does not support external URLs: werf will not initiate a rebuild in response to a change of the resource at the specified URL. We plan to add this feature soon.

2. You cannot include .git into an image

In fact, adding the .git folder to your image is a bad idea, and here is why:

  • The presence of .git in the final image violates the 12-factor app ideas: the final image has to be linked to a single commit, and it should not be possible to run it on an arbitrary commit.
  • .git increases the size of an image (the repository can grow large because big files were once added to it and later deleted). In contrast, the size of a working tree associated with a specific commit does not depend on the history of Git operations. Furthermore, adding and then deleting the .git folder from the final image won’t work: a new layer will be generated anyway (this is just how Docker works).
  • Docker may launch a needless rebuild even though the same commit (yet originated from different working trees) is being processed. For example, GitLab creates separate clone directories when parallel building is enabled. Unnecessary rebuilding is caused by the differences between these cloned copies of the same repository (even when building the exact same commit).

The last point directly impacts the usage of werf. Werf needs a build cache to run some commands (e.g. werf deploy). When executing those commands, werf calculates stage signatures for the images specified in werf.yaml, so they must be present in the build cache or the command will fail. A dependence of stage signatures on the contents of .git would make the cache vulnerable to changes in irrelevant files — an intolerable fault for werf (more details are here).

Anyway, adding only the specific files you need via ADD and COPY instructions is still a good practice. It increases the efficiency and reliability of the resulting Dockerfile and improves the resilience of its build cache to irrelevant changes in Git.
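As an illustration of that practice, a Dockerfile that copies only what the build needs might look like this (the base image and paths are hypothetical):

```dockerfile
FROM node:12-alpine
WORKDIR /app

# Copy only the dependency manifests first, so this layer stays cached
# until the dependencies themselves change.
COPY package.json package-lock.json ./
RUN npm ci

# Copy only the application sources -- not the whole repository,
# and certainly not the .git directory.
COPY src/ ./src/

CMD ["node", "src/index.js"]
```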


Our path to a custom builder for special needs was hard, honest and straightforward: we preferred to develop our own solution with custom syntax instead of building workarounds on top of the default Dockerfile. And this approach has its advantages: the Stapel builder does a great job!

However, while creating the custom image builder, we completely overlooked the case of using existing Dockerfiles. This flaw has now been fixed. In the future, we plan to enhance Dockerfile support along with our custom Stapel builder — for distributed building and for building images inside a Kubernetes cluster (that is, by utilizing runners in Kubernetes, similarly to kaniko).

So if you happen to have some good ol’ Dockerfiles, don’t hesitate to try werf out!


This article has been originally posted on Medium. New texts from our engineers are published here as well. Please follow our Twitter or subscribe below to get the latest updates!