
Local development in Kubernetes with werf 1.2 and minikube
This article discusses preparing and deploying a Kubernetes-based infrastructure for local development using a basic application as an example. Local development means you can change your app’s source code and instantly see how it works in K8s running on your computer.
What brought about this need? We are often challenged to find a balance between efficiency and cost of resources when providing infrastructure support services to our customers. Money is usually a limiting factor when implementing multiple environments (stage, dev, test, review, etc.) for developers. So, in order to reduce costs, we use local environments on developer workstations to supplement or even replace dynamic environments.
This article is intended primarily for developers who need to check their work results quickly, but find it difficult to do so due to the inflexibility and rigidity of the testing/building system. You will learn how to create a local test environment with all the necessary tools using werf and minikube.
Introduction
Kubernetes-based development imposes certain limitations on applications in terms of networking between services, use of persistent storage, routing of traffic from ingresses to endpoints, etc. When the developer needs feedback on the application behavior in production-like conditions, all these infrastructural aspects affect the final result and the speed of development. To test code changes, the developer usually has to commit them to a remote repository and deploy the application to some kind of test environment. This approach is well-known, popular, and works fine. But it does have several drawbacks, for instance:
- The pipeline can be pretty long, so it takes a long time for the changes to reach their destination.
- The number of environments may be limited, so you must check whether a particular environment is currently being used.
In order to speed up the process and render it more flexible, you can run the test environment locally on your PC. This article describes our approach to creating a local environment for developers with an infrastructure identical to the one used in the Kubernetes production cluster.
Briefly about werf
werf, our Open Source tool for building CI/CD processes, deserves a special mention. It uses the Git repository as a single source of truth (this principle is called Giterminism). With werf, you can build app containers, publish them to the registry, deploy Helm charts to Kubernetes, and track the status of the deployment process until it successfully completes.
We follow the Infrastructure as Code approach by describing the infrastructure declaratively. As a result, the basic pattern of application deployment using werf is to store both the infrastructure description and the app code in the same Git repository. This renders the system fully deterministic and ensures that the resulting system state is identical to the one described in Git. The changes in the Git repository are automatically propagated to the target environment. The same can be done with the local environment using werf. Of course, this is not the only possible course of action, but it’s the one that worked perfectly in our case.
Installing dependencies
Let’s see how you can run an application locally in Kubernetes using a basic application as an example. For that, you can use minikube, a lightweight Kubernetes distribution that runs a single-node cluster on a local machine.
Note: You can find the results of all of the steps below in the example repository, along with the instructions on how to run the app so that you can see the results right away.
First, you have to install all the necessary components: Docker, minikube, werf, and kubectl (the links to the installation instructions for all these tools can be found below). Each tool supports a variety of Linux platforms as well as macOS and Windows. This tutorial was only tested with Ubuntu 20.04. However, you can easily adapt it to your own operating system/use scenario.
- Docker: https://docs.docker.com/engine/install/
- minikube: https://minikube.sigs.k8s.io/docs/start/
- werf: https://werf.io/installation.html
- kubectl: https://kubernetes.io/docs/tasks/tools/
Test environment
Now that all the basic components are installed, let’s move on to the application you will deploy and develop. To do so, you will use one of the werf demo applications written in Node.js. First, you have to adapt the repository for deploying the app to the minikube cluster. To do so, we will follow these steps:
- define all the build stages in werf.yaml;
- configure the project’s Helm charts;
- configure the environment variables passed on to the app Deployment;
- run the application in the test environment and see if it works.
Let’s look at the steps above in greater detail.
Building: werf vs. Docker
A bit of theory
By default, Docker stores all its layers and rebuilds them only if the files used in the corresponding layer have changed. Such an approach is easy to follow, practical, and intuitive. The only downside is that the layers are stored locally on the machine on which the docker build command was run.
If the build process runs as part of a CI pipeline, the layers will end up on one of the available workers (for example, one of the gitlab-runners in GitLab’s case). Now, suppose you make a commit with minimal code changes. The pipeline starts, and the build job ends up on another worker. In that case, Docker will build the image from scratch! As a result, the assembly speed drops dramatically. Not what you expected, right?
In werf, the build process is different. It is defined in the werf.yaml configuration file and is broken down into stages with precise functions and purposes. Each stage corresponds to an intermediate image (like layers in Docker). However, werf pushes it to the registry and thus renders it available to all workers. The resulting image corresponds to the last stage for a particular Git state and a particular werf.yaml configuration.
For more information about this and other features, see our article comparing Docker with werf.
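In practice, this means a single command both builds the stages and publishes them to the shared registry, so any worker pointed at the same repo can reuse them (the registry address below is illustrative):
# Build all images described in werf.yaml and publish their stages
# to the shared container registry:
werf build --repo registry.example.com/myproject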
Usage
Let’s take a look at the test application. There is a Dockerfile that werf uses to build an image. Since only one stage (the last one) is published when building from a Dockerfile, you will not get any speed advantage by using multiple workers. In this case, use the alternative Stapel syntax instead of the Dockerfile and rewrite the instructions in werf.yaml:
project: werf-guide-app # Project name.
configVersion: 1 # The config version to use (currently only version 1 is supported).
---
image: builder # The name of the image to build.
from: node:12-alpine # The base image.
git: # The section with directives for adding source files from the git repository.
- add: / # The source path in the repository.
  to: /app # The destination path in the image.
  excludePaths: # A set of paths or masks used to exclude files and directories from the image.
  - local
  - .helm
  - Dockerfile
  stageDependencies: # Conditions for re-running the assembly instructions if certain files in the repository change.
    install: # For the Install stage.
    - package.json
    - package-lock.json
    setup: # For the Setup stage.
    - "**/*"
shell: # Shell assembly instructions.
  install: # For the Install stage.
  - cd /app
  - npm ci
  setup: # For the Setup stage.
  - cd /app
  - npm run build
---
image: backend
from: node:12-alpine
docker: # A set of directives for the final image manifest (aka Docker).
  WORKDIR: /app
git:
- add: /
  to: /app
  includePaths:
  - package.json
  - package-lock.json
  stageDependencies:
    install:
    - package.json
    - package-lock.json
shell:
  beforeInstall:
  - apk update
  - apk add -U mysql-client
  install:
  - cd /app
  - npm ci --production
  setup:
  - mkdir -p /app/dist
import: # Import from other images (and artifacts).
- image: builder # The name of the image to import from.
  add: /app # The absolute path to the file or directory in the source image.
  to: /app # The absolute path in the target image.
  after: setup # The stage after which the files are imported during the build.
---
image: frontend
from: nginx:stable-alpine
docker:
  WORKDIR: /www
git:
- add: /.werf/nginx.conf
  to: /etc/nginx/nginx.conf
import:
- image: builder
  add: /app/dist
  to: /www/static
  after: setup
---
image: mysql
from: mysql:5.7
A detailed description of all possible directives can be found in the werf documentation.
Dockerfile will do just fine for local development. However, we recommend switching to the alternative werf syntax. This renders the process more flexible and, most importantly, effectively caches all the components, significantly reducing the time required for incremental builds.
Organizing Helm charts
In order to deploy a project to Kubernetes, you have to break it down into strictly declarative objects that Kubernetes can interact with. In general, there may be a lot of such objects, but in your case, you will only need a few of them: Deployment, StatefulSet, Service, Job, Secret, and Ingress. Each object has a specific role. You can learn more about them in the Kubernetes and API documentation.
Helm, a package manager and templating tool, provides more flexibility in configuring, installing, and updating applications in Kubernetes. With Helm, you can describe your application as a chart (which can contain one or more subcharts) with its own parameters and configuration templates. A chart with its own specific parameters and settings installed in Kubernetes is called a release.
Let’s organize our project’s Helm charts so that the main application components have their own subchart, while the infrastructure component descriptions are stored in separate subcharts.
In our case, there is only one infrastructure component (MySQL), but there can be more, such as Redis, RabbitMQ, PostgreSQL, Kafka, etc.
.helm/
├── Chart.lock
├── charts
│ ├── app
│ │ ├── Chart.yaml
│ │ └── templates
│ │ ├── deployment.yaml
│ │ ├── _envs_app.tpl
│ │ ├── ingress.yaml
│ │ ├── job-db-setup-and-migrate.yaml
│ │ ├── secret.yaml
│ │ └── service.yaml
│ └── mysql
│ ├── Chart.yaml
│ └── templates
│ ├── _envs_database.tpl
│ ├── mysql.yaml
│ ├── secret.yaml
│ └── service.yaml
├── Chart.yaml
├── secret-values.yaml
└── values.yaml
5 directories, 16 files
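The Chart.lock file hints at how the subcharts are wired together through the parent .helm/Chart.yaml. A minimal sketch of what it might contain, assuming the subcharts are declared as dependencies with enable/disable flags (versions and wiring here are illustrative; the condition keys match the --set app.enabled/mysql.enabled switches used later in this article):
apiVersion: v2
name: werf-guide-app
version: 1.0.0
dependencies:
- name: app              # Subchart with the main application components.
  version: 1.0.0
  condition: app.enabled
- name: mysql            # Infrastructure subchart.
  version: 1.0.0
  condition: mysql.enabled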
As the listing shows, there are two values files in the root directory. Those who have used Helm before probably know what the values.yaml file is for. However, you may wonder what secret-values.yaml is for. The use of environment variables and third-party solutions (Vault, Consul, etc.) contradicts the principle of Giterminism (which underlies werf), according to which Git is the only source of truth. For that reason, we made it possible to store encrypted secrets in the Git repository.
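The tooling for this ships with werf; a typical flow for creating the encryption key and editing the encrypted values file might look like this:
# Generate a new encryption key and make it available to werf:
export WERF_SECRET_KEY=$(werf helm secret generate-secret-key)
# Interactively edit .helm/secret-values.yaml; werf decrypts it on open
# and transparently encrypts it again on save:
werf helm secret values edit .helm/secret-values.yaml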
werf can render the subcharts of application components and populate their parameters depending on the environment to which the Helm release is being deployed. The same Go templating engine is used as in Helm. Now let’s look at how the .helm/charts/app/templates/_envs_app.tpl template handles the environment variables passed to the app Deployment manifest:
- name: MYSQL_HOST
  value: "{{ pluck .Values.global.env .Values.envs.MYSQL_HOST | first | default .Values.envs.MYSQL_HOST._default }}"
Here, werf inserts into the value field the entry of the .Values.envs.MYSQL_HOST map that corresponds to the .Values.global.env variable (falling back to the _default key if no entry matches).
Example: consider the following snippet of the global values.yaml file:
...
app:
  envs:
    MYSQL_HOST:
      _default: mysql
      local: mysql2
      production: mysql3
...
Here, the MYSQL_HOST variable is a so-called string-keyed map: an object that maps keys to values of simpler types. In this case, the values are strings, but you can use more complex structures; for example, the envs map-object contains the MYSQL_HOST map-object, and so on.
Suppose that the following expression is used in the manifest:
- name: MYSQL_HOST
  value: "{{ pluck .Values.global.env .Values.envs.MYSQL_HOST | first | default .Values.envs.MYSQL_HOST._default }}"
In this case, if you render it with the werf render --env local --set app.enabled=true command, you will get a ready-to-use snippet:
...
- name: MYSQL_HOST
  value: mysql
...
Note:
When an entire chart is rendered, a map object whose name corresponds to the name of the subchart is passed to the subchart from the global Values context. Note that the _envs_app.tpl file lives in the app subchart, which has no local values.yaml file, and it references the .Values.envs.MYSQL_HOST variable, which is not in the global Values context. At the same time, the global Values does contain the .Values.app.envs.MYSQL_HOST variable. While rendering, werf inserts the global .Values.app map object into the local .Values of the app subchart.
This way, you can template and deploy charts to different environments with different parameters without creating multiple values.yaml files. This will prove very handy.
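For example, rendering the same chart for the production environment picks the matching key from the map (a sketch; the expected output follows from the values shown above):
werf render --env production --set app.enabled=true
# The rendered Deployment would then contain:
#   - name: MYSQL_HOST
#     value: mysql3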
If your project contains secret-values.yaml along with values.yaml, you will need to set the WERF_SECRET_KEY environment variable to decrypt and render the manifests (see the documentation for details). After decryption, werf will merge keys with the same name into a single key. For example, say you have the following snippet in values.yaml:
...
app:
  envs:
    MYSQL_HOST:
      _default: mysql
      local: mysql2
      production: mysql3
    MYSQL_PASSWORD:
      local: Qwerty123
...
… and this one in secret-values.yaml:
...
app:
  envs:
    MYSQL_PASSWORD:
      _default: 100037f97cb2629e2cab648cabee0b33d2fe381d83a522e1ff2e5596614d50d3a055
      production: 1000b278b1a888b2f03aba0d21ad328007ab7a63199cb0f4d1d939250a3f6ab9d77d
...
It means werf will decrypt secret-values.yaml and merge the keys into values.yaml. In this case, the contents of Values will be as follows:
...
app:
  envs:
    MYSQL_HOST:
      _default: mysql
      local: mysql2
      production: mysql3
    MYSQL_PASSWORD:
      local: Qwerty123
      _default: Qwerty456
      production: NoQwerty321
...
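To check what exactly werf will merge, you can decrypt the secret values file locally (WERF_SECRET_KEY must be set):
werf helm secret values decrypt .helm/secret-values.yaml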
Creating a local development environment
Here are the guidelines for organizing a Git repository with werf:
- You only need to store WERF_SECRET_KEY in the project environment variables. All other secret variables can be put into secret-values.yaml and encrypted with werf;
- Define the name of the environment in which the project instance is to be run. We will use an environment called local (with the same namespace in Kubernetes);
- All variables used in the charts/subcharts for the local environment must be put into values.yaml in plain text (no encryption). This way, we can avoid giving access to WERF_SECRET_KEY to unauthorized developers, which prevents secret-values from being decrypted and compromised.
Let’s take a look at running the example application we’ve prepared for deployment following this article.
Clone the repository and go to the project directory:
git clone https://github.com/flant/examples
cd examples/2022/01-werf-local-dev
Now, we’ll prepare the environment.
First and foremost, you need to allow Docker to pull images over HTTP. Note that the command below overwrites /etc/docker/daemon.json; if you already have one, add the insecure-registries key to it manually:
echo \
'{
"insecure-registries": ["registry.local.dev"]
}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
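You can verify that the daemon picked up the setting; the output should list registry.local.dev:
docker info | grep -A1 'Insecure Registries'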
Start minikube in the Docker container and allow it to connect to the registry over HTTP:
minikube start --driver=docker --insecure-registry="registry.local.dev"
macOS note: on macOS, configure the same insecure-registries setting in the Docker Desktop daemon preferences. The command to start minikube on macOS will look like this:
minikube start --driver='hyperkit' --insecure-registry="registry.local.dev"
Enable the Ingress addon:
minikube addons enable ingress
Enable the registry addon:
minikube addons enable registry
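The addon exposes the registry as a Service in the kube-system namespace; it is worth confirming it is up before wiring an Ingress to it:
kubectl -n kube-system get svc registry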
Add the IP address of the Ingress to /etc/hosts inside the minikube container:
minikube ssh -- "echo $(minikube ip) registry.local.dev | sudo tee -a /etc/hosts"
Add the Ingress IP addresses of the registry and the app to /etc/hosts on the local machine:
echo "$(minikube ip) registry.local.dev" | sudo tee -a /etc/hosts
echo "$(minikube ip) test.application.local" | sudo tee -a /etc/hosts
Create an Ingress for the local registry by piping a manifest to kubectl. A minimal manifest sketch, assuming the registry addon’s default Service (registry in the kube-system namespace), is shown below; the proxy-body-size annotation lifts NGINX’s upload limit so that large image layers can be pushed:
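kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry
  namespace: kube-system
  annotations:
    # Allow request bodies of any size (image layers can be large).
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
  - host: registry.local.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: registry
            port:
              number: 80
EOF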
Allow werf to connect to the registry over HTTP by setting the WERF_INSECURE_REGISTRY environment variable:
export WERF_INSECURE_REGISTRY=1
Activate werf:
source $(trdl use werf 1.2 ea)
Now, build and deploy the infrastructure components:
werf converge --repo registry.local.dev/app --release infra --env local --namespace local --dev --set mysql.enabled=true --ignore-secret-key=true
Check the result:
kubectl -n local get pod
NAME READY STATUS RESTARTS AGE
mysql-0 1/1 Running 0 39s
Make sure that the database Pod is up and running.
Build and deploy the main application:
werf converge --repo registry.local.dev/app --release app --env local --namespace local --dev --set app.ci_url=test.application.local --set app.enabled=true --ignore-secret-key=true
Check the result:
kubectl -n local get pod
NAME READY STATUS RESTARTS AGE
app-c7958d64d-5tmqp 2/2 Running 0 66s
mysql-0 1/1 Running 0 4m53s
setup-and-migrate-db-rev1--1-xtdsc 0/1 Completed 0 66s
The migration Pod has done its job, as evidenced by the Completed status. The Pod with the main application is up and running.
Now, check if the application is available on the host that we specified above:
kubectl -n local get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
app test.application.local localhost 80 1m
Open your browser and go to http://test.application.local. You should see the application start page.
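You can also check it from the terminal (the exact response body depends on the application):
curl -i http://test.application.local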
In werf, there are two modes for dealing with code changes:
- In dev mode, you can loosen Giterminism restrictions and work with non-committed changes. You can enable this mode using the --dev option or the WERF_DEV environment variable.
- In follow mode, werf automatically re-runs the command in response to Git state changes. You can enable it using the --follow parameter or the WERF_FOLLOW environment variable. The command is re-run when new commits appear; if combined with --dev mode, it is re-run on any change.
Thus, you can use the following command to build and deploy the main application in follow mode:
werf converge --repo registry.local.dev/app --release app --env local --namespace local --dev --follow --set app.ci_url=test.application.local --set app.enabled=true --ignore-secret-key=true
In this case, werf will be running in the terminal and watching for repository changes. Try making some changes to the code and see how werf responds to them. Your application will be automatically redeployed in Kubernetes when you commit changes to your Git repo.
Takeaways
This is how you get a local development environment with your application running in Kubernetes. This application (and the K8s infrastructure it needs to run) is fully defined in a Git repo. When you commit changes to this repo, the app and its infrastructure are automatically updated in Kubernetes thanks to werf’s capabilities.
As you can see, our quest for an optimal balance of cost and efficiency has led to an exciting experience. This topic proved to be quite challenging: you need to have a basic knowledge of Kubernetes and Helm charting to take full advantage of this practice. I hope you enjoyed reading this article as much as I enjoyed writing it.