11 September 2020
Andrey Klimentyev, solution engineer

Go? Bash! Meet the shell-operator

In this article, we will present our approach to simplifying the process of making Kubernetes operators and show how you can easily implement your own operator using shell-operator. This text is based on our recent presentation during KubeCon Europe 2020.

Here is the full video of this talk:

… as well as its slides. However, if you prefer a shorter text summary — please enjoy below!

We at Flant love to improve and automate everything. Today, we are going to talk about one intriguing and exciting concept. Please welcome: cloud-native shell scripting!

But let us start from the environment where all this craziness might happen — Kubernetes.

Kubernetes API and controllers

You can think of the Kubernetes API as a file server containing folders for each kind of object. These objects (resources) are represented by YAML files on that server. The server has a basic HTTP API that allows us to do three things with these objects. We can:

  • get a resource by its kind and name;
  • change the resource (note that the server stores valid objects only — it discards/ignores invalid ones and those meant to be placed in other “directories”);
  • watch the resource (in this case, the user instantly gets the current/updated version of the resource).

In other words, you can think of Kubernetes as basically a YAML file server that has three generic methods (yes, there are others, but we will skip them for now).
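
For example, for a Secret named mysecret in the default namespace, these three methods map onto plain HTTP requests like:

```
GET /api/v1/namespaces/default/secrets/mysecret               # get by kind and name
PUT /api/v1/namespaces/default/secrets/mysecret               # change
GET /api/v1/namespaces/default/secrets/mysecret?watch=true    # watch
```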

However, the server itself can only store information. To put it to work, we need a controller — the second most important and fundamental thing in Kubernetes.

Generally, there are two types of controllers. The first type reads information from Kubernetes, processes it using some logic, and then writes it back to Kubernetes. The second type also reads data from Kubernetes, but, unlike the first type, it changes the state of some external resources.

Let’s take a look at what happens when a user creates a Kubernetes deployment:

  • The Deployment Controller (a part of the kube-controller-manager) gets this information and creates a ReplicaSet.
  • The ReplicaSet Controller then uses this information to create two replicas (pods), but these pods are not yet scheduled.
  • The scheduler schedules the pods and updates their YAMLs with node information.
  • Kubelets update the data in the external resource (say, Docker).

Then the whole sequence is repeated in reverse order: the kubelet checks the containers, calculates the status of the pod, and sends it back. The ReplicaSet Controller receives it and updates the status of the ReplicaSet. The same happens at the Deployment Controller level, and the user finally gets the current (updated) status.


It turns out that Kubernetes is all about controllers operating together (and yes, Kubernetes operators are also controllers). “Okay,” you might say, “but as a sysadmin, how can I create a controller effortlessly?” To answer that question, we have introduced a tool — shell-operator — that allows system administrators to make operators using the methods they are used to.

A simple example: Copying Secrets

Let’s take a look at an example…

Suppose we have a Kubernetes cluster. There is a default namespace in it containing some Secret (mysecret). Also, there are other namespaces in the cluster. Several of these namespaces have a specific label attached to them. Our goal is to copy the Secret to the namespaces that have this label attached.

The task is complicated by the fact that new namespaces can emerge in the cluster, and some of them might have this label. On the other hand, if the label is removed, the Secret must be removed as well. The Secret itself can also change: in this case, the new Secret must be propagated to all labeled namespaces. If the Secret is deleted in some namespace by accident, the operator must immediately restore it.

Now that we have formulated the task, it is time to implement it using our shell-operator. But first, we would like to say a few words about what shell-operator is.

How it works

Similarly to other Kubernetes workloads, shell-operator is deployed in a pod. There is a /hooks subdirectory in the pod in which executable files are stored. They can be written in Bash, Python, Ruby, etc. We call these executable files hooks.

Shell-operator subscribes to Kubernetes events and executes these hooks in response to events we are interested in.

But how does shell-operator know when and what hook to execute? Well, it turns out each hook has two phases. During the start, shell-operator runs each hook with a --config argument. Once the configuration phase is over, hooks are executed the “normal” way: in response to events they are attached to. In this case, the hook gets the binding context (the JSON-formatted data; more on that below).
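
To make the two phases concrete, here is a minimal self-contained hook sketch (it does not use shell_lib; onStartup is one of the simplest bindings, and shell-operator passes the binding context file path in the BINDING_CONTEXT_PATH environment variable):

```shell
#!/bin/bash
# Minimal hook sketch: the configuration phase answers to --config,
# everything else is the "normal" run phase.
hook() {
  if [ "$1" = "--config" ]; then
    # Configuration phase: print the binding configuration.
    cat <<EOF
configVersion: v1
onStartup: 10
EOF
  else
    # Run phase: shell-operator provides the binding context as JSON
    # in the file referenced by $BINDING_CONTEXT_PATH.
    echo "triggered; binding context file: ${BINDING_CONTEXT_PATH:-<unset>}"
  fi
}

hook --config
```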

How we implement it using Bash

Now, if we use Bash, we need to implement two functions (by the way, we highly recommend the shell_lib library, as it considerably simplifies writing hooks in Bash):

  • the first one is intended for the configuring phase and should output the binding context;
  • the second one contains the core logic of the hook.

#!/usr/bin/env bash
source /shell_lib.sh

function __config__() {
  cat << EOF
    configVersion: v1
EOF
}

function __main__() {
  :  # the core logic of the hook goes here
}

hook::run "$@"

The next step is to decide what objects we are interested in. In our case, we need to track:

  • “source” Secret for changes;
  • all namespaces in the cluster to see which ones have the label;
  • “destination” Secrets to verify if they are synced to the source Secret.

Subscribing to the source Secret

The binding configuration for it is pretty straightforward. We specify that we are interested in mysecret Secrets in the default namespace.

function __config__() {
  cat << EOF
    configVersion: v1
    kubernetes:
    - name: src_secret
      apiVersion: v1
      kind: Secret
      nameSelector:
        matchNames:
        - mysecret
      namespace:
        nameSelector:
          matchNames: ["default"]
      group: main
EOF
}

As a result, the hook will be executed in response to changes in the source Secret (src_secret). It would get the following binding context:
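
Roughly, it looks like this (field values are illustrative; the object field carries the full Secret):

```json
[
  {
    "binding": "src_secret",
    "type": "Event",
    "watchEvent": "Modified",
    "object": {
      "apiVersion": "v1",
      "kind": "Secret",
      "metadata": { "name": "mysecret", "namespace": "default" },
      "data": { "key": "czNjcjN0" }
    }
  }
]
```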

As you can see, this binding context has its name and the full object.

Processing namespaces

Now we have to subscribe to namespaces. Here is the needed binding configuration:

- name: namespaces
  apiVersion: v1
  kind: Namespace
  jqFilter: |
    {
      hasLabel: (
        .metadata.labels // {} |
          contains({"secret": "yes"})
      )
    }
  group: main
  keepFullObjectsInMemory: false

As you can see, there is a new field in the configuration called jqFilter. As its name suggests, the jqFilter filters out all the unnecessary information and delivers a new JSON object containing fields that are of interest to us. The hook configured in such a way would receive the following binding context:
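
Schematically (values are illustrative), it looks like:

```json
[
  {
    "binding": "namespaces",
    "type": "Synchronization",
    "objects": [
      { "filterResult": { "hasLabel": true } },
      { "filterResult": { "hasLabel": false } }
    ]
  }
]
```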

It consists of an array of filterResults, one per namespace in the cluster. The boolean hasLabel shows whether the related namespace has the secret: "yes" label. The keepFullObjectsInMemory: false setting tells shell-operator not to keep the full objects in memory, since the filter results are all we need.

Tracking destination Secrets

We subscribe to all Secrets that have the managed-secret: "yes" label attached (these are our dst_secrets):

- name: dst_secrets
  apiVersion: v1
  kind: Secret
  labelSelector:
    matchLabels:
      managed-secret: "yes"
  jqFilter: |
    {
      namespace: .metadata.namespace,
      resourceVersion: .metadata.annotations.resourceVersion
    }
  group: main
  keepFullObjectsInMemory: false

In this case, jqFilter filters out all information except for the namespace name and the resourceVersion parameter. We passed this parameter to an annotation when we created this destination Secret. It allows us to compare Secrets (and keep them up-to-date).

The hook configured in such a way would get three binding contexts described above when executed. You can think of them as some kind of snapshot of the cluster.

We can devise a basic algorithm using all this information. It iterates over all namespaces, and if hasLabel is true for the current namespace, it:

  • compares the source and destination Secrets:
      • if they are the same — does nothing;
      • if they are different — does kubectl replace or create.

If hasLabel is false for the current namespace, it:

  • makes sure that no Secret is present in the namespace:
      • if the destination Secret exists — does kubectl delete;
      • if the destination Secret does not exist — does nothing.
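
The branching above can be sketched in pure Bash (the decide_action helper is hypothetical and only prints the chosen action; the real hook shells out to kubectl instead):

```shell
#!/bin/bash
# Hypothetical helper: given the namespace state, print which action
# the hook should take. A real hook would call kubectl instead of echo.
decide_action() {
  local has_label="$1"   # does the namespace carry the label?
  local dst_exists="$2"  # is a destination Secret present?
  local in_sync="$3"     # does the destination match the source Secret?
  if [ "$has_label" = "true" ]; then
    if [ "$dst_exists" = "true" ] && [ "$in_sync" = "true" ]; then
      echo "noop"
    else
      echo "replace-or-create"   # kubectl replace / kubectl create
    fi
  else
    if [ "$dst_exists" = "true" ]; then
      echo "delete"              # kubectl delete
    else
      echo "noop"
    fi
  fi
}

decide_action true  true  true    # prints "noop"
decide_action true  true  false   # prints "replace-or-create"
decide_action false true  true    # prints "delete"
```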

The full Bash implementation of the above algorithm is available here, in our examples repository.

A simple Kubernetes controller is made in 35 lines of YAML and the same amount of Bash! And the shell-operator’s job is to bind them all together.

Obviously, copying Secrets is not the only thing you can do with shell-operator. We’re going to show a few more examples to see how useful it can be in your routines.

Example 1: Updating ConfigMap

Let us consider a Deployment with three pods. These pods use a ConfigMap to store some configuration. When these pods were starting, the ConfigMap was in some state (we will call it Version 1, v.1). Thus, all our pods have the same v.1 version of the ConfigMap.

Now let’s suppose that the ConfigMap changes to another version (v.2). In this case, our pods would still be using the previous, v.1, version of the ConfigMap.

What do we usually do in such cases? Yes, we add something to our pods’ template. So let’s add the checksum annotation to the template section of the Deployment definition:
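
For instance (the checksum/config annotation key is a common convention, not something mandated by Kubernetes):

```yaml
spec:
  template:
    metadata:
      annotations:
        checksum/config: "4f2a…"   # checksum of the current ConfigMap contents
```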

Now, all our pods carry this checksum annotation, and its value matches the one in the Deployment template. Next, we should update the annotation in response to ConfigMap changes. And that is when shell-operator might come in handy. All we need is to program a hook that would subscribe to the ConfigMap and update the checksum.
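
A sketch of what such a hook could do (the file name is hypothetical and stands in for the ConfigMap contents the real hook would read from the binding context; the kubectl call is shown as a comment because it needs a live cluster):

```shell
#!/bin/bash
# Write some stand-in ConfigMap contents to a temporary file.
printf 'loglevel: debug\nworkers: 4\n' > /tmp/app-config.yaml

# Compute a checksum of those contents.
checksum="$(sha256sum /tmp/app-config.yaml | cut -d' ' -f1)"
echo "checksum/config: $checksum"

# A real hook would then patch the Deployment, e.g.:
# kubectl patch deployment myapp --type merge \
#   -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$checksum\"}}}}}"
```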

When a user modifies the ConfigMap, shell-operator notices the change and updates the checksum. And then the Kubernetes auto-magic happens: Kubernetes kills a pod, creates a new one, waits until it is ready, and proceeds to the next one. As a result, our Deployment ends up running fully in sync with the updated ConfigMap.

Example 2: Working with Custom Resource Definitions

As you know, Kubernetes allows us to create custom kinds of objects. For example, we can create a kind called MysqlDatabase. Let’s say, this kind has only two metadata parameters: name and namespace.

kind: MysqlDatabase
metadata:
  name: foo
  namespace: bar

So, we have a Kubernetes cluster with various namespaces in which we can create MySQL databases. In this case, shell-operator can be used to watch for resources of the MysqlDatabase kind, connect them to the MySQL database server, and synchronize the desired and the observed state.
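
The binding for that could look as follows (the example.com/v1 group/version is hypothetical — use whatever your CRD actually declares):

```yaml
configVersion: v1
kubernetes:
- name: mysql_databases
  apiVersion: example.com/v1   # hypothetical CRD group/version
  kind: MysqlDatabase
  group: main
```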

Example 3: Monitoring the cluster network

As you know, pinging is the easiest way to monitor the network. Here is how you can implement it using shell-operator.

First of all, we have to subscribe to nodes. Shell-operator needs a name and an IP address of each node to cycle through the list of nodes and ping every one of them.

configVersion: v1
kubernetes:
- name: nodes
  apiVersion: v1
  kind: Node
  jqFilter: |
    {
      name: .metadata.name,
      ip: (
        .status.addresses[] |
          select(.type == "InternalIP") |
            .address
      )
    }
  group: main
  keepFullObjectsInMemory: false
  executeHookOnEvent: []
schedule:
- name: every_minute
  group: main
  crontab: "* * * * *"

The executeHookOnEvent: [] parameter prevents the invocation of the hook in response to any event whatsoever (the hook will not be executed when the nodes are changed, added, or deleted). However, it will be run (and update the list of nodes) every minute as per the schedule field.

How do we identify problems like packet loss? Let’s take a look at the code:

function __main__() {
  for i in $(seq 0 "$(context::jq -r '(.snapshots.nodes | length) - 1')"); do
    node_name="$(context::jq -r '.snapshots.nodes['"$i"'].filterResult.name')"
    node_ip="$(context::jq -r '.snapshots.nodes['"$i"'].filterResult.ip')"
    packets_lost=0
    if ! ping -c 1 "$node_ip" -t 1 >/dev/null 2>&1; then
      packets_lost=1
    fi
    cat >> "$METRICS_PATH" <<END
      {
        "name": "node_packets_lost",
        "add": $packets_lost,
        "labels": {
          "node": "$node_name"
        }
      }
END
  done
}
We cycle through the list of nodes, get the node name and IP address, ping the node, and write the result to the Prometheus metrics endpoint. Shell-operator can export metrics to Prometheus by writing them to the file stored at the path specified in the $METRICS_PATH environment variable.

So, this is how you can implement basic network monitoring in the cluster with minimum coding.

Queuing mechanism

This article would be incomplete without discussing the queuing mechanism essential to shell-operator. Imagine that shell-operator executes a hook in response to some event in the cluster.

  • What would happen if another event occurs in the cluster?
  • Will shell-operator run another instance of the hook?
  • What if, say, five events take place in the cluster simultaneously?
  • Will shell-operator run all of them in parallel?
  • And what about resources consumed, such as memory and CPU?

Fortunately, shell-operator has a built-in queuing mechanism. All events are put into the queue and processed sequentially.

Suppose we have two hooks. The first event goes to the first hook. After the processing is complete, the queue advances. The next three events are intended for the second hook; they are popped out of the queue and passed to it as a batch. Thus, the hook receives the array of events — the array of binding contexts, to be more precise.

Another option is to combine these events into a larger event. The binding configuration’s group parameter is responsible for this.

Furthermore, you can have as many queues/hooks and their combinations as you like. For example, you can use one queue with two hooks, or vice versa.

All you have to do is insert the queue field into the binding configuration. If the queue name is omitted, the hook runs in the default queue. This queuing mechanism addresses the resource management concerns listed above.
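
For example (the queue name here is arbitrary):

```yaml
- name: namespaces
  apiVersion: v1
  kind: Namespace
  group: main
  queue: secrets   # omit this field to run the hook in the default queue
```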


In this article, we explained what shell-operator is, showed how to create Kubernetes operators with it quickly and effortlessly, and provided several thought-provoking examples of its use.

The detailed information about our tool, as well as a quick-start guide, is available in its GitHub repository. Feel free to contact us (@flant_com) or star our projects on GitHub!

By the way, take a look at our other projects — you might find them useful. For example, addon-operator is the older brother of shell-operator. It lets you bundle Helm charts with hooks, upgrade charts, monitor various chart parameters/values (as well as control the installation of Helm charts), and change them in response to cluster events.


This article was originally posted on Medium. New texts from our engineers are published here as well. Please follow our Twitter or subscribe below to get the latest updates!