SSH Keep Alive

Often my SSH session gets dropped because I have been idle for a while. Sometimes I start "top" just so the connection is not closed due to inactivity, but that is not something I want to do every time. Luckily the SSH client can be configured to send keepalive messages for every session, so you do not need to pass extra arguments every time you open an SSH connection.

The following settings in ~/.ssh/config make the SSH client send a keepalive message to the server every 60 seconds, and give up if it receives no response after 2 attempts:

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 2
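The keepalive values can also be tuned per host in the same file; the host alias, host name, and values below are only an example:

```
Host flaky-server
    HostName flaky-server.example.com
    ServerAliveInterval 30
    ServerAliveCountMax 5
```

A more specific Host block takes precedence over the Host * defaults for matching connections.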

Nginx & Certbot (Letsencrypt) via Docker…

Initially you have to initialize Certbot and obtain the certificate manually.

# Directories used:
/var/www/certbot # webroot for Certbot's challenge files
/etc/letsencrypt # certificates are stored here

# Initialize Certbot:
docker run --rm -ti \
  -v /var/www:/var/www \
  -v /etc/letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly --webroot -w /var/www/certbot -d <your-domain-name> --email <your-email>

The letsencrypt and www directories must be mounted into both containers. Certbot will check the certificates every 12 hours, and nginx must reload its configuration periodically to pick up renewed certificates.

  nginx:
    image: nginx:1.17.8
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/www:/var/www
      - /etc/nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt:/etc/letsencrypt
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - /var/www:/var/www
      - /etc/letsencrypt:/etc/letsencrypt
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"
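One way to check that renewal will work before waiting for the 12-hour loop is Certbot's dry-run mode, which talks to the Let's Encrypt staging environment; the mounts below are taken from the initialization command above:

```shell
# simulate a renewal without saving any certificates
docker run --rm \
  -v /var/www:/var/www \
  -v /etc/letsencrypt:/etc/letsencrypt \
  certbot/certbot renew --webroot -w /var/www/certbot --dry-run
```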

Nginx must be configured to serve Certbot's .well-known challenge files for the validation, and your sites must be configured to use the certificates from Let's Encrypt.

    server {
        listen 80;
        server_name <your-domain-name>;
        server_tokens off;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;

        ssl_certificate     /etc/letsencrypt/live/<your-domain-name>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<your-domain-name>/privkey.pem;

        root /var/www;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }
    }
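To validate the configuration before reloading, nginx's built-in syntax check can be run in a throwaway container; the mount paths are taken from the compose file above:

```shell
# parse the configuration and report errors without starting the server
docker run --rm \
  -v /etc/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  nginx:1.17.8 nginx -t
```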

WinCC OA on Docker, Dockerfiles and Howto’s…

This repository on Github contains Dockerfiles and samples to build Docker images for WinCC OA products.

Build Docker Image

Download and unzip the CentOS WinCC OA RPMs to the centos/software directory.

Only put those WinCC OA RPMs into the directory which you want to have installed in your image. For a minimal image you only need the base package of WinCC OA.


Build your WinCC OA Docker image with:

docker build -t winccoa:3.16 .

WinCC OA Project in a Container

The project should be mounted as a volume at /proj/start in your Docker container.

You may also mount a shield file into your Docker container.

Example of how to start up a WinCC OA project in a container:

docker run -d \
  --name winccoa \
  --hostname winccoa-server \
  -v ~/shield.txt:/opt/WinCC_OA/3.16/shield.txt \
  -v /proj/DemoApplication_3.16:/proj/start \
  -p 5678:5678 \
  winccoa:3.16

WinCC OA Gedi in a Container

To start a WinCC OA client application like a Gedi or a user interface, you have to adapt your config file so that the proxy settings point to the WinCC OA server container. You can simply create a copy of your config file (e.g. config.ui) and adapt the settings.

data = "winccoa-server" 
event = "winccoa-server" 
mxProxy = "winccoa-server <your-docker-host-name>:5678 cert" 

Then you can start up a Gedi/Ui with:

docker run --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /proj/DemoApplication_3.16:/proj/default \
  -v /proj/DemoApplication_3.16/config/config.ui:/proj/default/config/config \
  winccoa:3.16 \
  WCCOAui -autoreg -m gedi -proj default

Of course you can also use a copy of your project directory (or a Git checkout if you use Git) and adapt the config file.

Start Project Administration as Container

With the Project Administration you can create a new project in the /proj directory.

docker run -ti --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /proj:/proj \
  winccoa:3.16 \
  WCCOAui -projAdmin

Distributed Managers and Kubernetes

Of course, what we have done with the Gedi can also be done with Control managers and drivers. In theory this also works on Kubernetes, so you can run your SCADA project in a Kubernetes cluster.

Use GraphQL in WinCC OA …

This is a simple example of how to query a GraphQL server from WinCC OA Ctrl via HTTP.

  string url = "";

  string query = "query($tag: String!){getTag(name: $tag){tag{current{value}}}}";

  mapping variables = makeMapping("tag", "Input");

  mapping content = makeMapping("query", query, "variables", variables);

  mapping data = makeMapping(
      "headers", makeMapping("Content-Type", "application/json"),
      "content", jsonEncode(content)
  );

  mapping result;

  netPost(url, data, result);

  if (result["httpStatusText"] == "OK") {
    return result["content"];
  } else {
    return "Error";
  }

{
  "data": {
    "getTag": {
      "tag": {
        "current": {
          "value": 280.87696028711866
        }
      }
    }
  }
}
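For testing the same query outside of WinCC OA you can use curl; the endpoint URL is a placeholder, since the original omits it:

```shell
# POST the same GraphQL query and variables as the Ctrl example
curl -s -X POST http://<graphql-server>/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"query($tag: String!){getTag(name: $tag){tag{current{value}}}}","variables":{"tag":"Input"}}'
```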

Grafana behind Nginx Reverse Proxy…

The subdomain is redirected by my provider to my IP at home, where Nginx (with a Let's Encrypt certificate) is running; it forwards /grafana to my Grafana Docker instance. Access to Grafana is then possible via the /grafana path on that subdomain.

Nginx Configuration (sites-enabled/default)

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;

        index index.html index.htm index.nginx-debian.html;

        location / {
                try_files $uri $uri/ =404;
        }

        location /grafana/ {
                proxy_pass http://docker1:3000/;
        }
}

Grafana Configuration (/etc/grafana/grafana.ini)

# Protocol (http or https)
protocol = http

# The http port to use
http_port = 3000

# The public facing domain name used to access grafana from a browser
domain =

# Root URL (NOTE: there is no port in the URL)
root_url = %(protocol)s://%(domain)s/grafana

enforce_domain = false

Backup and Restore PostgreSQL Container Database …

Backup a Database

#!/bin/bash
cn=postgres        # container name
db=<your-database> # database name
docker exec -t $cn pg_dump -c -U postgres $db > dump_`date +%d-%m-%Y"-"%H%M`.sql
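The backticked date call produces a timestamped dump file name; a quick sketch of what it expands to:

```shell
# preview the dump file name (day-month-year-hourminute)
fname="dump_`date +%d-%m-%Y"-"%H%M`.sql"
echo "$fname"
```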

Restore a Database

#!/bin/bash
if [ ! -f "$1" ]; then
  echo "File does not exist."
else
  cn=postgres        # container name
  db=<your-database> # database name
  echo "Restore to $db..."
  cat $1 | docker exec -i $cn psql -U postgres -d $db
fi

Drop a Database

-- First kill connected sessions
SELECT pg_terminate_backend(pg_stat_activity.pid)
  FROM pg_stat_activity
 WHERE pg_stat_activity.datname = 'mydb';

-- Drop your database
DROP DATABASE mydb;
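Assuming the same postgres container as in the scripts above, both statements can be executed through psql in the container (connect to a different database than the one being dropped):

```shell
docker exec -i postgres psql -U postgres -d postgres \
  -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'mydb';" \
  -c "DROP DATABASE mydb;"
```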

Install Kubernetes-Cluster

Install Servers

I have used Ubuntu Server 18.04 LTS.

Install one master node and three worker nodes.

! Don’t clone the Ubuntu VMs. I had trouble with networking in Kubernetes when I used cloned VMs, even though the MAC addresses of the interfaces were different.

! Each node needs internet access, because the nodes will pull the Docker images.

! You may also set up your master node as a Docker registry, so that the worker nodes can pull self-made images from the master.

Setup Network

We use 10.0.15.x as the cluster network on a host-only VM network, so we set two IPs: 192.168.163.x is the VM network, which makes the VMs accessible from your host (where the VMs run), and 10.0.15.x is the internal cluster network. Additionally there is a second interface with DHCP enabled; this interface should get an address in your public network with internet access.

vi /etc/netplan/50-cloud-init.yaml

network:
    version: 2
    ethernets:
        <public-interface>:
            dhcp4: true
            optional: true
        <cluster-interface>:
            dhcp4: false
            addresses: [,]
> netplan apply
> vi /etc/hosts   # add entries: <ip> master, <ip> worker01, <ip> worker02, <ip> worker03
> hostnamectl set-hostname master
> reboot

Install Docker

> apt install -y  

Systemd must be used as the cgroup driver for Docker:

> cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "insecure-registries" : ["master:5000"]
}
EOF
> mkdir -p /etc/systemd/system/docker.service.d
> systemctl daemon-reload
> systemctl restart docker

Install Docker Registry on Master

> docker run -d -p 5000:5000 --restart=always --name registry registry:2

Add your registry server as an insecure registry to /etc/docker/daemon.json, if you have not already done so in one of the previous steps:

  "insecure-registries" : ["master:5000"]   

push an image to your Registry server:

> docker tag <image> master:5000/<image>   # tag your image
> docker push master:5000/<image>  # push your image
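To verify that the push worked, the registry can be queried over HTTP; the _catalog endpoint is part of the Docker Registry v2 API and lists all stored repositories:

```shell
curl http://master:5000/v2/_catalog
```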

Install Kubernetes

Disable Swap

> swapon -s  # list active swap devices
> swapoff -a
> vim /etc/fstab  # comment out line with swap device
> reboot
> apt install -y apt-transport-https
> curl -s | apt-key add -
> echo "deb kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
> apt update
> apt install -y kubeadm kubelet kubectl

Master Node Cluster Initialization

> kubeadm config images pull
> kubeadm init --pod-network-cidr= --apiserver-advertise-address=  

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster (use a regular user)

> kubectl apply -f


> kubectl apply -f "$(kubectl version | base64 | tr -d '\n')"

Check the state

> kubectl get nodes  
> kubectl get pods --all-namespaces

Join Worker Node(s)

Then you can join any number of worker nodes by running the following as root on each worker node (copy this from the output of kubeadm init):

> kubeadm join --token w8vr52.wtful961u754ev8b \
    --discovery-token-ca-cert-hash sha256:b07d512632b0117bfe81716b57d0c00b64cabd8222c5ffae04f447291a7c16f8

Check if the nodes have joined:

> kubectl get nodes

Use Local Docker Registry

Start Local Registry

docker run -d -p 5000:5000 --restart=always --name registry -v /data/registry:/var/lib/registry registry:2

cat /etc/docker/daemon.json
{
  "insecure-registries" : ["myhost:5000"]
}

docker tag <image> myhost:5000/<image>
docker push myhost:5000/<image>

Load Image via SSH

cat xxx.img | ssh root@worker02 "docker load"
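The image file used above can be created beforehand with docker save; the image name is a placeholder:

```shell
# export an image to a tar archive
docker save <image> > xxx.img

# or compress it in transit and load it on the worker in one step
docker save <image> | gzip | ssh root@worker02 "gunzip | docker load"
```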

Get Only Image Names

docker images --format "{{.Repository}}"

Setup MicroK8S


Add the following lines to /etc/docker/daemon.json:

{
    "insecure-registries" : ["localhost:32000"]
}
> snap install microk8s --classic --channel=1.17/stable  

> microk8s.status  

> microk8s.stop  # Stops all MicroK8s services  
> microk8s.start  # Starts MicroK8s after it is being stopped  


> alias kubectl='microk8s.kubectl'  
> alias k='microk8s.kubectl'  


> k get all  
> k get all --all-namespaces

> k cluster-info

> k get pods -o wide


> microk8s.enable dns dashboard  

Get token for dashboard
> token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)  
> microk8s.kubectl -n kube-system describe secret $token  


> microk8s.kubectl cluster-info # url for grafana  
> microk8s.config # username and password  

Enable Helm

> microk8s.enable helm  
> microk8s.helm init  

Local Images

microk8s.ctr image import myimage.tar
microk8s.ctr images ls
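Alternatively, assuming the MicroK8s registry addon is enabled (microk8s.enable registry), local images can be pushed to the insecure registry on localhost:32000 that was configured in daemon.json above; the image name is a placeholder:

```shell
docker tag <image> localhost:32000/<image>
docker push localhost:32000/<image>
```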

Kibana and PgAdmin4 with NGINX Reverse Proxy on Docker…

If you have multiple services running on Docker with different ports, you have to open those ports in your firewall and access each service via a different port in the browser. To have a single access port (80 or 443) you can use a reverse proxy.

In our case we used NGINX to redirect the access to Kibana (Elasticsearch Dashboard Tool) and PgAdmin4 (PostgreSQL Admin Tool) so that we can access both services on the same port (80) in the browser with different base paths: http://localhost/kibana and http://localhost/pgadmin.


version: '3.0'
services:
  elasticsearch:
    hostname: elasticsearch
    image: elasticsearch:7.5.0
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  kibana:
    hostname: kibana
    image: kibana:7.5.0
    depends_on:
      - elasticsearch
    environment:
      - LOGGING_QUIET=true
      - SERVER_BASEPATH=/kibana

  postgres:
    hostname: postgres
    image: postgres:12.1
    ports:
      - 5432:5432
    volumes:
      - postgresdb:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=manager

  pgadmin:
    hostname: pgadmin
    image: dpage/pgadmin4
    volumes:
      - pgadmin:/var/lib/pgadmin
    environment:
      - PGADMIN_DEFAULT_EMAIL=postgres

  proxy:
    hostname: proxy
    image: nginx:1.17.8
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - kibana
      - pgadmin

volumes:
  esdata:
  postgresdb:
  pgadmin:



user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /dev/null;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server {
        listen 80;

        root /var/www;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }

        location /pgadmin {
            proxy_pass http://pgadmin/;
            proxy_http_version 1.1;
            proxy_set_header X-Script-Name /pgadmin;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /kibana {
            proxy_pass http://kibana:5601/kibana;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
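Once the stack is up (docker-compose up -d), both base paths can be checked from the host; the status codes will vary while the services are still starting:

```shell
# print only the HTTP status code for each proxied base path
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/kibana/
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/pgadmin/
```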