Linux: in /etc/exports add the line

/home/vogler *(rw,no_subtree_check,insecure)

and activate it with exportfs -a. In this example we do it very insecurely...

Windows: mount -o anon <hostname>:/home/vogler z:
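For anything beyond a quick test you should restrict the export. A minimal sketch, assuming the clients live in a 192.168.1.0/24 network (adjust the subnet and options to your environment):

# /etc/exports - restrict the export to one subnet instead of *
/home/vogler 192.168.1.0/24(rw,no_subtree_check)

# re-export and check what is offered
exportfs -ra
showmount -e localhost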
A lot of times my SSH session gets broken because I did not do anything for a while. Sometimes I have started "top" just so that the connection does not get dropped because of inactivity, but that is not really what I want to do every time. Luckily the SSH client can be configured to send keep-alive messages for every session, so you do not need to pass extra arguments every time you open an SSH connection.
The following settings make the SSH client send a keep-alive message to the other side every 60 seconds and give up if it does not receive any response after 2 tries.
~/.ssh/config

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 2
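The same can also be done for a single connection without touching the config file, by passing the options directly (user@hostname is just a placeholder):

ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=2 user@hostname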
Initially you have to initialize Certbot and obtain the certificate manually.
# Directories used:
/var/www
/var/www/certbot # handshake sites from certbot
/etc/letsencrypt # certificates are stored here
# Initialize Certbot:
docker run --rm -ti \
-v /var/www:/var/www \
-v /etc/letsencrypt:/etc/letsencrypt \
certbot/certbot certonly --webroot -w /var/www/certbot -d <your-domain-name> --email your.email@something.com
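Before relying on the automatic renewal it is worth simulating one run. A sketch with the same mounts as above; --dry-run talks to the Let's Encrypt staging servers and does not touch the real certificates:

docker run --rm -ti \
-v /var/www:/var/www \
-v /etc/letsencrypt:/etc/letsencrypt \
certbot/certbot renew --webroot -w /var/www/certbot --dry-run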
The letsencrypt and www directories must be mounted in both containers. Certbot checks the certificates every 12 hours, and nginx must reload its configuration periodically so that renewed certificates are picked up.
nginx:
  image: nginx:1.17.8
  ports:
    - 80:80
    - 443:443
  volumes:
    - /var/www:/var/www
    - /etc/nginx.conf:/etc/nginx/nginx.conf
    - /etc/letsencrypt:/etc/letsencrypt
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
certbot:
  image: certbot/certbot
  restart: unless-stopped
  volumes:
    - /var/www:/var/www
    - /etc/letsencrypt:/etc/letsencrypt
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"
Nginx must be configured to serve Certbot's .well-known challenge files for the handshake, and your sites must be configured to use the certificates from Letsencrypt.
server {
    listen 80;
    server_name <your-domain-name>;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name <your-domain-name>;

    ssl_certificate /etc/letsencrypt/live/<your-domain-name>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your-domain-name>/privkey.pem;

    root /var/www;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
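Once the first certificate is in place, a quick way to check its validity dates is openssl, using the path from the configuration above:

openssl x509 -noout -dates -in /etc/letsencrypt/live/<your-domain-name>/fullchain.pem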
If you have multiple services running on Docker with different ports, you have to open each port in your firewall and access each service via its own port in the browser. To have a single access port (80 or 443) you can use a reverse proxy.
In our case we used NGINX to proxy the access to Kibana (Elasticsearch dashboard tool) and PgAdmin4 (PostgreSQL admin tool), so that we can reach both services on the same port (80) in the browser under different base paths: http://localhost/kibana and http://localhost/pgadmin.
version: '3.0'

services:

  elasticsearch:
    hostname: elasticsearch
    image: elasticsearch:7.5.0
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  kibana:
    hostname: kibana
    image: kibana:7.5.0
    depends_on:
      - elasticsearch
    environment:
      - XPACK_MONITORING_ENABLED=false
      - LOGGING_QUIET=true
      - SERVER_BASEPATH=/kibana
      - SERVER_REWRITEBASEPATH=true

  postgres:
    hostname: postgres
    image: postgres:12.1
    ports:
      - 5432:5432
    volumes:
      - postgresdb:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=manager

  pgadmin:
    hostname: pgadmin
    image: dpage/pgadmin4
    volumes:
      - pgadmin:/var/lib/pgadmin
    environment:
      - PGADMIN_DEFAULT_EMAIL=postgres
      - PGADMIN_DEFAULT_PASSWORD=manager
      - GUNICORN_ACCESS_LOGFILE=/dev/null

  proxy:
    hostname: proxy
    image: nginx:1.17.8
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - kibana
      - pgadmin

volumes:
  esdata:
  postgresdb:
  pgadmin:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/null;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;

        root /var/www;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }

        location /pgadmin {
            proxy_pass http://pgadmin/;
            proxy_http_version 1.1;
            proxy_set_header X-Script-Name /pgadmin;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location /kibana {
            proxy_pass http://kibana:5601/kibana;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
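A quick sanity check that the proxy routes both base paths (just a sketch; -I fetches only the response headers, and the services may answer with a redirect while they are still starting up):

curl -I http://localhost/pgadmin
curl -I http://localhost/kibana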
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

setup.ilm:
  enabled: false

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'
version: '3.0'

services:

  elasticsearch:
    hostname: elasticsearch
    image: elasticsearch:7.5.0
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  kibana:
    hostname: kibana
    image: kibana:7.5.0
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      - XPACK_MONITORING_ENABLED=false
      - LOGGING_QUIET=true

  filebeat:
    user: root
    hostname: filebeat
    image: docker.elastic.co/beats/filebeat:7.5.1
    command: filebeat -e -strict.perms=false
    volumes:
      - ./filebeat.yaml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    depends_on:
      - elasticsearch

volumes:
  esdata:
docker-compose up -d
Then you can access the logs via Kibana in the browser: http://localhost:5601/
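To check that Filebeat actually ships data, you can look for its indices in Elasticsearch (a sketch; the exact index names depend on the Filebeat version):

curl -s "http://localhost:9200/_cat/indices?v" | grep filebeat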
When watching log files (tail -f) I sometimes do not want lines to wrap just because a log line is too long. The following command disables line wrapping in a gnome-terminal:
> tput rmam
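To turn the automatic line wrapping back on again:

> tput smam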
apt-get install build-essential
apt-get install linux-headers-$(uname -r)
afterwards install the VMware Tools…
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install epel-release-6-8.noarch.rpm
yum install mono-core
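Afterwards you can verify the installation:

> mono --version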
When you get the following message while building the VMware Tools:
Searching for a valid kernel header path…
The path “” is not a valid path to the 3.8.13-16.3.1.el6uek.x86_64 kernel
headers.
Would you like to change it? [yes]
you first have to install the kernel-devel and kernel-headers packages:
> yum install kernel-uek-devel kernel-uek-headers
but there is still a version file missing… a symlink can help:
ln -s /usr/src/kernels/3.8.13-16.3.1.el6uek.x86_64/include/generated/uapi/linux/version.h /usr/src/kernels/3.8.13-16.3.1.el6uek.x86_64/include/linux/version.h
I wanted to share a directory (e.g. /share) but it did not work; only the user's home directory worked. The reason was AppArmor (you find it in the YaST Control Center): disable it or configure it properly, then it works.
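A quick way to see whether AppArmor is the culprit is to check its status and look for denials in the kernel log (assuming the AppArmor utilities are installed):

> aa-status
> dmesg | grep -i apparmor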