Django with Docker: Build an Image

Bootstrap Django

mkdir docker-django && cd docker-django
docker run --rm -it  -v $PWD:/app -p 8000:8000 python:3.6 bash
# --rm         # Remove container after finishing bootstrapping
# -it          # Interactive Mode
# -v $PWD:/app # Mount current directory to /app
# -p 8000:8000 # Expose port 8000 as 8000
cd /app
pip install django
pip freeze > requirements.txt
django-admin startproject apps
sed -i.bak 's/^ALLOWED_HOSTS.*/ALLOWED_HOSTS = ["*"]/' apps/apps/settings.py
python apps/manage.py runserver 0:8000

Custom Image

cat << EOF > Dockerfile
FROM python:3.6-alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "apps/manage.py", "runserver", "0:8000"]
EOF

docker build -t my-dj-image .
docker run --name my-dj-container --rm -p 8000:8000 my-dj-image
docker run --name my-dj-container --rm -p 8000:8000 -v $PWD:/app my-dj-image   # You can update your code
docker run --name my-dj-container --rm -p 8000:8000 -v $PWD:/app -d my-dj-image  # -d Detach
docker exec -it my-dj-container python /app/apps/manage.py migrate
docker exec -it my-dj-container python /app/apps/manage.py createsuperuser

Go Language Setup for Multiple Projects

When working with the Go language you must set up the GOPATH environment variable, but soon you will face two problems:

  • Each project should have its own Go dependencies and its own Git code repo, so putting your source under GOPATH would be problematic.
  • When working with “Atom” with the “Go Plus” plugin, it needs to install several Go packages, which would pollute your own source.

To solve both problems I added the following to my “.bash_login”:

export GOPATH=$HOME/go_sandbox
alias atom='PATH=$PATH:$HOME/go/bin GOPATH=$HOME/go:$GOPATH atom'
gpp() {
        export GOPATH=`pwd`
}
It performs the following:

  • Set a default GOPATH to $HOME/go_sandbox for testing small Go projects
  • Set an alias for “atom” that adds an extra directory to GOPATH and PATH, so whatever the Go Plus plugin installs won’t be added to your current GOPATH
  • Set up a “gpp” function to quickly change GOPATH to the current directory
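A quick demonstration of the “gpp” helper in action (the /tmp/myproj directory is just a hypothetical example project):

```shell
# Redefine the "gpp" helper from ".bash_login" so this snippet is self-contained
gpp() {
        export GOPATH=`pwd`
}

# Hypothetical per-project directory; any path works
mkdir -p /tmp/myproj
cd /tmp/myproj
gpp
echo $GOPATH    # -> /tmp/myproj
```

From here `go get` will install that project's dependencies under /tmp/myproj instead of your global GOPATH.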

Limit SSH to Copy a Single File Only

I want to allow host-2 to copy a file securely from host-1. The easiest way is to use the “scp” command, which uses “ssh” as a transport to copy the file.

If you want to do it manually, it is a straightforward “scp” invocation:

host-2$ scp host-1:data.csv .

But if you want to automate it you have to use “ssh” keys, and this means leaving a private ssh key on host-2 that can access host-1 without any restriction, i.e.:

host-2$ ssh host-1  # FULL ACCESS NO PASSWORD NEEDED!!

A better way is to generate a new ssh-key on host-2, like:

host-2$ ssh-keygen
host-2$ ls ~/.ssh/id_rsa*
host-2$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHBoO5JciwnRKWzbmZiZ68J7Vouim+ZUNvmsXYeCFa6TDGTmG9Wh1KhAAgQDqTuwL9BcgbOM2qiwOlLMREtH6LYLbbp9RIBIGNb0a8UL3Fka++vziHkTgaqPJ2Uq0Qd8J0oZCqseBQqSMlebO4BxOYuRMqEFn7ETR5N+SM/hq5PeuS5SVGnleJOqaO8Cq5AcoIdlYeRXjDIFw9x7DugHKP4uBTr2o+lft7seyHjYOmrWiX0+GFiDsdTzqIMC+Px3pqY8Hcd4DC2lmYDJCDG7Js3zzvzp8Xs6sBEwqZpECh8TmXZxl5/OHt8XtVCJs0lfqiHhQWFIlsYqPg+4AsjiUP

Then add the key to host-1’s authorized_keys file, with one small change:

host-1$ vi ~/.ssh/authorized_keys
command="scp -f data.csv" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHBoO5JciwnRKWzbmZiZ68J7Vouim+ZUNvmsXYeCFa6TDGTmG9Wh1KhAAgQDqTuwL9BcgbOM2qiwOlLMREtH6LYLbbp9RIBIGNb0a8UL3Fka++vziHkTgaqPJ2Uq0Qd8J0oZCqseBQqSMlebO4BxOYuRMqEFn7ETR5N+SM/hq5PeuS5SVGnleJOqaO8Cq5AcoIdlYeRXjDIFw9x7DugHKP4uBTr2o+lft7seyHjYOmrWiX0+GFiDsdTzqIMC+Px3pqY8Hcd4DC2lmYDJCDG7Js3zzvzp8Xs6sBEwqZpECh8TmXZxl5/OHt8XtVCJs0lfqiHhQWFIlsYqPg+4AsjiUP

Notice the command part, which limits the given key to a single command.

NOTE: the public key is the same one generated from previous step on host-2
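The key can be locked down even further with additional authorized_keys options that disable terminals and forwarding; a sketch using the same public key (material abbreviated here):

```
command="scp -f data.csv",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza...
```

These options are documented in the sshd(8) manual under "AUTHORIZED_KEYS FILE FORMAT".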

Now if you try to access the machine it will fail.

host-2$ ssh host-1
Connection to host-1 closed.

Even if you try to copy another file, it will download the file you specified in authorized_keys:

host-2$ scp host-1:data.xml .
data.csv    100%

Notice that it downloaded the data.csv and not data.xml!

“sar” command cheat sheet

“sar” is a Unix command that collects, reports, and saves system activity information. It differs from other system status commands like “top” or “vmstat”, which only show real-time status; “sar” collects this data over time, so you can find the system state at any point in the past.


# Live values: interval count
sar 1 3

# historical values

Previous Days

# Day 11 of current month Ubuntu
sar -f /var/log/sysstat/sa11

# Day 11 of current month CentOS
sar -f /var/log/sa/sa11

Time Range

# show from 10:00 am to 11:00 am
sar -s 10:00:00 -e 11:00:00

Data Options

sar      # CPU 
sar -r   # RAM
sar -b   # Disk

Mixing options

sar   -b   -s 10:00:00 -e 11:00:00   -f /var/log/sa/sa11  
-b   # disk 
-s   # from 10:00 to 11:00
-f   # day 11
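For scripting, sar output can be post-processed with standard text tools; a minimal sketch that pulls the average %idle out of a captured report (the sample line below mimics the default CPU report, where %idle is the last column; the exact layout can vary between sysstat versions):

```shell
# Sample "Average:" line as printed by "sar 1 3" (CPU report)
sample='Average:        all      2.01      0.00      0.50      0.10      0.00     97.39'

# Print the last column (%idle) of the Average line
echo "$sample" | awk '/^Average:/ {print $NF}'   # -> 97.39
```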



Installation

# CentOS
$ sudo yum install sysstat

# Ubuntu
$ sudo apt-get install sysstat
$ sudo vi /etc/default/sysstat   # set ENABLED="true" to enable data collection


Monitoring Servers with Munin

This is a draft on configuring Munin to monitor services on a Linux machine. It is still rough, but published for my reference; if you have questions let me know.

Monitoring Servers

sudo apt-get install munin

sudo htpasswd -c /etc/munin/htpasswd rayed   # htpasswd is in the apache2-utils package

sudo vi /etc/munin/munin-conf.d/example_com_site
[example.com]                # example host entry
    address 192.0.2.10       # address of the monitored node (example IP)
    use_node_name yes

sudo vi /etc/nginx/sites-enabled/default
server {
        location /munin/static/ {
                alias /etc/munin/static/;
                expires modified +1w;
        }

        location /munin/ {
                auth_basic            "Restricted";
                auth_basic_user_file  /etc/munin/htpasswd;

                alias /var/cache/munin/www/;
                expires modified +310s;
        }
}

Monitored Servers

sudo apt-get install munin-node
sudo apt-get install munin-plugins-extra

sudo vi /etc/munin/munin-node.conf
allow ^172\.18\.100\.100$   # monitoring server address


munin-node-configure is a really useful command: you can use it to install all the plugins you need. When you run it, it will test whether each plugin can be used or not (--suggest argument), and can even print the commands needed to link the plugins automatically (--shell argument).

sudo munin-node-configure --suggest
sudo munin-node-configure --shell

MySQL plugin

sudo apt-get install libcache-perl libcache-cache-perl

sudo munin-node-configure --suggest --shell | sh

sudo munin-run mysql_commands 

sudo service munin-node restart

Memcached plugin

sudo aptitude install libcache-memcached-perl

sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_bytes
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_counters
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_rates

sudo munin-run memcached_counters 

sudo service munin-node restart

Nginx Web Server

Configure Nginx to report its status under the URL http://localhost/nginx_status, which will be read by the Munin Nginx plugin:

sudo vi /etc/nginx/sites-enabled/default
server {
        # STATUS
        location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;   # the Munin plugin connects from localhost
                allow ::1;
                allow my_ip;       # replace my_ip with your own address
                deny all;
        }
}

sudo service nginx restart

Configure Munin:

sudo apt-get install libwww-perl

sudo ln -s '/usr/share/munin/plugins/nginx_request' '/etc/munin/plugins/nginx_request'
sudo ln -s '/usr/share/munin/plugins/nginx_status' '/etc/munin/plugins/nginx_status'

sudo munin-run nginx_request

sudo service munin-node restart


PostgreSQL plugin

It is better to use munin-node-configure to configure and install the PostgreSQL plugin, because it will detect the installed databases and configure a graph for each one.

sudo apt-get install libdbd-pg-perl
sudo munin-node-configure --suggest
sudo sh -c  'munin-node-configure --shell | grep postgres | sh '
sudo service munin-node restart