Rsnapshot on OSX

Rsnapshot is a backup solution for Unix machines, including Linux and OSX. It supports many great features, including full backups that take only the space of an incremental backup, and it can back up both local and remote machines.

Check my previous post, Backup Journey to rsnapshot, for more details.

Install Rsnapshot

The easiest way to install Unix programs on OSX is Homebrew. After installing Homebrew, install rsnapshot and the programs it needs:

$ brew update
$ brew install rsnapshot
$ brew install coreutils

“coreutils” is needed because the “cp” command that ships with OSX doesn’t support the GNU “cp” options rsnapshot needs.


I will be running rsnapshot from a regular user account rather than root; I will be using my user “rayed”:

$ mkdir /Users/rayed/rsnapshot
$ cat > /Users/rayed/rsnapshot/rsnapshot.conf << EOF
# rsnapshot.conf - rsnapshot configuration file #
#                                               #
#                                               #
# This file requires tabs between elements      #
#                                               #
# Directories require a trailing slash:         #
#   right: /home/                               #
#   wrong: /home                                #
#                                               #
config_version	1.2

snapshot_root	/Users/rayed/rsnapshot/

cmd_cp		/usr/local/bin/gcp
cmd_rm		/bin/rm
cmd_rsync	/usr/bin/rsync
cmd_ssh		/usr/bin/ssh
cmd_logger	/usr/bin/logger

retain		hourly	4
retain		daily	7
retain		weekly	4
retain		monthly	3

verbose		2
loglevel	3
logfile		/Users/rayed/rsnapshot/rsnapshot.log
lockfile	/Users/rayed/

# Backup local machine
backup		/Users/rayed/	localhost/
# Backup remote machine
EOF

Important notes about the configuration files:

  • Use TABs not spaces between elements.
  • Directories require a trailing slash, i.e. /home/ not /home
  • On OSX we changed “cmd_cp” from “/bin/cp” to “/usr/local/bin/gcp” to support the GNU options rsnapshot needs.
  • For remote machine backup, make sure you use ssh keys; Rsnapshot can’t use passwords.
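A minimal sketch of the ssh-key setup looks like this (the key path and the remote user/host are illustrative, and the key is created without a passphrase so cron can use it):

```shell
# Create a dedicated passphrase-less key for rsnapshot (the key path is illustrative)
ssh-keygen -t rsa -N "" -f /tmp/rsnapshot_key -q
# Install the public key on the machine to be backed up (replace user/host):
# ssh-copy-id -i /tmp/rsnapshot_key.pub user@remote.example.com
ls /tmp/rsnapshot_key.pub
```

After copying the key over, point rsnapshot at it by adding the `-i` option to `cmd_ssh`'s arguments (via `ssh_args`) if you don't use the default key location.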

Running rsnapshot

To test your setup try the following command:

rsnapshot -c /Users/rayed/rsnapshot/rsnapshot.conf hourly
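Before scheduling it, you can also ask rsnapshot to validate the configuration file, and do a dry run that only prints the commands it would execute; both “configtest” and “-t” are standard rsnapshot options:

```
$ rsnapshot -c /Users/rayed/rsnapshot/rsnapshot.conf configtest
$ rsnapshot -c /Users/rayed/rsnapshot/rsnapshot.conf -t hourly
```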

When everything works fine, configure rsnapshot to run periodically from cron by installing the following lines:

$ crontab -e
# RSnapshot
0 */6 * * *	/usr/local/bin/rsnapshot -c /Users/rayed/rsnapshot/rsnapshot.conf hourly
30 3 * * *	/usr/local/bin/rsnapshot -c /Users/rayed/rsnapshot/rsnapshot.conf daily
0  3 * * 1	/usr/local/bin/rsnapshot -c /Users/rayed/rsnapshot/rsnapshot.conf weekly
30 2 1 * *	/usr/local/bin/rsnapshot -c /Users/rayed/rsnapshot/rsnapshot.conf monthly

After a few hours, double check your setup by reviewing the log file “/Users/rayed/rsnapshot/rsnapshot.log”, and check the rsnapshot root directory for your backups; you should see something like:

$ ls -l /Users/rayed/rsnapshot/
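A typical listing after the jobs have run for a while looks something like this (directory names depend on your retain settings; the metadata columns are trimmed here). Each directory is a complete snapshot, but unchanged files are hard links, so total disk usage stays close to a single copy:

```
drwxr-xr-x  ...  daily.0
drwxr-xr-x  ...  hourly.0
drwxr-xr-x  ...  hourly.1
```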

Monitoring Servers with Munin

This is a draft on configuring Munin to monitor services on a Linux machine. It is still rough, but I am publishing it for my own reference; if you have questions let me know.

Monitoring Servers

sudo apt-get install munin

sudo htpasswd -c /etc/munin/htpasswd rayed

sudo vi /etc/munin/munin-conf.d/example_com_site
[example.com]
    address 192.0.2.10   # address of the monitored node (example value)
    use_node_name yes

sudo vi /etc/nginx/sites-enabled/default
server {
    location /munin/static/ {
        alias /etc/munin/static/;
        expires modified +1w;
    }

    location /munin/ {
        auth_basic            "Restricted";
        auth_basic_user_file  /etc/munin/htpasswd;
        alias /var/cache/munin/www/;
        expires modified +310s;
    }
}

Monitored Servers

sudo apt-get install munin-node
sudo apt-get install munin-plugins-extra

sudo vi /etc/munin/munin-node.conf
allow ^172\.18\.100\.100$   # monitoring server address
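The “allow” directive takes a regular expression that munin-node matches against the connecting address. You can sanity-check the pattern's anchoring and escaped dots (approximated here with grep -E):

```shell
# The allow pattern must match the monitoring server's address exactly
echo '172.18.100.100' | grep -E '^172\.18\.100\.100$' && echo ALLOWED
# Any other address must not match
echo '172.18.100.101' | grep -E '^172\.18\.100\.100$' || echo BLOCKED
```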


munin-node-configure is a really useful command: you can use it to install all the plugins you need. When you run it, it tests whether each plugin can be used (the --suggest argument), and it can even print the commands needed to link the plugins automatically (the --shell argument):

sudo munin-node-configure --suggest
sudo munin-node-configure --shell

MySQL plugin

sudo apt-get install libcache-perl libcache-cache-perl

sudo munin-node-configure --suggest --shell | sh

sudo munin-run mysql_commands 

sudo service munin-node restart

Memcached plugin

sudo aptitude install libcache-memcached-perl

sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_bytes
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_counters
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_rates

sudo munin-run memcached_counters 

sudo service munin-node restart

Nginx Web Server

Configure Nginx to report its status under the URL http://localhost/nginx_status, which will be read by the Munin Nginx plugin:

sudo vi /etc/nginx/sites-enabled/default
server {
        # STATUS
        location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;   # the Munin plugin connects from localhost
                allow ::1;
                allow my_ip;       # replace my_ip with your monitoring server's address
                deny all;
        }
}

sudo service nginx restart

Configure Munin:

sudo apt-get install libwww-perl

sudo ln -s '/usr/share/munin/plugins/nginx_request' '/etc/munin/plugins/nginx_request'
sudo ln -s '/usr/share/munin/plugins/nginx_status' '/etc/munin/plugins/nginx_status'

sudo munin-run nginx_request

sudo service munin-node restart


It is better to use munin-node-configure to configure and install the postgres plugin, because it will detect the installed databases and configure a graph for each.

sudo apt-get install libdbd-pg-perl
sudo munin-node-configure --suggest
sudo sh -c  'munin-node-configure --shell | grep postgres | sh '
sudo service munin-node restart

Django memory leak with gunicorn

tl;dr add “--max-requests” to Gunicorn to easily solve memory leak problems.

If you have a long-running job that leaks a few bytes of memory, it will eventually consume all of your memory.

Of course you should find the memory leak and fix it, but sometimes you can’t, because it is in code you use rather than code you own.

The Apache web server solves this problem with the “MaxRequestsPerChild” directive, which tells an Apache worker process to die after serving a specified number of requests (e.g. 1000), freeing all the memory the process acquired during operation.

I had a similar problem with Django under Gunicorn: my Gunicorn workers’ memory kept growing and growing. To solve it I used the Gunicorn option “--max-requests”, which works the same way as Apache’s “MaxRequestsPerChild”:

gunicorn apps.wsgi:application -b --workers 8 --max-requests 1000

Django Themes (or where to put base.html?)

The Wrong Way

I used to create a new directory to hold common templates like “base.html”, and add it to TEMPLATE_DIRS in the settings.py file:

TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'templates') ,)

But in most cases “base.html” needs CSS, JS, and image files to be functional, so I changed the URL routing to serve them (in DEBUG mode only), something like:

$ vi apps/urls.py
# Serve Static Files
from django.conf import settings
from django.conf.urls import patterns, url
from django.conf.urls.static import static
if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
    urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
    urlpatterns += patterns('django.views.static',
        url(r'^(?P<path>(js|css|img)/.*)$', 'serve', {'document_root': settings.BASE_DIR+'/../www'}),
    )

This setup isn’t ideal for many reasons:

  • I had to modify settings.py and urls.py with complicated settings.
  • The theme design spans multiple directories, and isn’t self contained.
  • Switching the design is complicated, and involves many changes.

Django Simple Themes

Nowadays I create a new Django application, e.g. “my_theme”, to hold my “base.html” template and all the needed static files (CSS, JS, images, etc.).

./manage.py startapp my_theme

Then add it to INSTALLED_APPS:

INSTALLED_APPS = (
    # django core apps ...
    'my_theme',
    # other apps ...
)

The directory structure for my new app looks like this:
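Reconstructing it from the static path used in the template below, the layout would be something like this (the css/js/img names and logo.png are illustrative; repeating the app name under static/ keeps collected files namespaced):

```
my_theme/
├── templates/
│   └── base.html
└── static/
    └── my_theme/
        ├── css/
        ├── js/
        └── img/
            └── logo.png
```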


and from my “base.html” (or any other template) I can access the static files using the static tag:

{% load staticfiles %}
<img src="{% static "my_theme/img/logo.png" %}" />

I don’t even need to change the urls.py file to access the static files, since the development server (i.e. ./manage.py runserver) already knows how to find them.

But for production I have to define:

STATIC_ROOT = os.path.join(BASE_DIR, '../www/static')

and run:

./manage.py collectstatic --noinput

New Theme

By having all theme files inside an application, I can start a new theme by copying “my_theme” to something like “new_theme” and replacing it in INSTALLED_APPS in the settings.py file.

What about uploaded files?

To access uploaded files from the development server you need to define both MEDIA_URL and MEDIA_ROOT and change your urls.py:

$ vi apps/urls.py
from django.conf import settings
from django.conf.urls.static import static
if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

Source: Serving files uploaded by a user during development

Sample Theme

You can download my sample theme from:

Protect your Server with Fail2Ban

Fail2ban is a program that scans your log files for malicious behavior and automatically blocks the offending IPs.

The default Fail2ban installation on Ubuntu protects ssh, but in this article I will show how to protect against WordPress comment spammers too, to slow them down.

Installation & Configuration

# Install fail2ban
$ sudo apt-get install fail2ban

# Copy default config to custom config
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Add your own IPs so they never get blocked
$ sudo vi /etc/fail2ban/jail.local
ignoreip =

# restart it
$ sudo service fail2ban restart

Fail2ban is now configured and running.

You can use the following commands to inspect and trouble shoot its operation:

# fail2ban usually add new rules to your IPTables
$ sudo iptables -L

# You can check the status of specific rules using the command:
$ sudo fail2ban-client status ssh

# and of course check log to see if it is working:
$ sudo tail -f /var/log/fail2ban.log 

Protecting WordPress Comments

By default fail2ban protects ssh, but let’s assume you also want to protect WordPress from spam bots trying to post comments on your blog.

First we add a filter to catch the attempts by creating new filter file named “/etc/fail2ban/filter.d/wordpress-comment.conf”:

$ sudo vi /etc/fail2ban/filter.d/wordpress-comment.conf 
# Block IPs trying to post many comments
[Definition]
failregex = ^<HOST> -.*POST /wordpress/wp-comments-post.php
ignoreregex =
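You can test a filter properly against your real logs with the fail2ban-regex tool (e.g. `fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/wordpress-comment.conf`). A rough grep approximation, with a made-up log line, looks like:

```shell
# A made-up Apache access-log line for a WordPress comment POST
line='203.0.113.7 - - [10/Oct/2015:13:55:36 +0300] "POST /wordpress/wp-comments-post.php HTTP/1.1" 200 188'
# fail2ban substitutes <HOST> with a host-matching group; approximate it with [0-9.]+
echo "$line" | grep -E '^[0-9.]+ -.*POST /wordpress/wp-comments-post\.php' && echo MATCH
```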

Then we create a new JAIL by adding the following to “jail.local” file:

$ sudo vi /etc/fail2ban/jail.local
[wordpress-comment]
enabled = true
port = http,https
filter = wordpress-comment
logpath = /var/log/apache2/*access*.log
bantime = 3600
maxretry = 5

Then restart fail2ban using:

sudo service fail2ban restart
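Once the new jail is active you can inspect it the same way as the ssh jail; the name passed to fail2ban-client is the jail name:

```
$ sudo fail2ban-client status wordpress-comment
```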

How to Create a Website for Free

Do you have an email address under gmail or hotmail? If you searched for your name, would your Twitter or Facebook page show up?

Why not register a domain name of your own, with your email address under it, for free as well?

We need the following services:

  1. Domain name registration: the Saudi Network Information Center (SaudiNIC) offers domains under “.sa” for free.
  2. Domain name hosting: ClouDNS offers free DNS hosting. (Yes, registering a domain is one thing and hosting it is another.) I wish the Saudi center offered this service for free too.
  3. Web site hosting; there are many options, the ones I know are:
    1. Github Pages with a custom domain name: Setting up a custom domain with GitHub Pages
    2. A site with a custom domain name: Using a custom domain name
  4. Email hosting: ClouDNS offers a free Email Forwarding service for three accounts.

Accelerating Postgres connections with PgBouncer

PgBouncer is a lightweight connection pooler for PostgreSQL. Connection pooling makes opening a Postgres connection much faster, which is important for web applications.

Here I will explain the steps I used to configure it under Ubuntu 14.04 LTS.

Step 1: Configure the users allowed to connect to PgBouncer:

$ sudo vi /etc/pgbouncer/userlist.txt
"rayed"  "pgbouncer_password_for_rayed"

Step 2: Configure the databases PgBouncer will pool connections for, and how PgBouncer will authenticate users:

$ sudo vi /etc/pgbouncer/pgbouncer.ini
[databases]
rayed = host=localhost user=rayed password=postgres_password_for_rayed

[pgbouncer]
auth_type = md5
;auth_file = /8.0/main/global/pg_auth
auth_file = /etc/pgbouncer/userlist.txt

The default value for “auth_type” is “trust”, which means the system user “rayed” will be allowed to connect as the Postgres user “rayed”; I changed it to “md5” to force password checking against the file “/etc/pgbouncer/userlist.txt”.
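The userlist file also accepts Postgres-style MD5 hashes instead of cleartext passwords: the literal prefix “md5” followed by md5(password concatenated with username). A sketch of generating such an entry (the user and password values are the illustrative ones from step 1):

```shell
user='rayed'
password='pgbouncer_password_for_rayed'
# Postgres-style hash: the string "md5" + md5(password || username)
hash="md5$(printf '%s%s' "$password" "$user" | md5sum | cut -d' ' -f1)"
echo "\"$user\"  \"$hash\""
```

This is the same format Postgres stores internally, so the hash can also be copied from the server instead of computed by hand.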

Step 3: Allow PgBouncer to start:

$ sudo vi /etc/default/pgbouncer
START=1

The default value is “0”, which means never start PgBouncer; it is a way to make sure you know what you are doing :)

Step 4: Start PgBouncer:

$ sudo service pgbouncer start

Step 5: Test the connection. By default “psql” connects using port “5432” and PgBouncer uses “6432”, so to test a PgBouncer connection use the following command:

$ psql -p 6432 

If you get an “Auth failed” error, make sure the password you entered is the one you typed in step 1; if the command doesn’t ask for a password, try it with the “-W” option, e.g. “psql -p 6432 -W”.