
PostgreSQL Replication

We will assume we have two servers:

  • Master IP
  • Slave IP


Create replication user “repl”:

sudo -u postgres createuser -P repl
sudo -u postgres psql

Allow “repl” user to connect from the slave:

$ vi /etc/postgresql/9.1/main/pg_hba.conf
host    replication     repl          md5

Set up the replication parameters:

vi /etc/postgresql/9.1/main/postgresql.conf
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 1
wal_keep_segments = 5
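As a quick sanity check on wal_keep_segments: each WAL segment is 16 MB by default in PostgreSQL 9.1, so the value above retains roughly 80 MB of WAL for the standby. A back-of-the-envelope sketch:

```python
# Rough estimate of disk kept for the standby by wal_keep_segments.
SEGMENT_MB = 16        # default WAL segment size in PostgreSQL 9.1
wal_keep_segments = 5  # value from postgresql.conf above

retained_mb = wal_keep_segments * SEGMENT_MB
print(retained_mb)  # → 80
```

If the standby falls further behind than this, it cannot catch up from WAL alone and needs a fresh base copy.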

Create a copy of the database and copy it to the slave:

service postgresql stop
cd /var/lib/postgresql/9.1
tar -czf main.tar.gz main/
service postgresql start
scp main.tar.gz rayed@slave:


Restore the data on the slave:

service postgresql stop
cd /var/lib/postgresql/9.1
rm -rf main/
tar -xzvf ~rayed/main.tar.gz

Create recovery.conf file in data directory:

vi /var/lib/postgresql/9.1/main/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host= port=5432 user=repl password=repl_password'

Edit postgresql.conf:

vi /etc/postgresql/9.1/main/postgresql.conf
hot_standby = on

Start the slave server again, and check the log file:

service postgresql start
tail -f /var/log/postgresql/postgresql-9.1-main.log 
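Once the standby is up you can also verify streaming replication from the master side; one common check (run on the master as a superuser, available since 9.1) is the pg_stat_replication view:

```sql
-- On the master: one row per connected standby; "state" should be "streaming"
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;
```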

For more info:

Login to PostgreSQL without typing a Password

If you want to connect to PostgreSQL without typing a password, you can do it by putting your password in the file “.pgpass”.

The file resides in your home directory and must be readable by you only:

touch ~/.pgpass
chmod 600 ~/.pgpass

The format is simple, one entry per line:

hostname:port:database:username:password

For example (the values here are placeholders):

*:*:mydb:rayed:secret

The “*” means any host using any port.

This of course works for “psql” and for other tools too, like “pg_dump”.
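To illustrate how the lookup works, here is a rough Python sketch of the matching logic (a simplification, not libpq’s actual code; real .pgpass also supports escaping “:” with a backslash, and the entry shown is hypothetical):

```python
def pgpass_lookup(lines, host, port, database, username):
    """Return the password of the first matching .pgpass entry, or None."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        h, p, db, user, password = line.split(":")
        # "*" in any of the first four fields matches anything
        if all(pat in ("*", val) for pat, val in
               [(h, host), (p, port), (db, database), (user, username)]):
            return password
    return None

entries = ["*:*:mydb:rayed:secret"]  # hypothetical entry
print(pgpass_lookup(entries, "db1", "5432", "mydb", "rayed"))  # → secret
```

The first matching line wins, which is why more specific entries should come before wildcard ones.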

Django “render” vs “render_to_response”

Summary: Always use render and not render_to_response

In Django you have more than one way to return a response, but I often get confused between render and render_to_response: render_to_response seems shorter, so why not use it?

To explain, let’s assume a simple posts page:

def article_list(request, template_name='article/list.html'):
    posts = Post.objects.all()
    # DON'T USE
    return render_to_response(template_name, {'posts': posts})

In this example you will be able to access “posts” in your template, but unfortunately you will not have access to other important variables set by context processors, most importantly: user, csrf_token, and messages. To make “render_to_response” pass all these variables you must send a “context_instance” like this:

return render_to_response(template_name, {'posts': posts}, context_instance=RequestContext(request))

Not so short after all, compared to the “render” version:

return render(request, template_name, {'posts': posts})

In fact “render” is shorter than “render_to_response” even without the “context_instance”:

return render(request, template_name, {'posts': posts})
... vs ...
return render_to_response(template_name, {'posts': posts})
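The difference is easy to see if you simulate what RequestContext does: it runs every context processor and merges the results into your explicit dictionary. A plain-Python sketch (not actual Django code; the processors below are hypothetical stand-ins for Django’s auth, CSRF, and messages processors):

```python
def build_context(explicit, processors):
    # RequestContext-style merge: processor output first, explicit values win
    merged = {}
    for proc in processors:
        merged.update(proc())
    merged.update(explicit)
    return merged

processors = [
    lambda: {"user": "rayed"},         # stand-in for the auth processor
    lambda: {"csrf_token": "abc123"},  # stand-in for the CSRF processor
]

plain = {"posts": ["first post"]}  # what bare render_to_response sees
full = build_context({"posts": ["first post"]}, processors)  # what render sees

print(sorted(full))  # → ['csrf_token', 'posts', 'user']
```

With the bare call your template gets only “posts”; with render (or an explicit RequestContext) it gets the processor variables too.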

Install Python Image Library (PIL) on OSX

PIL, or the Python Imaging Library, is a library that allows you to manipulate images in the Python programming language. Trying to install “PIL” using the “pip” tool won’t work perfectly, so here is how to install it properly.

Install Brew

Brew is a package management system that simplifies the installation of software on Mac OS X. You can install it easily with the following command:

ruby -e "$(curl -fsSL"

Install Pil

Actually we will install “pillow”, a fork of “pil”:

brew install samueljohn/python/pillow

Configure your Python Path

To make PIL available to Python we add its path to the PYTHONPATH variable:

vi ~/.bash_profile

Check it

To test if you have PIL installed:

python -c "import PIL.Image"

You shouldn’t get any errors.
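With the import working you can try a quick manipulation; a minimal sketch (the image is created in memory, so no image file is assumed):

```python
from PIL import Image

# Create a small solid-color image in memory and make a half-size thumbnail
img = Image.new("RGB", (100, 50), "red")
thumb = img.resize((50, 25))
print(thumb.size)  # → (50, 25)
```

To work with a real file you would use Image.open() on a path instead of Image.new().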

My New Project: AgentX Implementation in Python

Update: I changed the module name to “pyagentx”; thank you Daniel for the suggestion.

During this Eid vacation I spent many hours working on an AgentX implementation in Python.

You can find the project in GitHub:

What is AgentX?

AgentX is a protocol to extend SNMP agents, defined in RFC 2741.

But what is SNMP? Let’s say you have a Linux machine you want to monitor. You would use the Simple Network Management Protocol, or SNMP for short: you install an SNMP agent on the machine, like Net-SNMP, and from your management station you connect to the SNMP agent and ask it for the data you want to report on, for example the current state of a network link.

What happens if you install new software, like a PostgreSQL DB, and your SNMP agent doesn’t support it, how can you monitor it? The good news: SNMP agents (e.g. Net-SNMP) can be extended in multiple ways; the bad news: most of them are very hard to use.

One of the most flexible options is the AgentX protocol. You build an application that speaks the AgentX protocol (an AgentX subagent); upon startup it contacts the SNMP agent (the AgentX master) and registers the part of the MIB tree that your app will handle. The SNMP agent (AgentX master) then forwards all queries for that subtree to your app, which returns the results to the master, which forwards them to the management station (Cacti, NMS).

Net-SNMP already has an API to build AgentX subagents, and there is a Python module that utilises it, but unfortunately it doesn’t look active, and as far as I know the Net-SNMP API itself isn’t the easiest thing to work with.

This is why I started working on a pure Python implementation of the AgentX protocol. It is already in a working condition and tested with the Net-SNMP agent; I have some more issues to resolve before I can use it in production.

So please, if you are interested in the field, give my module a try, let me know how I can improve it, and also suggest a better name.

Backup Journey to rsnapshot

When I started producing backup-worthy files (code, documents, projects, etc.), I realised the importance of backups after losing important files, which happens to everybody. So I started my journey with backup solutions.

Backup Generation 1: My first backup was a simple directory copy operation. I copied my important directories to an external floppy (then CD), and since it was a manual operation I always forgot about it and my backups were always old.

Backup Generation 2: Later, when I moved to Linux, I automated the backup process using a “cron” job; I backed up everything daily to a single file, ‘backup.tar.gz’.

Backup Generation 3: One day I noticed that I had deleted a file by mistake … no problem, I’ll restore it from backup … but it wasn’t there! I realised that I had deleted the folder 2 days earlier and the backup is overwritten daily! The solution was to back up daily to a different file name, e.g. ‘backup-monday.tar.gz’, to have one week’s worth of backups.

Backup Generation 4: It happened again, I deleted a file and had to restore from backup, but this time I was prepared :) I unarchived ‘backup-monday.tar.gz’ and couldn’t find the file, tried ‘backup-sunday.tar.gz’, not there either; finally I found it in ‘backup-saturday.tar.gz’. It took me a while, but at least I found the file. Now I had another problem though: all these backups were taking a large amount of my disk space.

So far the problems I have are:

  • Backups take a long time to complete: I have to copy all files and directories and compress them!
  • Backups eat my disk space: complete backups for 7 days are too much to handle, and I also want weekly and monthly backups but can’t afford to lose more disk space!
  • Searching and restoring from backups is a very slow process.

Then I found rsnapshot!


rsnapshot is a backup tool that solves all my previous problems and more. This is how it works:

Using rsync and hard links, it is possible to keep multiple, full backups instantly available. The disk space required is just a little more than the space of one full backup, plus incrementals.
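The hard-link trick is easy to demonstrate in a few lines of Python: two directory entries point at the same inode, so the data blocks are stored only once (a sketch using a temporary directory; the file names only mimic rsnapshot’s layout):

```python
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "hourly.0-file")
b = os.path.join(d, "hourly.1-file")

with open(a, "w") as f:
    f.write("unchanged backup data")

os.link(a, b)  # "copy" into the next snapshot without using new data blocks

# Both names point at the same inode, and the link count is now 2
print(os.stat(a).st_ino == os.stat(b).st_ino)  # → True
print(os.stat(a).st_nlink)                     # → 2
```

Only files that actually changed between snapshots get fresh copies; everything else is just another link.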


To install rsnapshot from Ubuntu or Debian systems:

$ sudo aptitude install rsnapshot


rsnapshot isn’t a daemon (a server or a service); it runs periodically as a cron job, and by default it is disabled. To activate it, open the file /etc/cron.d/rsnapshot and uncomment all jobs:

$ sudo vi /etc/cron.d/rsnapshot
# Uncomment all lines to activate rsnapshot 
0 */4 * * * root /usr/bin/rsnapshot hourly 
30 3 * * * root /usr/bin/rsnapshot daily 
0 3 * * 1 root /usr/bin/rsnapshot weekly 
30 2 1 * * root /usr/bin/rsnapshot monthly 


The default configuration for rsnapshot is to back up the following local directories: /home, /etc, and /usr/local. If you want to change this, edit the file /etc/rsnapshot.conf.

$ sudo vi /etc/rsnapshot.conf 
snapshot_root /var/cache/rsnapshot/
retain          hourly  6
retain          daily   7
retain          weekly  4
retain          monthly  3
backup  /home/          localhost/
backup  /etc/           localhost/
backup  /usr/local/     localhost/

Where is My Data?

rsnapshot backs up everything into the directory defined by snapshot_root in the config file, by default /var/cache/rsnapshot/. After running for a few days you would have a directory structure like this:

hourly.0 … hourly.5
daily.0 … daily.6
weekly.0 … weekly.3
monthly.0 … monthly.2
Of course the number of directories reflects the retain values in the configuration.

What I have now are the following backups:

  • Hourly backup: performed every 4 hours, and I keep the last 6 versions, i.e. 24 hours worth of backups.
  • Daily backup: I keep the last 7 version to cover the whole week.
  • Weekly backup: I keep the last 4 weeks to cover a whole month.
  • Monthly backup: I keep the last 3 monthly backups.

To give you a perspective on how little disk space rsnapshot uses: the hourly.0 size is 7 GB, while hourly.1 is only 120 MB.

NOTE: You would need root permission to access the directory /var/cache/rsnapshot

Python auto complete in OSX

If you run the Python shell in OSX you will notice the auto-completion functionality isn’t working. This is caused by Apple’s decision not to ship GNU readline; instead they use libedit (BSD license). To fix the problem I used the following snippet:

import readline
import rlcompleter
if 'libedit' in readline.__doc__:
    readline.parse_and_bind("bind ^I rl_complete")
else:
    readline.parse_and_bind("tab: complete")