Solving Python virtualenv “DistributionNotFound: distribute”

After upgrading my Ubuntu machine from 12.04 to 14.04 I got this error from virtualenvwrapper:

stevedore.extension distribute
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 75, in _load_plugins
    invoke_kwds,
  File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 87, in _load_one_plugin
    plugin = ep.load()
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2087, in load
    if require: self.require(env, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2100, in require
    working_set.resolve(self.dist.requires(self.extras),env,installer)))
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve
    raise DistributionNotFound(req)
DistributionNotFound: distribute

After some investigation I found the cause of the error: I had originally installed virtualenvwrapper using pip rather than Ubuntu’s apt-get, so when I later installed it with apt-get it conflicted with the pip installation.

Solution

  • Remove virtualenvwrapper Ubuntu package: sudo aptitude remove virtualenvwrapper
  • Remove virtualenvwrapper pip package: sudo pip uninstall virtualenvwrapper virtualenv-clone virtualenv stevedore
  • Reinstall virtualenvwrapper Ubuntu package: sudo aptitude install virtualenvwrapper
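
To confirm the fix, a quick check (assuming virtualenvwrapper’s shell functions are loaded) is to open a new shell so the wrapper script is sourced again, then create and remove a throwaway environment:

mkvirtualenv testenv    # should succeed without the traceback
rmvirtualenv testenv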

Monitoring Servers with Munin

This is a draft on configuring Munin to monitor services on a Linux machine. It is still rough, but I am publishing it for my own reference; if you have questions, let me know.

Monitoring Server

sudo apt-get install munin

sudo htpasswd -c /etc/munin/htpasswd rayed
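
If the htpasswd command is missing, it is provided by the apache2-utils package on Ubuntu:

sudo apt-get install apache2-utils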

sudo vi /etc/munin/munin-conf.d/example_com_site
:
[munin-node.example.com]
    address munin-node.example.com
    use_node_name yes
:


sudo vi /etc/nginx/sites-enabled/default
server {
:
location /munin/static/ {
        alias /etc/munin/static/;
        expires modified +1w;
}

location /munin/ {
        auth_basic            "Restricted";
        auth_basic_user_file  /etc/munin/htpasswd;

        alias /var/cache/munin/www/;
        expires modified +310s;
}
:
}
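
Then restart Nginx to apply the new configuration:

sudo service nginx restart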

Monitored Servers

sudo apt-get install munin-node
sudo apt-get install munin-plugins-extra

sudo vi /etc/munin/munin-node.conf
:
allow ^172\.18\.100\.100$   # monitoring server address
:
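
Restart munin-node so the new allow rule takes effect:

sudo service munin-node restart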

munin-node-configure

munin-node-configure is a really useful command: it can install all the plugins you need, test whether each plugin can be used or not (the --suggest argument), and even print the commands needed to link the plugins automatically (the --shell argument).

sudo munin-node-configure --suggest
sudo munin-node-configure --shell

MySQL plugin

sudo apt-get install libcache-perl libcache-cache-perl

sudo sh -c 'munin-node-configure --suggest --shell | sh'

sudo munin-run mysql_commands 

sudo service munin-node restart

Memcached plugin

sudo aptitude install libcache-memcached-perl

sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_bytes
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_counters
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_rates

sudo munin-run memcached_counters 

sudo service munin-node restart

Nginx Web Server

Configure Nginx to report its status under the URL http://localhost/nginx_status, which will be read by the Munin Nginx plugin:

sudo vi /etc/nginx/sites-enabled/default
server {
:
        # STATUS
        location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                allow ::1;
                allow my_ip;       # replace my_ip with your own IP address
                deny all;
        }
:
}

sudo service nginx restart

Configure Munin:

sudo apt-get install libwww-perl

sudo ln -s '/usr/share/munin/plugins/nginx_request' '/etc/munin/plugins/nginx_request'
sudo ln -s '/usr/share/munin/plugins/nginx_status' '/etc/munin/plugins/nginx_status'

sudo munin-run nginx_request

sudo service munin-node restart

Postgres

It is better to use munin-node-configure to configure and install the Postgres plugin, because it will detect the installed databases and configure a graph for each.

sudo apt-get install libdbd-pg-perl
sudo munin-node-configure --suggest
sudo sh -c  'munin-node-configure --shell | grep postgres | sh '
sudo service munin-node restart

Protect your Server with Fail2Ban

Fail2ban is a program that scans your log files for malicious behavior and automatically blocks the offending IPs.

The default Fail2ban installation on Ubuntu protects ssh, but in this article I will show how to protect against WordPress comment spammers too, to slow them down.

Installation & Configuration

# Install fail2ban
$ sudo apt-get install fail2ban

# Copy default config to custom config
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Add your own IPs so they never get blocked
$ sudo vi /etc/fail2ban/jail.local
:
ignoreip = 127.0.0.1/8 10.0.0.0/8 192.168.1.5
:

# restart it
$ sudo service fail2ban restart

Fail2ban is now configured and running.

You can use the following commands to inspect and trouble shoot its operation:

# fail2ban usually add new rules to your IPTables
$ sudo iptables -L

# You can check the status of specific rules using the command:
$ sudo fail2ban-client status ssh

# and of course check log to see if it is working:
$ sudo tail -f /var/log/fail2ban.log 

Protecting WordPress Comments

By default fail2ban protect many services including ssh but let’s assume you want to protect WordPress from spam bots trying to post comments on your blog.

First we add a filter to catch the attempts by creating new filter file named “/etc/fail2ban/filter.d/wordpress-comment.conf”:

$ sudo vi /etc/fail2ban/filter.d/wordpress-comment.conf 
#
# Block IPs trying to post many comments
#
[Definition]
failregex = ^<HOST> -.*POST /wordpress/wp-comments-post.php

Then we create a new JAIL by adding the following to “jail.local” file:

$ sudo vi /etc/fail2ban/jail.local
:
:
[wordpress-comment]
enabled = true
port = http,https
filter = wordpress-comment
logpath = /var/log/apache2/*access*.log
bantime = 3600
maxretry = 5

Then restart fail2ban using:

sudo service fail2ban restart

Note: To test whether your filter works you can use the command “fail2ban-regex”:

fail2ban-regex /var/log/apache2/other_vhosts_access.log filter.d/wordpress-comment.conf 

Accelerating Postgres connections with PgBouncer

PgBouncer is a lightweight connection pooler for PostgreSQL. Connection pooling makes opening a Postgres connection much faster, which is important in web applications.

Here I will explain the steps I used to configure it under Ubuntu 14.04 LTS.

Step 1: We configure the users allowed to connect to PgBouncer:

$ sudo vi /etc/pgbouncer/userlist.txt
"rayed"  "pgbouncer_password_for_rayed"

Step 2: We configure the databases PgBouncer will pool for, and how PgBouncer will authenticate users:

$ sudo vi /etc/pgbouncer/pgbouncer.ini
:
[databases]
rayed = host=localhost user=rayed password=postgres_password_for_rayed
:
auth_type = md5
;auth_file = /8.0/main/global/pg_auth
auth_file = /etc/pgbouncer/userlist.txt


The default value for “auth_type” is “trust”, which means the system user “rayed” will be allowed to connect as the Postgres user “rayed”. I changed it to “md5” to force password checking against the file “/etc/pgbouncer/userlist.txt”.

Step 3: We allow PgBouncer to start:

$ sudo vi /etc/default/pgbouncer 
:
START=1
:


The default value is “0”, which means never start PgBouncer; it is a way to make sure you know what you are doing 🙂

Step 4: Starting pgBouncer:

$ sudo service pgbouncer start

Step 5: Test the connection. By default “psql” connects using port “5432” and PgBouncer uses “6432”, so to test a PgBouncer connection we would use the following command:

$ psql -p 6432 


If you get an “Auth failed” error, make sure the password you entered is the one you typed in step 1. If the command doesn’t ask for a password, try it with the “-W” option, e.g. “psql -p 6432 -W”.
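
You can also inspect the pooler itself through PgBouncer’s admin console, assuming your user is listed in admin_users in pgbouncer.ini:

$ psql -p 6432 -U rayed pgbouncer -c 'SHOW POOLS;'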

Ubuntu new server checklist

Create Admin User

As root, create a new user for management; after that you should never use root directly:

root# adduser myuser
root# passwd myuser
# Add user to sudo group
root# usermod -a -G sudo myuser

You should log out from “root” and log in again using your new user.

Setting Up Admin User

Add your public key to the admin user for passwordless access:

$ mkdir ~/.ssh
$ vi ~/.ssh/authorized_keys
:
paste your public key e.g. id_rsa.pub
:
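
SSH is picky about permissions on these files; if key-based login fails, tighten them:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys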

Setup

Change the default editor from nano to vi (if you want):

$ sudo update-alternatives --config editor

Set up system updates without a password:

$ sudo visudo 
:
Cmnd_Alias APTITUDE = /usr/bin/aptitude update, /usr/bin/aptitude upgrade
%sudo ALL=(ALL) NOPASSWD: APTITUDE
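
With this alias in place, members of the sudo group can run exactly these two commands without a password prompt:

$ sudo aptitude update
$ sudo aptitude upgrade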

Fix the timezone:

$ sudo dpkg-reconfigure tzdata

Install “ntp” if not already installed:

$ sudo aptitude install ntp

Change hostname

$ sudo hostname s5.rayed.com
$ sudo sh -c "echo 's5.rayed.com' > /etc/hostname "
$ sudo vi /etc/resolv.conf
:
domain rayed.com
search rayed.com
:
$ sudo vi /etc/hosts 
:
178.79.x.x  s5     s5.rayed.com
:

Update the machine

$ sudo aptitude update
$ sudo aptitude upgrade
$ sudo reboot

Configure Mail Server

I usually install the Exim mail server as an “internet site” but listening on localhost only. These are the needed commands:


sudo aptitude install exim4-daemon-light
sudo dpkg-reconfigure exim4-config

Try sending an email to yourself, and check the log:


date | sendmail rayed@example.com
sudo tail -f /var/log/exim4/mainlog

PostgreSQL Replication

We will assume we have two servers:

  • Master IP 192.168.100.51
  • Slave IP 192.168.100.52

MASTER

Create replication user “repl”:

sudo -u postgres createuser -P repl
sudo -u postgres psql
> ALTER ROLE repl REPLICATION;

Allow “repl” user to connect from the slave:

$ vi /etc/postgresql/9.1/main/pg_hba.conf
:
host    replication     repl            192.168.100.52/32        md5

Set up the replication parameters:

vi /etc/postgresql/9.1/main/postgresql.conf
:   
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 1
wal_keep_segments = 5
:

Create a copy of the database and copy it to the slave:

service postgresql stop
cd /var/lib/postgresql/9.1
tar -czf main.tar.gz main/
service postgresql start
scp main.tar.gz rayed@slave:

SLAVE

Restore data from master:

service postgresql stop
cd /var/lib/postgresql/9.1
rm -rf main   # BE CAREFUL MAKE SURE YOU ARE IN SLAVE
tar -xzvf ~rayed/main.tar.gz
rm /var/lib/postgresql/9.1/main/postmaster.pid

Create a recovery.conf file in the data directory:

vi /var/lib/postgresql/9.1/main/recovery.conf
:   
standby_mode = 'on'
primary_conninfo = 'host=192.168.100.51 port=5432 user=repl password=repl_password'
:   

Edit postgresql.conf:

vi /etc/postgresql/9.1/main/postgresql.conf
:
hot_standby = on
:

Start the slave server again, and check the log file:

service postgresql start
tail -f /var/log/postgresql/postgresql-9.1-main.log 
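
To confirm the slave is connected, you can also query the master (the pg_stat_replication view exists since PostgreSQL 9.1):

sudo -u postgres psql -c "SELECT client_addr, state FROM pg_stat_replication;"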

Backup Journey to rsnapshot

When I started producing backup-worthy files (code, documents, projects, etc.), I realised the importance of backups after losing important files, which happens to everybody. So I started my journey with backup solutions.

Backup Generation 1: My first backup was a simple directory copy operation. I copied my important directories to an external floppy (then CD), and since it was a manual operation I always forgot about it and my backups were always old.

Backup Generation 2: Later, when I moved to Linux, I automated the backup process using a “cron” job that backed up everything daily to a single file ‘backup.tar.gz’.

Backup Generation 3: One day I noticed that I had deleted a file by mistake … no problem, I’ll restore it from backup … but it wasn’t there! I realised that I had deleted the folder two days earlier and the backup was overwritten daily! The solution was to back up daily to a different file name, e.g. ‘backup-monday.tar.gz’, to keep one week’s worth of backups.

Backup Generation 4: It happened again: I deleted a file and had to restore it from backup, but this time I was prepared 🙂 I unarchived ‘backup-monday.tar.gz’ and couldn’t find the file, tried ‘backup-sunday.tar.gz’, not there either; finally I found it in ‘backup-saturday.tar.gz’. It took me a while but at least I found the file. But now I had another problem: all these backups were taking a large amount of my disk space.

So far the problems I have are:

  • Backups take a long time to complete: I have to copy all files and directories and compress them!
  • Backups eat my disk space: a complete backup for 7 days is too much to handle, and I also want weekly and monthly backups but can’t afford to lose more disk space!
  • Searching the backups and restoring files is a very slow process.

Then I found rsnapshot!

rsnapshot

rsnapshot is a backup tool that solves all my previous problems and more. This is how it works:

Using rsync and hard links, it is possible to keep multiple, full backups instantly available. The disk space required is just a little more than the space of one full backup, plus incrementals.

Installation

To install rsnapshot on Ubuntu or Debian systems:

$ sudo aptitude install rsnapshot

Activation

rsnapshot isn’t a daemon (a server or a service); it runs periodically as a cron job, and by default it is disabled. To activate it, open the file /etc/cron.d/rsnapshot and uncomment all the jobs:

$ sudo vi /etc/cron.d/rsnapshot
# Uncomment all lines to activate rsnapshot 
0 */4 * * * root /usr/bin/rsnapshot hourly 
30 3 * * * root /usr/bin/rsnapshot daily 
0 3 * * 1 root /usr/bin/rsnapshot weekly 
30 2 1 * * root /usr/bin/rsnapshot monthly 

Configuration

The default configuration for rsnapshot is to back up the following local directories: /home, /etc, and /usr/local. If you want to change it, edit the file /etc/rsnapshot.conf.

$ sudo vi /etc/rsnapshot.conf 
:
snapshot_root /var/cache/rsnapshot/
:
retain          hourly  6
retain          daily   7
retain          weekly  4
retain          monthly  3
:
# LOCALHOST
backup  /home/          localhost/
backup  /etc/           localhost/
backup  /usr/local/     localhost/
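
Note that fields in rsnapshot.conf must be separated by tabs, not spaces. After editing you can check the syntax with:

$ sudo rsnapshot configtest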

Where is My Data?

rsnapshot backs up everything into the directory defined by snapshot_root in the config file, by default /var/cache/rsnapshot/. After running for a few days you would have the following directory structure:

/var/cache/rsnapshot
                   hourly.0
                           localhost
                                    etc
                                    home
                                    usr
                   hourly.1
                   :
                   daily.0
                   daily.1
                   :
                   weekly.0
                   weekly.1
                   :
                   monthly.0
                   monthly.1
                   :

Of course the number of directories reflects the retain values in the configuration.
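
Since every snapshot is a full directory tree, restoring a file is a plain copy; for example (hypothetical path):

$ sudo cp /var/cache/rsnapshot/daily.2/localhost/home/rayed/notes.txt ~/notes.txt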

What I have now is the following backups:

  • Hourly backup: performed every 4 hours, and I keep the last 6 versions, i.e. 24 hours worth of backups.
  • Daily backup: I keep the last 7 version to cover the whole week.
  • Weekly backup: I keep the last 4 weeks to cover a whole month.
  • Monthly backup: I keep the last 3 monthly backups.

To give you a perspective on how much disk space rsnapshot uses: hourly.0 is 7 GB, while hourly.1 is only 120 MB.
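
You can verify this yourself with du, which counts hard-linked files only once across its arguments:

$ sudo du -sh /var/cache/rsnapshot/hourly.0 /var/cache/rsnapshot/hourly.1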

NOTE: You would need root permission to access the directory /var/cache/rsnapshot.