Go Language Setup for Multiple Projects

When working with the Go language you must set up the GOPATH environment variable, but you will soon face two problems:

  • Each project should have its own Go dependencies and its own Git repository, so putting your source under GOPATH would be problematic.
  • When working with “Atom” and its “Go Plus” plugin, the plugin needs to install several Go packages, which would pollute your own source.

To solve both problems I added the following to my “.bash_login”:

export GOPATH=$HOME/go
alias atom='PATH=$PATH:$HOME/go/bin GOPATH=$HOME/go:$GOPATH atom'
gpp() {
        export GOPATH=`pwd`
}

It performs the following:

  • Sets the default GOPATH to $HOME/go for testing small Go projects
  • Sets an alias for “atom” that adds an extra directory to GOPATH and PATH, so whatever the Go Plus plugin installs won’t be added to your current GOPATH
  • Sets up a “gpp” function to quickly change GOPATH to the current directory
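
For example, the “gpp” function lets you flip any checkout into its own self-contained workspace (the project directory here is illustrative):

```shell
# Same function as in .bash_login: point GOPATH at the current directory
gpp() {
        export GOPATH=`pwd`
}

mkdir -p /tmp/myproject/src/hello    # hypothetical project layout
cd /tmp/myproject
gpp
echo "$GOPATH"                       # now /tmp/myproject
```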

Limit SSH to Copy a Single File Only

I want to allow host-2 to securely copy a file from host-1. The easiest way is the “scp” command, which uses “ssh” as a transport to copy the file.

If you want to do it manually, it is a straightforward “scp” invocation:

host-2$ scp host-1:data.csv .

But if you want to automate it you have to use “ssh” keys, which means leaving a private ssh key on host-2 that can access host-1 without any restriction, i.e.

host-2$ ssh host-1  # FULL ACCESS NO PASSWORD NEEDED!!

A better way is to generate a new ssh-key on host-2, like:

host-2$ ssh-keygen
:
host-2$ ls ~/.ssh/id_rsa*
id_rsa
id_rsa.pub
host-2$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHBoO5JciwnRKWzbmZiZ68J7Vouim+ZUNvmsXYeCFa6TDGTmG9Wh1KhAAgQDqTuwL9BcgbOM2qiwOlLMREtH6LYLbbp9RIBIGNb0a8UL3Fka++vziHkTgaqPJ2Uq0Qd8J0oZCqseBQqSMlebO4BxOYuRMqEFn7ETR5N+SM/hq5PeuS5SVGnleJOqaO8Cq5AcoIdlYeRXjDIFw9x7DugHKP4uBTr2o+lft7seyHjYOmrWiX0+GFiDsdTzqIMC+Px3pqY8Hcd4DC2lmYDJCDG7Js3zzvzp8Xs6sBEwqZpECh8TmXZxl5/OHt8XtVCJs0lfqiHhQWFIlsYqPg+4AsjiUP

Then add the key to host-1’s authorized_keys file with one small change:

host-1$ vi ~/.ssh/authorized_keys
:
command="scp -f data.csv" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHBoO5JciwnRKWzbmZiZ68J7Vouim+ZUNvmsXYeCFa6TDGTmG9Wh1KhAAgQDqTuwL9BcgbOM2qiwOlLMREtH6LYLbbp9RIBIGNb0a8UL3Fka++vziHkTgaqPJ2Uq0Qd8J0oZCqseBQqSMlebO4BxOYuRMqEFn7ETR5N+SM/hq5PeuS5SVGnleJOqaO8Cq5AcoIdlYeRXjDIFw9x7DugHKP4uBTr2o+lft7seyHjYOmrWiX0+GFiDsdTzqIMC+Px3pqY8Hcd4DC2lmYDJCDG7Js3zzvzp8Xs6sBEwqZpECh8TmXZxl5/OHt8XtVCJs0lfqiHhQWFIlsYqPg+4AsjiUP

Notice the command part, which limits the given key to a single command.

NOTE: the public key is the same one generated in the previous step on host-2.

Now if you try to log in to the machine it will fail:

host-2$ ssh host-1
Connection to host-1 closed.

Even if you try to copy another file, it will download only the file you specified in authorized_keys:

host-2$ scp host-1:data.xml .
data.csv    100%

Notice that it downloaded data.csv and not data.xml!
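
With the key locked down like this, the copy can be automated safely from host-2’s crontab (the schedule and destination path here are illustrative):

```crontab
# Fetch data.csv from host-1 every night at 02:00; thanks to the
# command= restriction in authorized_keys, this key can never do anything else.
0 2 * * * scp -q host-1:data.csv /home/backup/data.csv
```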

“sar” command cheat sheet

“sar” is a Unix command that collects, reports, and saves system activity information. It differs from system status commands like “top” or “vmstat”, which only show real-time status; “sar”, on the other hand, records the data so you can check the system state at any point in time.

OPTIONS

# Live values: interval count
sar 1 3

# historical values
sar

Previous Days

# Day 11 of the current month (Ubuntu)
sar -f /var/log/sysstat/sa11

# Day 11 of the current month (CentOS)
sar -f /var/log/sa/sa11

Time Range

# show from 10:00 am to 11:00 am
sar -s 10:00:00 -e 11:00:00

Data Options

sar      # CPU
sar -r   # memory
sar -b   # disk I/O

Mixing options

sar -b -s 10:00:00 -e 11:00:00 -f /var/log/sa/sa11
-b        # disk I/O
-s / -e   # from 10:00 to 11:00
-f        # day 11

Installation

CentOS

$ sudo yum install sysstat

Ubuntu

$ sudo apt-get install sysstat
$ sudo vi /etc/default/sysstat
ENABLED="true"

More info:
http://www.linuxjournal.com/content/sysadmins-toolbox-sar

Monitoring Servers with Munin

This is a draft on configuring Munin to monitor services on a Linux machine. It is still rough, but published for my own reference; if you have questions let me know.

Monitoring Servers

sudo apt-get install munin

sudo htpasswd -c /etc/munin/htpasswd rayed

sudo vi /etc/munin/munin-conf.d/example_com_site
:
[munin-node.example.com]
    address munin-node.example.com
    use_node_name yes
:


sudo vi /etc/nginx/sites-enabled/default
server {
:
location /munin/static/ {
        alias /etc/munin/static/;
        expires modified +1w;
}

location /munin/ {
        auth_basic            "Restricted";
        auth_basic_user_file  /etc/munin/htpasswd;

        alias /var/cache/munin/www/;
        expires modified +310s;
}
:
}

Monitored Servers

sudo apt-get install munin-node
sudo apt-get install munin-plugins-extra

sudo vi /etc/munin/munin-node.conf
:
allow ^172\.18\.100\.100$   # monitoring server address
:

munin-node-configure

munin-node-configure is a really useful command: you can use it to install all the plugins you need. When run, it tests whether each plugin can be used (the --suggest argument) and can even print the commands needed to link the plugins automatically (the --shell argument):

sudo munin-node-configure --suggest
sudo munin-node-configure --shell

MySQL plugin

sudo apt-get install libcache-perl libcache-cache-perl

sudo munin-node-configure --suggest --shell | sh

sudo munin-run mysql_commands 

sudo service munin-node restart

Memcached plugin

sudo aptitude install libcache-memcached-perl

sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_bytes
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_counters
sudo ln -s /usr/share/munin/plugins/memcached_ /etc/munin/plugins/memcached_rates

sudo munin-run memcached_counters 

sudo service munin-node restart

Nginx Web Server

Configure Nginx to report its status under the URL http://localhost/nginx_status, which will be read by the Munin Nginx plugin:

sudo vi /etc/nginx/sites-enabled/default
server {
:
        # STATUS
        location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                allow ::1;
                allow my_ip;
                deny all;
        }
:
}

sudo service nginx restart

Configure Munin:

sudo apt-get install libwww-perl

sudo ln -s '/usr/share/munin/plugins/nginx_request' '/etc/munin/plugins/nginx_request'
sudo ln -s '/usr/share/munin/plugins/nginx_status' '/etc/munin/plugins/nginx_status'

sudo munin-run nginx_request

sudo service munin-node restart

Postgres

It is better to use munin-node-configure to configure and install the Postgres plugin, because it will detect the installed databases and configure a graph for each one.

sudo apt-get install libdbd-pg-perl
sudo munin-node-configure --suggest
sudo sh -c  'munin-node-configure --shell | grep postgres | sh '
sudo service munin-node restart

Django memory leak with gunicorn

tl;dr: add “--max-requests” to Gunicorn to easily solve memory leak problems.

If you have a long-running job that leaks a few bytes of memory, it will eventually consume all of your memory.

Of course you need to find the memory leak and fix it, but sometimes you can’t, because it is in code that you use but don’t own.

The Apache web server solves this problem with the “MaxRequestsPerChild” directive, which tells an Apache worker process to exit after serving a specified number of requests (e.g. 1000), freeing all the memory the process acquired during operation.

I had a similar problem with Django under Gunicorn: my Gunicorn workers’ memory kept growing and growing. To solve it I used Gunicorn’s “--max-requests” option, which works the same as Apache’s “MaxRequestsPerChild”:

gunicorn apps.wsgi:application -b 127.0.0.1:8080 --workers 8 --max-requests 1000
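
The same setting can also live in a Gunicorn config file; max_requests_jitter restarts workers at slightly different points so they don’t all recycle at the same moment (the values here are illustrative):

```python
# gunicorn.conf.py -- start with: gunicorn -c gunicorn.conf.py apps.wsgi:application
bind = "127.0.0.1:8080"
workers = 8
max_requests = 1000        # recycle each worker after 1000 requests
max_requests_jitter = 50   # stagger restarts so workers don't all die together
```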

Protect your Server with Fail2Ban

Fail2ban is a program that scans your log files for malicious behavior and automatically blocks the offending IP.

The default Fail2ban installation on Ubuntu protects ssh, but in this article I will show how to protect against WordPress comment spammers too, to slow them down.

Installation & Configuration

# Install fail2ban
$ sudo apt-get install fail2ban

# Copy default config to custom config
$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Add your own IPs so they never get blocked
$ sudo vi /etc/fail2ban/jail.local
:
ignoreip = 127.0.0.1/8 10.0.0.0/8 192.168.1.5
:

# restart it
$ sudo service fail2ban restart

Fail2ban is now configured and running.

You can use the following commands to inspect and troubleshoot its operation:

# fail2ban usually adds new rules to your IPTables
$ sudo iptables -L

# You can check the status of a specific jail with:
$ sudo fail2ban-client status ssh

# and of course check the log to see if it is working:
$ sudo tail -f /var/log/fail2ban.log

Protecting WordPress Comments

By default fail2ban protects many services including ssh, but let’s assume you want to protect WordPress from spam bots trying to post comments on your blog.

First we add a filter to catch the attempts by creating new filter file named “/etc/fail2ban/filter.d/wordpress-comment.conf”:

$ sudo vi /etc/fail2ban/filter.d/wordpress-comment.conf 
#
# Block IPs trying to post many comments
#
[Definition]
failregex = ^<HOST> -.*POST /wordpress/wp-comments-post.php
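
Fail2ban expands the <HOST> tag into a capture group matching an IP or hostname. As a rough sketch of what the filter does (substituting a simple IPv4 pattern, against a made-up log line):

```python
import re

failregex = r"^<HOST> -.*POST /wordpress/wp-comments-post.php"

# <HOST> is a fail2ban placeholder; approximate it with an IPv4 pattern
pattern = re.compile(failregex.replace("<HOST>", r"(\d{1,3}(?:\.\d{1,3}){3})"))

line = '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "POST /wordpress/wp-comments-post.php HTTP/1.1" 200 1234'
m = pattern.match(line)
print(m.group(1))  # 203.0.113.7 -- the IP fail2ban would ban
```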

Then we create a new JAIL by adding the following to “jail.local” file:

$ sudo vi /etc/fail2ban/jail.local
:
:
[wordpress-comment]
enabled = true
port = http,https
filter = wordpress-comment
logpath = /var/log/apache2/*access*.log
bantime = 3600
maxretry = 5

Then restart fail2ban using:

sudo service fail2ban restart

Note: To test whether your filter works you can use the “fail2ban-regex” command:

fail2ban-regex /var/log/apache2/other_vhosts_access.log /etc/fail2ban/filter.d/wordpress-comment.conf