Category Archives: bash

Munin plugin – MegaRAID HDD temperature using MegaCLI

Munin Exchange approved my plugin recently. I submitted it for approval a few months ago and had already forgotten about it. The plugin is written in Bash and graphs the temperatures of HDDs attached to an LSI MegaRAID controller.

It uses the serial numbers of the HDDs as the graph labels.

Most of our servers, circa 2008 and newer, use LSI cards, especially our Supermicro blades. So if you're using LSI cards as well, check it out.
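
For a rough idea of how the data is gathered, here is a sketch only, not the published plugin: the MegaCli64 path and the "Inquiry Data" / "Drive Temperature" output labels are assumptions on my part, so check the actual code on GitHub for the real parsing.

#!/bin/bash
# Sketch only, not the published plugin. Assumes MegaCli64 lives in
# /opt/MegaRAID/MegaCli/ and that -PDList prints "Inquiry Data" and
# "Drive Temperature" lines for every physical drive.
MEGACLI=${MEGACLI:-/opt/MegaRAID/MegaCli/MegaCli64}

"$MEGACLI" -PDList -aALL \
  | awk '/Inquiry Data/      { serial = $NF }        # last token usually holds the serial
         /Drive Temperature/ { sub(/^[^:]*:/, "");   # keep only the part after the colon
                               print serial".value", $0+0 }'   # Munin "field.value temp" lines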

UPDATE: Munin Exchange is down. They're moving to GitHub, so the links above no longer work.

UPDATE: I moved the code to GitHub. Just follow this link.

How to calculate the total bytes you’ve downloaded if you’re using a Huawei dongle in Ubuntu

Since I'm using Globe Tattoo right now and its SUPERSURF service has a daily cap of 800Mb, I need a way to check my usage.

And I wrote this one liner to do just that:
$ pcregrep "$(date +%b)\s+$(date +%e).+pppd.+received" /var/log/messages | perl -e 'use strict; my $t=0; while(<>) { if(m/received (\d+)\s+/) { $t=$t+$1; } } print "$t\n";'

If pcregrep is not installed on your system, you can install it by running: sudo apt-get install -y pcregrep

The downside of this approach is that I have to disconnect first to get an accurate reading, since pppd only logs the sent/received byte counts when a connection ends. If you have a better idea, please let me know 🙂
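
One partial workaround I can think of is to read the kernel's byte counter for the session still in progress. This is only a sketch, and it assumes the dongle comes up as ppp0; adjust the interface name to whatever your system uses.

#!/bin/bash
# Sketch only: shows bytes received on the current PPP session without
# disconnecting. The interface name ppp0 is an assumption.
iface=${1:-ppp0}
rx_file="/sys/class/net/$iface/statistics/rx_bytes"

if [[ -r $rx_file ]]
then
    echo "$iface received so far: $(cat "$rx_file") bytes"
else
    echo "$iface is not up" >&2
    exit 1
fi

This only covers the active session, so the pppd log tally above is still needed for sessions that have already ended.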

How-To: Thwart brute force SSH attacks in CentOS/RHEL 5

UPDATE: This was a good exercise, but I eventually replaced the script with DenyHosts: http://denyhosts.sourceforge.net/. In CentOS, just install the EPEL repo first, then you can install it via yum.

This is one of the problems my team encountered when we opened up a firewall for SSH connections. Brute-force SSH attacks using botnets are just everywhere! And if you're not careful, cleaning up a compromised server is quite a headache.

Lots of tips can be found on the Internet, and this is the approach I came up with based on the numerous sites I've read.

  1. strong passwords
    DUH! This is obvious but most people ignore it. Don’t be lazy.
  2. disable root access through SSH
    Most of the time, direct root access is not needed. Disabling it is highly recommended.

    • open /etc/ssh/sshd_config
    • uncomment this directive and set it to no: PermitRootLogin no
    • restart SSH: service sshd restart
  3. limit users who can log-in through SSH
    You can specify exactly which users are allowed to use the SSH service. Botnets often try user names that were created by applications (apache, ftp, and so on), so whitelisting real users lessens the exposure.

    • open /etc/ssh/sshd_config
    • uncomment this directive and list the allowed users: AllowUsers user1 user2 user3
    • restart SSH: service sshd restart
  4. use a script to automatically block malicious IPs
    Utilizing the SSH daemon's log file (in CentOS/RHEL, it's /var/log/secure), a simple script can automatically block malicious IPs using tcp_wrappers' /etc/hosts.deny.
    If AllowUsers is enabled, the SSH daemon will log invalid attempts in this format:
    sshd[8207]: User apache from 125.5.112.165 not allowed because not listed in AllowUsers
    sshd[15398]: User ftp from 222.169.11.13 not allowed because not listed in AllowUsers

    SSH also logs invalid attempts in this format:
    sshd[6419]: Failed password for invalid user zabbix from 69.10.143.168 port 50962 ssh2

    Based on the information above, I came up with this script:

    #!/bin/bash

    # always exclude these IPs from blocking
    exclude_ips='192.168.60.1|192.168.60.10'

    file_log='/var/log/secure'
    file_host_deny='/etc/hosts.deny'

    tmp_list='/tmp/ips.for.restriction'

    # start with a fresh temporary list
    if [[ -e $tmp_list ]]
    then
        rm "$tmp_list"
    fi

    # split the command substitution below on new lines only
    IFS=$'\n'

    # REGEX filter: today's entries in either of the two log formats shown above
    filter="^$(date +%b\\s*%e).+(not listed in AllowUsers|Failed password.+invalid user)"

    for ip in $( pcregrep "$filter" "$file_log" \
      | perl -ne 'if (m/from\s+([^\s]+)\s+(not|port)/) { print $1,"\n"; }' )
    do
        if [[ $ip ]]
        then
            echo "ALL: $ip" >> "$tmp_list"
        fi
    done

    # reset the field separator
    unset IFS

    # merge with the existing hosts.deny, de-duplicate, and drop the excluded IPs
    cat "$file_host_deny" >> "$tmp_list"
    sort -u "$tmp_list" | pcregrep -v "$exclude_ips" > "$file_host_deny"

    I deployed the script in root’s crontab and set it to run every minute 🙂
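
    The crontab entry looks something like this; the script path below is just an assumed example, not from my original setup. Add it by running crontab -e as root:

    # run the IP-blocking script every minute (path is an assumed example)
    * * * * * /root/bin/block_ssh_ips.sh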

There you go. Of course, YMMV. Always test before deploying, and I'm pretty sure there are a lot of other tools available 🙂

bash: tips and how-tos (1 of n)

Bash, which stands for Bourne-again shell, is a free Unix shell that also serves as the default command-line shell in most Linux distributions. If you're a Linux/Unix administrator or a Linux enthusiast, I'm pretty sure you've met bash before.

These are some bash tricks that I find really useful.

1. Don’t use backticks, use $( command ) instead.

I admit, backticks were one of the first tricks I learned too, and they're quite adequate if you're running just one command:

date_today=`date`

but if you’re planning to nest multiple commands:

current_pid=`cat $HOME/pid/my_pid.\`date +%y%m%d\`.pid`

Yes, you have to escape the backticks. Now, imagine if you have to nest 3 commands… What do you think it will look like?

We can write the two examples in $( command ) form:

date_today=$( date )
current_pid=$( cat $HOME/pid/my_pid.$( date +%y%m%d ).pid )

You can easily nest multiple commands and no “escapes” required. Your script will look tidier too.
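
For instance, here's a made-up example (not from the original post) with three levels of nesting that stays perfectly readable:

# three levels of $( ) nesting: get the file name of the newest .log in the current directory
newest_log=$( basename $( ls -t $( pwd )/*.log | head -n 1 ) )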

2. Same command, different parameter… use {arg1,arg2,…,argN} trick:

If you often find yourself running the same command with different arguments, this trick is useful.

user@localhost~$ wget http://file.example.com/file01.log
user@localhost~$ wget http://file.example.com/file06.log
user@localhost~$ wget http://file.example.com/file18.log

You can run this using {arg1,arg2,…,argN}:

user@localhost~$ wget http://file.example.com/file{01,06,18}.log

You can even combine multiple values:

user@localhost~$ wget http://file.example.{com,net,org}/file{01,06,18}.log

Please note that brace expansion is done by bash itself before the command runs; it won't expand inside quotes or under a plain POSIX sh, so test your script first.
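
As a related note (not in the original post), bash can also expand numeric ranges inside braces, which helps when the arguments are sequential; the zero-padded form needs bash 4 or newer:

# expands to file01.log file02.log ... file18.log (zero padding needs bash 4+)
wget http://file.example.com/file{01..18}.log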

3. Same set of commands, different parameter… create a function.

Yes, bash supports functions. If you're running a set of commands that you want to reuse, this is the way to go.

To declare a function:

function my_function () {
    first_parameter=$1
    second_parameter=$2 # so on and so forth…

    # things to do…
}

Here's a simple script that I wrote to back up the important configuration files on my workstation.

#!/bin/bash

BACKUP_DIR=$HOME/config.backup

function backup() {
    local file # don't make this variable global

    file=$1 # assign the first parameter to file

    if [ "x$file" != "x" ] # if $file is empty, don't process it
    then
        # create the destination directory
        mkdir -p "$BACKUP_DIR"
        # copy the file
        cp "$file" "$BACKUP_DIR"
    fi
}

backup /etc/apt/sources.list
backup /etc/samba/smb.conf
backup /etc/X11/xorg.conf

That’s it for now. I hope you find these tips useful. Thanks for dropping by.

how to: using lftp to mirror an ftp site

I encountered this problem before and back then, I tried to solve it… and I did, using wget. I totally forgot about this until it came back to haunt me… hahahaha!

Back then, I used wget with the -c option, which means continue/resume a partially downloaded file, or skip it if it's already fully downloaded. Note that it uses the file size as its basis… which can lead to disaster if the file on the receiving end has the same size but different content…
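
For reference, that old approach looked roughly like this. This is a reconstruction, not my original command; the host and path are the same placeholders used in the lftp script below:

# resume/skip based on file size only, which is exactly the weakness described above
wget -c -r "ftp://ftp_user:ftp_password@your.ftp.site.here/dir_to_mirror/"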

And so my quest to find a better solution begins… (again!)

Anyway, I stumbled upon the lftp command, which has good mirroring support. And so, after a quick read, I came up with this script:


#!/bin/bash

# keep a per-month directory for the transfer logs
dir_log="$HOME/log/$(date +%y%m)"

mkdir -p "$dir_log"

deb_file="$dir_log/$(date +%y%m%d).mirror.log"

# everything between the EOC markers is fed to lftp as commands;
# 'mirror -e' also deletes local files that no longer exist on the remote
lftp << EOC
debug -o $deb_file
open your.ftp.site.here
user ftp_user ftp_password
mirror -e dir_to_mirror "$HOME/mirror_dir"
quit
EOC

And so, just another bash script… hopefully, this one won’t haunt me 🙂