Category Archives: bash

Munin plugin – MegaRAID HDD temperature using MegaCLI

Munin Exchange recently approved my plugin. I submitted it for approval months ago and had already forgotten about it. The plugin is written in Bash, and it graphs the temperatures of HDDs attached to an LSI MegaRAID controller.

It uses the serial numbers of the HDDs as labels.

Most of our servers, circa 2008 onwards, use LSI cards, especially our Supermicro blades. So if you’re using LSI cards as well, check it out.

UPDATE: Munin Exchange is down. They’re moving to GitHub, so the links above are not working anymore.

UPDATE: I moved the code to GitHub. Just follow this link.

How to calculate the total bytes you’ve downloaded if you’re using a Huawei dongle in Ubuntu

Since I’m using Globe Tattoo right now and its SUPERSURF service has a daily cap of 800MB, I need a way to check my usage.

And I wrote this one-liner to do just that:
$ pcregrep "$(date +%b)\s+$(date +%d).+pppd.+received" /var/log/messages | perl -e 'use strict; my $t=0; while(<>) { if(m/received (\d+)\s+/) { $t=$t+$1; } } print "$t\n";'
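If that one-liner is hard to read, here it is unrolled into a short script (a sketch; like the one-liner, it assumes pppd logs its “received” byte counts to /var/log/messages):

#!/bin/bash

# match today's pppd lines that report received bytes,
# e.g. "Jan 05 ... pppd ... received 123456"
pattern="$(date +%b)\s+$(date +%d).+pppd.+received"

# pull out each byte count and add them all up
pcregrep "$pattern" /var/log/messages \
  | perl -ne '$t += $1 if /received (\d+)/; END { print $t+0, "\n" }'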

If pcregrep is not installed on your system, you can install it by running: sudo apt-get install -y pcregrep

The downside of this approach is that I have to disconnect first to get an accurate reading. If you have a better idea, please let me know 🙂

How-To: Thwart brute force SSH attacks in CentOS/RHEL 5

UPDATE: This was a good exercise, but I decided to replace the script with DenyHosts: http://denyhosts.sourceforge.net/. In CentOS, just install the EPEL repo first, then you can install it via yum.

This is one of the problems my team encountered when we opened up a firewall for SSH connections. Brute force SSH attacks using botnets are everywhere! And if you’re not careful, it’s quite a headache if one of your servers gets compromised.

Lots of tips can be found on the Internet, and this is the approach I came up with based on the numerous sites I’ve read.

  1. strong passwords
    DUH! This is obvious but most people ignore it. Don’t be lazy.
  2. disable root access through SSH
    Most of the time, direct root access is not needed. Disabling it is highly recommended.

    • open /etc/ssh/sshd_config
    • uncomment and set this option to no: PermitRootLogin no
    • restart SSH: service sshd restart
  3. limit the users who can log in through SSH
    You can specify which users may use the SSH service. Botnets often try user names that were added by an application, so explicitly listing the allowed users lessens the exposure.

    • open /etc/ssh/sshd_config
    • uncomment and list the allowed users with this option: AllowUsers user1 user2 user3
    • restart SSH: service sshd restart
  4. use a script to automatically block malicious IPs
    Utilizing the SSH daemon’s log file (in CentOS/RHEL, it’s /var/log/secure), a simple script can automatically block malicious IPs using tcp_wrappers’ hosts.deny.
    If AllowUsers is enabled, the SSH daemon will log invalid attempts in this format:
    sshd[8207]: User apache from 125.5.112.165 not allowed because not listed in AllowUsers
    sshd[15398]: User ftp from 222.169.11.13 not allowed because not listed in AllowUsers

    SSH also logs invalid attempts in this format:

    sshd[6419]: Failed password for invalid user zabbix from 69.10.143.168 port 50962 ssh2

    Based on the information above, I came up with this script:

    #!/bin/bash
    
    # always exclude these IPs (used below as a pcregrep pattern)
    exclude_ips='192.168.60.1|192.168.60.10'
    
    file_log='/var/log/secure'
    file_host_deny='/etc/hosts.deny'
    
    tmp_list='/tmp/ips.for.restriction'
    
    # start from a clean temporary list
    if [[ -e $tmp_list ]]
    then
        rm $tmp_list
    fi
    
    # set the word separator to new lines only
    IFS=$'\n'
    
    # REGEX filter: today's "not listed in AllowUsers" and
    # "invalid user" log entries
    filter="^$(date +%b\\s*%e).+(not listed in AllowUsers|\
    Failed password.+invalid user)"
    
    # extract the offending IP from every matching log line
    for ip in $( pcregrep "$filter" $file_log \
      | perl -ne 'if (m/from\s+([^\s]+)\s+(not|port)/) { print $1,"\n"; }' )
    do
        if [[ $ip ]]
        then
            echo "ALL: $ip" >> $tmp_list
        fi
    done
    
    # reset
    unset IFS
    
    # merge with the current hosts.deny, de-duplicate,
    # and drop the excluded IPs
    cat $file_host_deny >> $tmp_list
    sort -u $tmp_list | pcregrep -v "$exclude_ips" > $file_host_deny

    I deployed the script in root’s crontab and set it to run every minute 🙂
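    For reference, a crontab entry along these lines runs it every minute (the script path here is hypothetical; adjust it to wherever you saved the script):

    # run the blocker script every minute
    * * * * * /root/bin/block_ssh_bruteforce.sh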

There you go. Of course, YMMV. Always test your deployments, and I’m pretty sure there are a lot of other tools available 🙂

bash: tips and how-tos (1 of n)

Bash, which stands for Bourne-again shell, is a free Unix shell that also serves as the default command-line shell for most Linux distributions. If you’re a Linux/Unix administrator or a Linux enthusiast, I’m pretty sure you’ve met the bash shell before.

These are some bash shell tricks that I find really useful.

1. Don’t use backticks, use $( command ) instead.

I admit, this was also one of the first tricks I learned. Backticks are quite adequate if you’re running just one command:

date_today=`date`

but if you’re planning to nest multiple commands:

current_pid=`cat $HOME/pid/my_pid.\`date +%y%m%d\`.pid`

Yes, you have to escape the backticks. Now, imagine if you had to nest three commands… what do you think that would look like?

We can write the two examples in $( command ) form:

date_today=$( date )
current_pid=$( cat $HOME/pid/my_pid.$( date +%y%m%d ).pid )

You can easily nest multiple commands, and no escapes are required. Your script will look tidier too.
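For instance, here’s a (contrived) three-level nesting that stays perfectly readable; the path is made up for illustration:

# who owns the directory that the "editor" alternative points into?
config_owner=$( stat -c %U $( dirname $( readlink -f /etc/alternatives/editor ) ) )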

2. Same command, different parameters… use the {arg1,arg2,…,argN} trick:

If you often find yourself running the same command with different arguments, you can find this trick useful.

user@localhost~$ wget http://file.example.com/file01.log
user@localhost~$ wget http://file.example.com/file06.log
user@localhost~$ wget http://file.example.com/file18.log

You can run all of those with one command using {arg1,arg2,…,argN}:

user@localhost~$ wget http://file.example.com/file{01,06,18}.log

You can even combine multiple values:

user@localhost~$ wget http://file.example.{com,net,org}/file{01,06,18}.log

Please note that brace expansion is performed by the shell itself before the command runs, so test your command first to make sure the expanded arguments are what you expect.
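A quick way to check what an expansion will produce is to prefix the command with echo:

user@localhost~$ echo wget http://file.example.{com,net}/file{01,06}.log
wget http://file.example.com/file01.log http://file.example.com/file06.log http://file.example.net/file01.log http://file.example.net/file06.log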

3. Same set of commands, different parameters… create a function.

Yes, bash supports functions. If you’re running a set of commands and you want to reuse them, this is the way to go.

To declare a function:

function my_function () {
    first_parameter=$1
    second_parameter=$2 # and so on, and so forth…

    # things to do…
}
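You then call it like any other command, passing the arguments after its name:

my_function value1 value2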

Here’s a simple script I wrote to back up the necessary configuration files on my workstation.

#!/bin/bash

BACKUP_DIR=$HOME/config.backup

function backup() {
    local file # don't make this variable global

    file=$1 # assign the first parameter to file

    if [ "x$file" != "x" ] # if $file is empty, don't process it
    then
        # create the destination directory
        mkdir -p $BACKUP_DIR

        # copy the file
        cp "$file" $BACKUP_DIR
    fi
}

backup /etc/apt/sources.list
backup /etc/samba/smb.conf
backup /etc/X11/xorg.conf

That’s it for now. I hope you find these tips useful. Thanks for dropping by.

how to: using lftp to mirror an ftp site

I encountered this problem before, and back then I tried to solve it… and I did, using wget. I totally forgot about it until it came back to haunt me… hahahaha!

Back then, I used wget with the -c option, which means continue/resume a partially downloaded file, or skip it if it’s already downloaded. Note that it uses the file size as its basis… this can lead to disaster if the file on the receiving end has the same size but different content…

And so my quest to find a better solution begins… (again!)

Anyway, I stumbled on the lftp command, which has good mirroring support. And so, after a quick read, I came up with this script:


#!/bin/bash

dir_log="$HOME/log/$(date +%y%m)"

mkdir -p "$dir_log"

deb_file="$dir_log/$(date +%y%m%d).mirror.log"

# feed the commands to lftp through a here-document;
# "mirror -e" also deletes local files that no longer
# exist on the remote side
lftp << EOC
debug -o $deb_file
open your.ftp.site.here
user ftp_user ftp_password
mirror -e dir_to_mirror "$HOME/mirror_dir"
quit
EOC

And so, just another bash script… hopefully, this one won’t haunt me 🙂

how to: using bash to kill a parent process and all spawned child processes

I got this project in a Linux environment where I have to terminate a process before running another one. Sounds easy at first glance; it’s very easy to kill a process in Linux. All I have to do is get the process’s PID (process ID) and terminate it using the kill command… HA! A no-brainer problem!

But after thinking about it for a few minutes, it hit me: the process I want to terminate may have spawned child processes, and its child processes may have spawned another set of child processes… and so on… and so forth… and I have to terminate all of these child processes too! Aaarrggh!

And so, all because I’m lazy, I opened up Google and did a little script hunting. I found some tips, but they didn’t fit my needs. Most solutions only killed one level of child processes (or maybe I didn’t try hard enough). After a few hours wasted looking for an already-done-by-others solution, I gave up and decided to write my own…

So much for being lazy… So, on to the drawing board…
Well, I hope you can make sense of the diagram above (don’t ask me how I came up with it…). If you don’t know what it is, just believe me when I say it’s a tree.

The behaviour of a process spawning child processes can be described by a tree. Based on this, all we have to do is determine the nodes at each depth. The idea is to store process IDs in an array, indexed by their depth in the process tree. After building the array, we can decide whether to terminate processes from parent to child or vice versa. In my case, I have to terminate from parent to child.

Based on that gibberish idea above, I managed to write the following code:

#!/bin/bash

# depth 0 of the array holds the root process id (the first argument)
ids[0]="$1"

index=0
quit=0

while [ $quit -eq 0 ]
do
    ((index++))

    # get all child processes spawned by this/these ppid/s;
    # "pid=" suppresses the PID header, quoting keeps the
    # space-separated pid list as a single argument to --ppid,
    # and the outer $(echo ...) squeezes the output onto one line
    ids[$index]=$(echo $(ps -o pid= --ppid "${ids[$index-1]}"))

    # if no child processes were found, quit
    if [ ! "${ids[$index]}" ]
    then
        ((quit++))
    fi
done

# kill processes from the parent down to all child processes
# (${ids[$i]} is deliberately unquoted so that each pid
# becomes a separate argument to kill)
for i in $(seq 0 $(( ${#ids[@]} - 1 )))
do
    if [ "${ids[$i]}" ]
    then
        kill ${ids[$i]}
    fi
done
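And if you ever need to go the other way, terminating the deepest children first, you can simply walk the same array backwards:

# kill from the deepest child processes back up to the parent
for (( i=${#ids[@]}-1; i>=0; i-- ))
do
    if [ "${ids[$i]}" ]
    then
        kill ${ids[$i]}
    fi
done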

The code above assumes that the root process ID is known. You may want to check first that you were actually given a valid root process ID as the parameter.
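A minimal sanity check could look like this (just a sketch; kill -0 sends no signal, it only tests that the PID exists and that we’re allowed to signal it):

# bail out unless $1 is numeric and names a live process
if ! [[ $1 =~ ^[0-9]+$ ]] || ! kill -0 "$1" 2>/dev/null
then
    echo "usage: $0 <root-pid>" >&2
    exit 1
fi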

And that’s it! I’m just hoping that my laziness bears fruit…

using wget to replicate a shared ftp folder

I was looking for a simple way to replicate a shared FTP folder to another machine. The replication must be done periodically via cron, and it must have a way to resume in case it gets interrupted.

The shared folder has the following characteristics:

  • user authentication is required when copying
  • files are not changed/updated once they have been uploaded
  • filenames are unique and may have a directory structure

The above-mentioned requirements are simple, so I thought maybe I could use wget for this; it has a resume option and also an optional authentication method… so maybe it could work.

After a series of trials and errors, I got it working… here’s the code snippet:

#!/bin/bash

user='ftp user name here'
pass='ftp password here'

wget --timeout=30 -nH -qrc --level=20 --user="$user" --password="$pass" \
  ftp://host.here/dir_to_copy/ --directory-prefix="$HOME"
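Since the replication has to run periodically, a crontab entry like this completes the picture (the schedule and script path are just examples):

# replicate the shared FTP folder once an hour
0 * * * * /home/user/bin/mirror_ftp.sh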

rsync may have a better approach, but this one worked for me and I’m quite lazy, so why bother…