Category Archives: bash

Munin plugin – MegaRAID HDD temperature using MegaCLI

Munin Exchange approved my plugin recently. I submitted it so many months ago that I had already forgotten about it. The plugin is written in Bash and graphs the temperatures of HDDs attached to an LSI MegaRAID controller.

It uses the serial numbers of the HDDs as labels:

Most of our servers, circa 2008 and later, use LSI cards, especially our Supermicro blades. So if you’re using LSI cards as well, check it out.

UPDATE: Munin Exchange is down. They’re moving to github so the links above are not working anymore.

UPDATE: I moved the code to GitHub. Just follow this link.


How to calculate the total bytes you’ve downloaded if you’re using a Huawei dongle in Ubuntu

Since I’m using Globe Tattoo right now and its SUPERSURF service has a daily cap of 800 MB, I need a way to check my usage.

And I wrote this one liner to do just that:
$ pcregrep "$(date +%b)\s+$(date +%d).+pppd.+received" /var/log/messages | perl -e 'use strict; my $t=0; while(<>) { if(m/received (\d+)\s+/) { $t=$t+$1; } } print "$t\n";'

If pcregrep is not installed on your system, you can install it by running: sudo apt-get install -y pcregrep

The downside of this approach is that I have to disconnect first to get an accurate reading. If you have a better idea, please let me know 🙂
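If pcregrep or perl aren’t handy, the same daily total can be computed with awk alone. This is a sketch that assumes the same syslog/pppd “received N bytes” wording as above; your log format and path may differ:

```shell
# sum every "received <bytes>" figure in today's pppd lines
awk -v today="$(date +'%b %e')" '
  index($0, today) == 1 && /pppd/ {
    for (i = 1; i < NF; i++)
      if ($i == "received") total += $(i + 1)
  }
  END { print total + 0 }
' /var/log/messages
```

The same caveat applies: disconnect first, so pppd has logged its final byte counts.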

How-To: Thwart brute force SSH attacks in CentOS/RHEL 5

UPDATE: This was a good exercise, but I decided to replace the script with denyhosts. In CentOS, just install the EPEL repo first, then you can install it via yum.

This is one of the problems that my team encountered when we opened up a firewall for SSH connections. Brute force SSH attacks using botnets are just everywhere! And if you’re not careful, it’s quite a headache if one of your servers was compromised.

Lots of tips can be found on the Internet, and this is the approach I came up with based on the numerous sites I’ve read.

  1. strong passwords
    DUH! This is obvious but most people ignore it. Don’t be lazy.
  2. disable root access through SSH
    Most of the time, direct root access is not needed. Disabling it is highly recommended.

    • open /etc/ssh/sshd_config
    • enable and set this SSH config to no: PermitRootLogin no
    • restart SSH: service sshd restart
  3. limit users who can log-in through SSH
    Users who can use the SSH service can be specified. Botnets often use user names that were added by an application, so listing the users can lessen the vulnerability.

    • open /etc/ssh/sshd_config
    • enable and list the users with this SSH config: AllowUsers user1 user2 user3
    • restart SSH: service sshd restart
  4. use a script to automatically block malicious IPs
    Utilizing the SSH daemon’s log file (in CentOS/RHEL, it’s /var/log/secure), a simple script can be written that automatically blocks malicious IPs using tcp_wrappers’ hosts.deny.
    If AllowUsers is enabled, the SSH daemon will log invalid attempts in this format:
    sshd[8207]: User apache from not allowed because not listed in AllowUsers
    sshd[15398]: User ftp from not allowed because not listed in AllowUsers

    SSH also logs invalid attempts in this format:
    sshd[6419]: Failed password for invalid user zabbix from port 50962 ssh2

    Based on the information above, I came up with this script:

    #!/bin/bash
    file_log=/var/log/secure
    file_host_deny=/etc/hosts.deny
    tmp_list=/tmp/deny_list.tmp

    # always exclude these IPs (adjust the pattern to your own safe addresses)
    exclude_ips="(127\.0\.0\.1|192\.168\.)"

    if [[ -e $tmp_list ]]; then
        rm $tmp_list
    fi

    # set the separator to new lines only
    IFS=$'\n'

    # REGEX filter
    filter="^$(date +%b\\s*%e).+(not listed in AllowUsers|\
    Failed password.+invalid user)"

    for ip in $( pcregrep "$filter" $file_log \
      | perl -ne 'if (m/from\s+([^\s]+)\s+(not|port)/) { print $1,"\n"; }' ); do
        if [[ $ip ]]; then
            echo "ALL: $ip" >> $tmp_list
        fi
    done

    # reset
    unset IFS
    cat $file_host_deny >> $tmp_list
    sort -u $tmp_list | pcregrep -v "$exclude_ips" > $file_host_deny

    I deployed the script in root’s crontab and set it to run every minute 🙂
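    For reference, the crontab entry would look something like this (the script path here is hypothetical):

    ```shell
    # root's crontab: run the blocker every minute
    * * * * * /root/bin/block_ssh_attacks.sh
    ```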

There you go; of course, YMMV. Always test your deployments, and I’m pretty sure there are a lot of other tools available 🙂

bash: tips and how-tos (1 of n)

Bash, which stands for Bourne-again shell, is a free Unix shell that is also the default command line for most Linux distributions. If you’re a Linux/Unix administrator or a Linux enthusiast, I’m pretty sure you’ve met the bash shell before.

These are some tricks in bash shell that I really find useful.

1. Don’t use backticks, use $( command ) instead.

I admit, this is one of the first tricks that I learned too. Backticks are quite adequate if you’re running just one command:

date_today=`date`

but if you’re planning to nest multiple commands:

current_pid=`cat $HOME/pid/my_pid.\`date +%y%m%d\`.pid`

Yes, you have to escape the backticks. Now, imagine if you have to nest 3 commands… What do you think it will look like?

We can write the two examples in $( command ) form,

date_today=$( date )
current_pid=$( cat $HOME/pid/my_pid.$( date +%y%m%d ).pid )

You can easily nest multiple commands and no “escapes” required. Your script will look tidier too.
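To illustrate, here’s the pid-file example pushed one level deeper: three nested commands and still no escaping (the path is the same hypothetical one as above):

```shell
# three levels of nesting: date inside echo inside basename
pid_file=$( basename $( echo "$HOME/pid/my_pid.$( date +%y%m%d ).pid" ) )
echo $pid_file   # my_pid.<yymmdd>.pid
```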

2. Same command, different parameter… use {arg1,arg2,…,argN} trick:

If you often find yourself running the same command with different arguments, you can find this trick useful.

user@localhost~$ wget http://file.example.com/file01.log
user@localhost~$ wget http://file.example.com/file06.log
user@localhost~$ wget http://file.example.com/file18.log

You can run this using {arg1,arg2,…,argN}:

user@localhost~$ wget http://file.example.com/file{01,06,18}.log

You can even combine multiple values:

user@localhost~$ wget http://file.example.{com,net,org}/file{01,06,18}.log

Please note that brace expansion is performed by the shell itself before the command runs, so the command only ever sees the expanded list of arguments. If you need literal braces, quote them, and always test your command first to be sure it expands the way you expect.
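A quick way to check what the shell will expand the braces into is to print the arguments first, using the same hypothetical URLs as above:

```shell
# preview what the shell would pass to wget, without downloading anything
printf '%s\n' http://file.example.{com,net,org}/file{01,06,18}.log
# prints nine URLs, one per line
```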

3. Same set of commands, different parameter… create a function.

Yes, bash supports functions. If you’re running a set of commands and you want to reuse them, this is the way to go.

To declare a function:

function my_function () {
    first_parameter=$1
    second_parameter=$2 # so on and so forth…

    # things to do…
}

Here’s a simple script that I wrote to back up the necessary configuration files on my workstation.



BACKUP_DIR="$HOME/backup" # where to store the backups

function backup() {
    local file # don't make this variable global

    file=$1 # assign first parameter to file

    if [ "x$file" != "x" ]; then # if $file is empty, don't process
        # create the destination directory
        mkdir -p $BACKUP_DIR
        # copy the file
        cp $file $BACKUP_DIR
    fi
}

backup /etc/apt/sources.list
backup /etc/samba/smb.conf
backup /etc/X11/xorg.conf

That’s it for now. I hope you find these tips useful. Thanks for dropping by.

how to: using lftp to mirror an ftp site

I encountered this problem before and back then, I tried to solve it… and I did, using wget. I totally forgot about this until it came back to haunt me… hahahaha!

Back then, I used wget with the -c option, which means continue/resume a partially downloaded file, or skip it if it’s already downloaded. Note that it uses the file size as its basis… This can lead to disaster if the file on the receiving end has the same size but different content…

And so my quest to find a better solution begins… (again!)

Anyway, I stumbled on this command, lftp, which has a good mirroring support. And so, after a quick read, I came up with this script:


#!/bin/bash

dir_log="$HOME/log/$(date +%y%m)"

mkdir -p $dir_log

deb_file="$dir_log/$(date +%y%m%d).mirror.log"

# ftp.example.com is a placeholder; put your own FTP host here
lftp << EOC
debug -o $deb_file
open ftp.example.com
user ftp_user ftp_password
mirror -e dir_to_mirror "$HOME/mirror_dir"
quit
EOC

And so, just another bash script… hopefully, this one won’t haunt me 🙂

how to: using bash to kill a parent process and all spawned child processes

I got this project in a Linux environment where I have to terminate a process before running one. Sounds easy at first glance, it’s very easy to kill a process in Linux, all I have to do is get the process’ PID (process id) and terminate it using the kill command…. HA! A no-brainer problem!

But after thinking about it for a few minutes, it hit me: the process I want to terminate could spawn child processes, and its child processes could spawn another set of child processes… and so on… and so forth… and I have to terminate all those child processes too! Aaarrggh!

And so, all because I’m lazy, I opened up Google and did a little script hunting. I found some tips, but they didn’t fit my needs. Most solutions only covered how to kill one level of child processes (or maybe I didn’t try hard enough). After a few hours wasted looking for an “already-done-by-others” solution, I gave up and decided to write my own….

So much for being lazy… So, on to the drawing board…
Well, I hope you can make sense of what’s the diagram above (don’t ask me how I came up with it…). If you don’t know what it is, just believe me when I say it’s a tree.

The behaviour of a process spawning child processes can be described by a tree. Based on this, all we have to do is determine the nodes at each depth. The idea is to store process ids in an array based on what depth in the process tree they belong. After generating the array, we can decide if we want to terminate processes from parent to child or vice versa. In my case, I have to terminate from parent to child.

Based on that gibberish idea above, I managed to write the following code:




#!/bin/bash

root_pid=$1   # the known root process id, passed as the first argument
ids[0]=$root_pid
index=1
quit=0

while [ $quit -eq 0 ]; do
    # get all child processes spawned by this/these ppid/s
    ids[$index]=$(ps -o pid --ppid ${ids[$index-1]} | \
      pcregrep '\d+' | tr \\n ' ')

    # if no child processes found
    if [ ! "${ids[$index]}" ]; then
        # quit
        quit=1
    fi

    index=$(( index + 1 ))
done

# kill process from parent to all child processes
for i in $(seq 0 ${#ids[@]}); do
    if [ "${ids[$i]}" ]; then
        kill ${ids[$i]}
    fi
done

The code above assumes that the root process id is known. You may have to do some checking first if you have a valid root process id as parameter.
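As an aside, if the parent and all its descendants happen to share a process group (common when the parent is a job or session leader), you can skip the tree walk entirely and signal the whole group at once. A minimal sketch, with a hypothetical PID:

```shell
pid=1234                                   # hypothetical root PID
pgid=$(ps -o pgid= -p "$pid" | tr -d ' ')  # look up its process group
kill -TERM -- "-$pgid"                     # a negative PID signals the whole group
```

This only works when the children haven’t started their own process groups, which is exactly the case the tree-walking script above handles.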

And that’s it! I’m just hoping my laziness bears fruit…

using wget to replicate a shared ftp folder

I was looking for a simple way to replicate a shared ftp folder to another machine. The replication must be done periodically via cron and it must have a way to resume in case it was interrupted.

The shared folder has the following characteristics:

  • user authentication is required when copying
  • files when uploaded are not changed/updated
  • filenames are unique and may have a directory structure

The above-mentioned requirements are simple, so I thought maybe I could use wget for this; it has a resume option and it also has an optional authentication method… so maybe it could work.

After some series of trials and errors, I got it working… here’s the code snippet:


user='ftp user name here'
pass='ftp password here'

# ftp://ftp.example.com/shared/ is a placeholder for the shared folder's URL
wget --timeout=30 -nH -qrc --level=20 --user="$user" --password="$pass" \
  --directory-prefix="$HOME" ftp://ftp.example.com/shared/

rsync may have a better approach, but this one worked for me and I’m quite lazy, so why bother…