We’re in the process of retiring our non-blade servers to free up space and reduce power usage. This move affects our 1U backup servers, so we have to migrate them to blades as well.
I was setting up a blade server as a replacement for one of our backup servers when I encountered a problem…
But before I get into that, here are the specs of the blade:
- Supermicro Blade SBI-7125W-S6 (circa 2008)
- Intel Xeon E5405
- 8 GB DDR2
- LSI RAID 1078
- 6 x 750 GB Seagate Momentus XT (ST750LX003)
The original plan was to set up these drives as a single RAID 5 array: (6 − 1) × 750 GB, or roughly 3.75 TB of usable space. The RAID controller can handle that size, so Rich, my colleague who did the initial setup of the blade and the hard drives, did not encounter a problem.
I was cruising through the remote installation process until I hit a snag at the disk partitioning stage. The installer wouldn’t use the entire space of the RAID array; it would only create partitions up to a combined size of 2 TB.
I found it unusual because I’ve created bigger arrays before using software RAID and this problem did not manifest. After a little googling, I found out that it has something to do with the limitations of the Master Boot Record (or MBR) partition scheme: MBR stores partition sizes as 32-bit sector counts, so with 512-byte sectors it simply can’t address anything beyond 2 TiB. The solution is to use the GUID Partition Table (or GPT) instead, as advised by this discussion.
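A quick sanity check of that limit in the shell (2^32 sectors × 512 bytes each):

$ echo $((2**32 * 512))
2199023255552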
I had two options at this point:
- go as originally planned, use GPT, and hope that the SBI-7125W-S6 can boot from it, or…
- create 2 arrays: one small (using MBR, so the server can boot) and one large (using GPT, so the disk space can be used in its entirety)
I tried option #1, and it failed. The blade wouldn’t boot at all, primarily because the server has a legacy BIOS, not EFI, and a BIOS that old has no idea how to boot from a GPT disk.
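As an aside: if you ever need to check which firmware a running Linux box booted with, the presence of the efi directory in sysfs is a quick tell:

$ [ -d /sys/firmware/efi ] && echo EFI || echo BIOS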
And so I’m left with option #2…
The server has six drives. To implement option #2, my plan was to create this setup:
- 2 drives at RAID 1 – will become /dev/sda, MBR, 750 GB, main system drive (/)
- 4 drives at RAID 5 – will become /dev/sdb, GPT, ~2.25 TB usable, will be mounted later
The LSI RAID 1078 supports this kind of setup, so I was in luck. I went with RAID 1 and RAID 5 because redundancy is the primary concern; size is secondary.
This is where IPMI shines: I could reconfigure the RAID array remotely using the KVM console of IPMIView as if I were physically there at the data center 🙂 With KVM access, I created 2 disk groups using the WebBIOS of the RAID controller.
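For the record, the same disk groups can also be created from a live Linux system using LSI’s MegaCli tool instead of clicking through WebBIOS. This is only a sketch: the install path below is the common default, and the [enclosure:slot] IDs are placeholders that have to be read from your own controller’s -PDList output first:

$ /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL
# the 252:N enclosure:slot pairs below are placeholders, not this blade's real IDs
$ /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r1 [252:0,252:1] -a0
$ /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r5 [252:2,252:3,252:4,252:5] -a0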
Now that the arrays were up, I went through the CentOS 6 installation process again. The installer detected the 2 arrays, so no problem there. I configured /dev/sda with 3 partitions and left /dev/sdb unconfigured (it can easily be configured later, once CentOS is up).
In case you’re wondering, I added a 3.8 GB LVM PV since this server will become a node of our Ganeti cluster, to store VM snapshots.
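If you’re setting up something similar, turning that PV partition into a volume group afterwards is a two-liner. A minimal sketch, assuming the partition landed on /dev/sda3 and that your Ganeti cluster expects a VG named xenvg (both are assumptions, not necessarily what’s on this box):

# /dev/sda3 and the VG name "xenvg" are examples; adjust to your own layout
$ pvcreate /dev/sda3
$ vgcreate xenvg /dev/sda3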
The CentOS installation booted successfully this time. Now that the system’s working, it’s time to configure /dev/sdb.
I installed the EPEL repo first, then parted:
$ wget -c http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-6.noarch.rpm
$ wget -c https://fedoraproject.org/static/0608B895.txt
$ rpm -Uvh epel-release-6-6.noarch.rpm
$ rpm --import 0608B895.txt
$ yum install parted
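If yum ever complains that it can’t find parted, it’s worth confirming that the EPEL repo actually registered:

$ yum repolist enabled | grep epel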
Then I configured /dev/sdb to use GPT, created one big partition, and formatted that partition (/dev/sdb1) as ext4:
$ parted /dev/sdb mklabel gpt
$ parted /dev/sdb
(parted) mkpart primary ext4 1 -1
(parted) quit
$ mkfs.ext4 -L data /dev/sdb1
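A read-only print is a cheap way to confirm the label and partition came out as intended before writing any data:

$ parted /dev/sdb print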
To mount /dev/sdb1, I needed to find out its UUID first:
$ ls -lh /dev/disk/by-uuid/ | grep sdb
lrwxrwxrwx. 1 root root 10 May 12 15:07 858844c3-6fd8-47e9-90a4-0d10c0914eb5 -> ../../sdb1
Once I had the right UUID, I added this line to /etc/fstab. /dev/sdb1 will be mounted at /home/backup/log-dump/:
UUID=858844c3-6fd8-47e9-90a4-0d10c0914eb5 /home/backup/log-dump ext4 noatime,defaults 1 1
The partition is now ready to be mounted and used:
$ useradd backup
$ mkdir -p /home/backup/log-dump
$ mount /home/backup/log-dump
$ chown -R backup:backup /home/backup/log-dump
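To be confident the fstab entry will also survive a reboot, unmount it and let mount -a (which re-reads fstab) bring it back, then check with df:

$ umount /home/backup/log-dump
$ mount -a
$ df -h /home/backup/log-dump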
There, another problem solved. Thanks to the internet and the Linux community 🙂
After a few days of copying files to this new array, this is what it looks like now:
/dev/sdb1 is almost used up already 🙂