
How to set up large partitions (>2TB RAID arrays) in CentOS 6.2 with a Supermicro Blade SBI-7125W-S6

We’re in the process of retiring our non-blade servers to free up space and reduce power usage. This move affects our 1U backup servers, so we have to migrate them to blades as well.

I was setting up a blade server as a replacement for one of our backup servers when I encountered a problem…

But before I get into that, here are the specs of the blade:

  • Supermicro Blade SBI-7125W-S6 (circa 2008)
  • Intel Xeon E5405
  • 8 GB DDR2
  • LSI RAID 1078
  • 6 x 750 GB Seagate Momentus XT (ST750LX003)

The original plan was to set up these drives as a RAID 5 array, about 3.75 TB in size (6 × 750 GB in RAID 5 leaves five drives’ worth of usable capacity). The RAID controller can handle the size, so Rich, my colleague who did the initial setup of the blade and the hard drives, did not encounter a problem.

I was cruising through the remote installation process until I hit a snag at the disk partitioning stage. The installer wouldn’t use the entire space of the RAID array; it would only create partitions up to a total size of 2 TB.

I found it unusual because I’ve created bigger arrays before using software RAID and this problem never manifested. After a little googling, I found out that it comes down to a limitation of the Master Boot Record (MBR) partition scheme: MBR stores sector addresses as 32-bit values, so with 512-byte sectors it can only address 2^32 × 512 bytes = 2 TiB. The solution is to use a GUID Partition Table (GPT) instead, as advised by this discussion.
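If you’re not sure which scheme a disk currently uses, parted can report it. A quick check (replace /dev/sda with the disk you want to inspect):

$ parted -s /dev/sda print | grep "Partition Table"

It prints “msdos” for MBR-labeled disks and “gpt” for GPT-labeled ones.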

I had two options at this point:

  1. go as originally planned, use GPT, and hope that the SBI-7125W-S6 can boot with it, or…
  2. create 2 arrays: one small (using MBR so the server can boot) and one large (using GPT so the disk space can be used in its entirety)

I tried option #1 and it failed: the blade wouldn’t boot at all, primarily because the server has a legacy BIOS, not EFI firmware.
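As a side note, once a Linux system is up you can tell which firmware path it booted through, since the kernel exposes /sys/firmware/efi only when it was booted via EFI:

$ ls /sys/firmware/efi   # exists on EFI boots; missing on legacy BIOS boots

On a BIOS-only board like this one, the directory won’t exist.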

And so I’m left with option #2…

The server has six drives. To implement option #2, my plan was to create this setup:

  • 2 drives in RAID 1 – will become /dev/sda, MBR, 750 GB, main system drive (/)
  • 4 drives in RAID 5 – will become /dev/sdb, GPT, ~2.25 TB usable (3 × 750 GB), will be mounted later

The LSI RAID 1078 supports this kind of setup, so I was in luck. I chose RAID 1 and RAID 5 because redundancy is the primary concern; size is secondary.

This is where IPMI shines: I could reconfigure the RAID array remotely using the KVM console of IPMIView as if I were physically at the data center 🙂 With KVM access, I created 2 disk groups using the WebBIOS of the RAID controller.
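If you’d rather script things than click through IPMIView, ipmitool talks to the same BMC. A minimal sketch, with a placeholder hostname and credentials (substitute your own); note that the WebBIOS itself still needs KVM video redirection, so this only covers power control and a serial console:

# query power state over IPMI v2.0 (lanplus), then open a serial-over-LAN console
$ ipmitool -I lanplus -H blade-bmc.example.com -U ADMIN -P changeme chassis power status
$ ipmitool -I lanplus -H blade-bmc.example.com -U ADMIN -P changeme sol activate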

Now that the arrays were up, I went through the CentOS 6 installation process again. The installer detected the 2 arrays, so no problem there. I configured /dev/sda with 3 partitions and left /dev/sdb unconfigured (it can easily be configured later once CentOS is up).

In case you’re wondering, I also added a 3.8 GB LVM PV, since this server will become a node of our ganeti cluster; it will be used to store VM snapshots.
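For reference, recreating that PV by hand is a two-liner with the LVM tools. A sketch only – /dev/sda3 as the PV partition and “vg_snapshots” as the volume group name are my assumptions here, since the installer handled this part:

# label the partition as an LVM physical volume, then build a volume group on it
$ pvcreate /dev/sda3
$ vgcreate vg_snapshots /dev/sda3
$ pvs   # verify the new PV and its volume group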

The CentOS installation booted successfully this time. With the system working, it was time to configure /dev/sdb.

I installed the EPEL repo first, then parted:

$ wget -c http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-6.noarch.rpm 
$ wget -c https://fedoraproject.org/static/0608B895.txt 
$ rpm -Uvh epel-release-6-6.noarch.rpm 
$ rpm --import 0608B895.txt 
$ yum install parted
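A quick sanity check that the repo is active and the package landed:

$ yum repolist enabled | grep -i epel
$ rpm -q parted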

Then I labeled /dev/sdb with GPT, created a single partition spanning the whole array, and formatted that partition as ext4:

$ parted /dev/sdb mklabel gpt 
$ parted /dev/sdb 
(parted) mkpart primary ext4 1 -1 
(parted) quit 
$ mkfs.ext4 -L data /dev/sdb1
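Before going further, it’s worth confirming the label and partition came out as intended:

$ parted /dev/sdb print   # should report “Partition Table: gpt” and one large partition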

To mount the new partition, I needed to find out its UUID first:

$ ls -lh /dev/disk/by-uuid/ | grep sdb 
lrwxrwxrwx. 1 root root 10 May 12 15:07 858844c3-6fd8-47e9-90a4-0d10c0914eb5 -> ../../sdb1
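Alternatively, blkid (part of util-linux, present on a stock CentOS 6 install) reports the UUID directly:

$ blkid /dev/sdb1   # prints the filesystem UUID and the “data” label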

Once I had the right UUID, I added this line to /etc/fstab so that the partition will be mounted at /home/backup/log-dump/ (the final “2” tells fsck to check this filesystem after the root filesystem):

UUID=858844c3-6fd8-47e9-90a4-0d10c0914eb5 /home/backup/log-dump ext4 defaults,noatime 1 2

The partition is now ready to be mounted and used:

$ useradd backup
$ mkdir -p /home/backup/log-dump
$ mount /home/backup/log-dump
$ chown -R backup:backup /home/backup/log-dump
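And a final check that the mount and ownership took effect:

$ df -h /home/backup/log-dump   # should show the ~2.25 TB filesystem
$ su - backup -c "touch /home/backup/log-dump/.write-test"   # confirms the backup user can write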

There, another problem solved. Thanks to the internet and the Linux community 🙂

After a few days of copying files to this new array, this is what it looks like now:

/dev/sdb is almost used up already 🙂


#23 it’s the pink one



(Photo: “#23 it’s the pink one”, originally uploaded by pro_cabales.)

Via Flickr:
A piece of my sister-in-law’s handiwork 🙂

They were making salted eggs. Once the eggs were ready, they did the obligatory paint-them-red step. Maybe there was a surplus of the coloring agent and they decided to paint the cat as well.

#22 skinned and ready for cooking

Via Flickr:
In case you’re wondering what these are… so was I the first time I took a look at them. Any guesses?… Hint: crunchy when cooked 🙂

They’re lean, composed of skin, muscles, and bones (and heads). Not much fat if you look closely, because their primary diet is grains.

OK, before your imagination wanders off, I’ll tell you what these are. These are skinned Philippine Orioles, locally known as “Maya”. These birds are endemic here in the Philippines, especially in the Visayas region. Farmers consider Maya birds pests because they can damage rice crops.

And yes, I ate 3. It’s not bad really, if you don’t mind the crunching sound of bones 🙂

Release the beast! Xeon X5650 + RAID 0 Intel 710 SSD (2)

Well… not really a beast by today’s standards… but it’s a beast if we’re talking about processor blades for a 10-blade Supermicro enclosure. Supermicro has released its Sandy Bridge Xeon blades, but for now those are only available for the 14-blade enclosure.

I wrote a post before about our test bed machine, set up to figure out whether SSDs are a good route for us. We concluded that they are, so we opted to go with Intel’s enterprise offering, the 710 series.

The Intel 710 SSD is not cheap. I decided to go with a RAID 0 of two 200 GB Intel 710s for 2 reasons: (1) to increase space and (2) to spread writes across both drives for (much) longer write endurance.

Technical specs:

  • Supermicro SBI-7126T-S6
  • 2 x Intel Xeon X5650
  • 2 x Intel 710 200G SSD (RAID0)
  • 8 x 8 GB DDR3

The X5650 is a 6-core CPU, but because of Hyper-Threading each one registers as 12 logical cores – 24 in total on this dual-socket blade.
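You can confirm this from the OS side with the standard /proc interface:

$ grep -c ^processor /proc/cpuinfo   # counts logical CPUs; prints 24 on this dual X5650 blade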

The new blade was deployed as a ganeti node in one of our production clusters. So far, the 2 Xeon X5650s are doing well even with 8 CPU-hungry virtual machines.
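To keep an eye on how the node copes, ganeti’s own CLI (run on the cluster master) summarizes per-node resources and instance placement:

$ gnt-node list       # per-node totals: memory, disk, and instance counts
$ gnt-instance list   # which instances run where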

I configured the server with growth in mind, thus the 64 GB of memory (8 × 8 GB). We’re not using that much RAM right now, but that will change quickly once the ganeti node starts hosting DB servers.

I was planning to perform some benchmarks once we deployed it, but I haven’t had the chance… Hopefully, I’ll have the time when the next one arrives. 🙂