Category Archives: hardware

HW: Upgrading the workhorse from 4GB to 16GB – MacBook Pro 2012

It’s really been a while since my last post. A lot has changed since then. I’ve been thinking for a while about how to start writing again, and I decided to start with something light.

I was assigned a stock MBP 2012 at my new job. Core i5, 4GB… blah blah… just stock specs. It was OK for a while, but I started hitting the 4GB ceiling once I started playing around with VMs or processing 19k+ photos in Aperture.

Things just got sloooowww… It’s a no-brainer that I really needed to add more RAM. Lots of it.

So I started checking online for where I could buy RAM with the best bang for the buck. I paid a visit to that trusty TipidPC site and decided to get this item: a Crucial 16GB (2x8GB) CT2K8G3S160BM memory kit. I managed to buy it yesterday for 4,400 PHP (109 USD). And after a series of meetings at work, I finally got it installed last night when I got home.

This is how it went…

First, I checked for a good tutorial; iFixit has a good step-by-step guide here.

I removed the back panel as indicated in the guide…

Then I disconnected the battery. I had to do this slowly, wiggling it from side to side (slowly) until it came loose.

When I checked the memory (Samsung M471B5773DH0-CK0), I was surprised to see a “Made in the Philippines” sticker 🙂

Removing the top RAM module is easy. Removing the second one will require some finesse and a little patience.

Now that the “old” modules were out, I replaced them with the “new” ones.

Time to close it up and cross my fingers when I press the power button.

So I guess it worked!
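
To double-check that OS X actually sees all 16GB (beyond a quick peek at “About This Mac”), system_profiler can list the individual modules; a quick check, output elided:

$ system_profiler SPMemoryDataType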

Just to be sure, I ran memtest for Mac OS X as well. I used this site as a reference. After running memtest, no red flags were raised. That’s good! 🙂
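
For the record, this is roughly how memtest gets invoked from Terminal once the binary from that site is installed; the “all 2” arguments (test all available free memory, 2 passes) follow memtest’s usual usage, not anything specific to my setup:

$ sudo memtest all 2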



How to setup large partitions (>2TB RAID arrays) in CentOS 6.2 with a Supermicro Blade SBI-7125W-S6

We’re in the process of retiring our non-blade servers to free up space and reduce power usage. This move affects our 1U backup servers, so we have to migrate them to blades as well.

I was setting up a blade server as a replacement for one of our backup servers when I encountered a problem…

But before I get into that, here are the specs of the blade:

  • Supermicro Blade SBI-7125W-S6 (circa 2008)
  • Intel Xeon E5405
  • 8 GB DDR2
  • LSI RAID 1078
  • 6 x 750 GB Seagate Momentus XT (ST750LX003)

The original plan was to set up these drives as a RAID 5 array, about 3.75TB in size. The RAID controller can handle that size, so Rich, my colleague who did the initial setup of the blade and the hard drives, did not encounter a problem.

I was cruising through the remote installation process until I hit a snag at the disk partitioning stage. The installer won’t use the entire space of the RAID array; it will only create partitions up to a total size of 2TB.

I found it unusual because I’ve created bigger arrays before using software RAID and this problem never manifested. After a little googling, I found out that it has something to do with the limitations of the Master Boot Record (MBR): MBR stores sector addresses as 32-bit values, so with 512-byte sectors it tops out at 2^32 × 512 bytes = 2TiB. The solution is to use the GUID Partition Table (GPT) instead, as advised by this discussion.

I had two options at this point:

  1. go as originally planned, use GPT, and hope that the SBI-7125W-S6 can boot with it, or…
  2. create 2 arrays: one small (using MBR so the server can boot) and one large (using GPT so the disk space can be used in its entirety)

I tried option #1; it failed. The blade won’t boot at all, primarily because the server has a legacy BIOS, not EFI (booting from a GPT disk generally requires EFI, or a hybrid setup the installer doesn’t offer).

And so I’m left with option #2…

The server has six drives. To implement option #2, my plan was to create this setup:

  • 2 drives in RAID 1 – will become /dev/sda, MBR, 750GB, main system drive (/)
  • 4 drives in RAID 5 – will become /dev/sdb, GPT, ~2.25TB, will be mounted later

The LSI 1078 RAID controller supports this kind of setup, so I’m in luck. I decided to use RAID 1 and RAID 5 because redundancy is the primary concern; size is secondary.

This is where IPMI shines: I can reconfigure the RAID array remotely using the KVM console of IPMIView as if I were physically at the data center 🙂 With KVM access, I created the 2 disk groups using the WebBIOS of the RAID controller.

Now that the arrays were up, I went through the CentOS 6 installation process again. The installer detected the 2 arrays, so no problem there. I configured /dev/sda with 3 partitions and left /dev/sdb unconfigured (it can easily be configured later once CentOS is up).

In case you’re wondering, I also added a 3.8GB LVM PV, since this server will become a node of our ganeti cluster and the PV will store VM snapshots.

The CentOS installation booted successfully this time. Now that the system’s working, it’s time to configure /dev/sdb.

I installed the EPEL repo first, then parted:

$ wget -c http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-6.noarch.rpm
$ wget -c https://fedoraproject.org/static/0608B895.txt
$ rpm -Uvh epel-release-6-6.noarch.rpm
$ rpm --import 0608B895.txt
$ yum install parted
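
A quick sanity check (my addition, not part of the original steps) that the repo is active:

$ yum repolist enabled | grep -i epel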

Then I configured /dev/sdb to use GPT, created a single partition spanning the whole array, and formatted it as ext4:

$ parted /dev/sdb mklabel gpt
$ parted /dev/sdb
(parted) mkpart primary ext4 1 -1
(parted) quit
$ mkfs.ext4 -L data /dev/sdb1
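
To sanity-check the new label and partition, parted can print the partition table (output elided here):

$ parted /dev/sdb print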

To mount the new partition, I need to find out its UUID first:

$ ls -lh /dev/disk/by-uuid/ | grep sdb
lrwxrwxrwx. 1 root root 10 May 12 15:07 858844c3-6fd8-47e9-90a4-0d10c0914eb5 -> ../../sdb1
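
Alternatively, blkid reports the UUID (and the filesystem label) directly:

$ blkid /dev/sdb1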

Once I had the right UUID, I added this line to /etc/fstab so that /dev/sdb1 will be mounted at /home/backup/log-dump/:

UUID=858844c3-6fd8-47e9-90a4-0d10c0914eb5 /home/backup/log-dump ext4 defaults,noatime 1 2

The partition is now ready to be mounted and used:

$ useradd backup
$ mkdir -p /home/backup/log-dump
$ mount /home/backup/log-dump
$ chown -R backup:backup /home/backup/log-dump
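
And a quick df confirms that the array is mounted with its full usable space:

$ df -h /home/backup/log-dump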

There, another problem solved. Thanks to the internet and the Linux community 🙂

After a few days of copying files to this new array, this is what it looks like now:

/dev/sdb1 is almost used up already 🙂


Release the beast! Xeon X5650 + RAID 0 Intel 710 SSD (2)

Well… not really a beast by today’s standards… but it’s a beast if we’re talking about processor blades for a 10-blade Supermicro enclosure. Supermicro has released its Sandy Bridge Xeon blades, but those are only available for the 14-blade enclosure for now.

I wrote a post before about our test bed machine to figure out if SSD is a good route for us. We concluded that it is, so we opted to go with the enterprise offering from Intel, the 710 series.

The Intel 710 SSD is really not inexpensive. I decided to go with a RAID 0 of two Intel 710 200GB drives for 2 reasons: (1) to increase space, and (2) to distribute writes for (much) longer write endurance.

Technical specs:

  • Supermicro SBI-7126T-S6
  • 2 x Intel Xeon X5650
  • 2 x Intel 710 200G SSD (RAID0)
  • 8 x 8 GB DDR3

The X5650 is a 6-core CPU, but because of Hyper-Threading it registers as 12 logical cores.
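
A quick way to confirm this on Linux (with the two X5650s in this blade, it prints 24):

$ grep -c ^processor /proc/cpuinfo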

The new blade was deployed as a ganeti node in one of our production clusters. So far, the 2 Xeon X5650s are doing well even with 8 CPU-hungry virtual machines.

I configured the server with growth in mind, thus the 64GB of memory. We’re not using that much RAM right now, but that will change quickly once the ganeti node starts hosting DB servers.

I was planning to perform some benchmarks once we deployed it, but I haven’t had the chance… Hopefully I’ll have time when the next one arrives. 🙂

Ubuntu 10.04 amd64 on Lenovo Thinkpad E125: making the LAN, Wi-Fi, video and sound work

UPDATE: I gave the official release of Ubuntu 12.04 LTS another try and everything worked out of the box! Nice!!!

So I guess I’ll have to give Unity another chance… (so far, I find the HUD useful)

After almost 2 years since my laptop died on me, I decided to buy a replacement. I proposed the idea to my wife, June, and she approved (maybe because I’ve been using her laptop for the past 2 years 🙂 ).

I was targeting a netbook around 12″, since I’ve learned over the last 2 years that I don’t need that much processing power. My usage pattern can settle for an Atom or Brazos CPU since I use laptops mostly as a terminal; the grunt work is done on servers. Besides, I don’t want to haul a 2kg+ brick.

There’s a plethora of netbooks from different OEMs nowadays, so there is a lot to choose from. I narrowed my list down to these two: the Lenovo Thinkpad Edge E125 and the HP DM1. It was a tough choice to make. But after scouring a few stores (in Cyberzone MegaMall) and weighing my options, I settled on the E125.

I chose E125 because of these reasons:

  • Keyboard is better, IMHO
  • 2 DIMM slots (I’m planning to upgrade it to 8GB in the future)
  • no OS pre-installed

I tried installing Ubuntu 11.10 and the Ubuntu 12.04 beta, but these 2 were not stable enough for my needs when I tested them. For one, my SBW Huawei dongle was experiencing intermittent connections, and I’m not convinced to switch to Unity yet.

This is the rundown of how I got the device drivers working on the Lenovo Thinkpad Edge E125.

core packages:

sudo apt-get install build-essential linux-image-generic linux-headers-generic cdbs fakeroot dh-make debhelper debconf libstdc++6 dkms libqtgui4 wget execstack libelfg0 ia32-libs

lan:

Download the driver from the Qualcomm website (direct link). Extract the archive first (you’ll need unrar for this), then build:

mkdir -p ~/drivers/lan-atheros && cd ~/drivers/lan-atheros
mv ~/Downloads/alx-linux-v2.0.0.6.rar ./
unrar x alx-linux-v2.0.0.6.rar && cd alx-linux-v2.0.0.6   # extracted dir name may differ
sudo make && sudo make install
sudo modprobe alx

wifi:

sudo add-apt-repository ppa:lexical/hwe-wireless
sudo apt-get update
sudo apt-get install rtl8192ce-dkms
sudo modprobe r8192ce_pci

sound:

I encountered a problem with the sound configuration: audio kept playing through the laptop speakers even with headphones plugged in. I was able to fix it by upgrading ALSA to version 1.0.25; just follow this guide (replacing 1.0.23 with 1.0.25).
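
To verify which ALSA version ends up running after the upgrade:

cat /proc/asound/version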

video:

I was able to install the latest ATI Catalyst drivers by following this guide. The installation was only successful when I installed the driver manually.
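
Once the Catalyst driver is in place, fglrxinfo (installed along with the driver) should report AMD’s fglrx driver instead of the open-source one:

fglrxinfo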

card reader:

Download the driver from the Realtek website. Make sure that you switch to the superuser (not just sudo) when running make; it will fail if you don’t.

mkdir ~/drivers/cardreader-realtek/ && cd ~/drivers/cardreader-realtek/
mv ~/Downloads/rts_pstor.tar.bz2 ./
tar -xjvf rts_pstor.tar.bz2
cd rts_pstor
sudo su
make
make install
depmod
exit
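
To load the freshly built module without rebooting (the module name here is my assumption, taken from the tarball name):

sudo modprobe rts_pstor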

additional packages:

sudo apt-get install vim-gtk ubuntu-restricted-extras pidgin-otr pidgin-libnotify openssh-server subversion rapidsvn


a poke on SSD write endurance, Intel SSD 320 and iostat

The decision to move to virtualization using KVM as our standard way of deploying servers was really a success, given the cost savings over the past 2 years. The only downside is the performance hit on disk IO-intensive workloads.

Some disk IO issues were already addressed on the application side (e.g. using caches, tmpfs, smaller logs, etc.), but it’s apparent that if we want our deployment to be denser, we have to find an alternative to our current storage back-end. Probably not a total replacement, but more of a hybrid approach.

Solid State Drives are probably the best option. They are cheaper compared to a Storage Area Network, and I like the idea even more because they’re a simple drop-in replacement for our current SAS/SATA drives, as opposed to maintaining additional hardware. Besides, my team does not have the luxury of an “unlimited” budget.

After a lengthy discussion with my MD, he approved running some tests first to see if the SSD route is feasible for us. I chose to use four 120GB Intel SSD 320s. The plan was to set up these 4 drives in a RAID 10 array and see how many virtual machines it could handle.

I chose Intel because its SSDs are among the more reliable brands on the market today. If performance were the primary requirement, I’d choose an SSD with a SandForce controller (maybe OCZ), but it’s not; it’s reliability.

The plan was to set up a RAID 10 array of four 320s. But since our supplier could only provide 3 drives at the time we ordered, I decided to go with a RAID 0 array of 2 drives instead; I couldn’t wait for the 4th drive. (It turned out to be a good decision, because the 4th drive arrived after 2 months!)

The write endurance of the Intel 320 (the 160GB version) is rated at 15TB. My premise was that if we write 10GB of data per day, it will take about 4 years (15TB ÷ 10GB/day ≈ 1,500 days) to reach that limit. And in theory, configured in a striped RAID array, it should last even longer, since the writes are spread across the drives.

It’s been over a month since I set up the ganeti node with the SSD storage, so I decided to check its total writes.

The ganeti node has been running for 45 days. /dev/sda3 is the LVM volume configured for ganeti to use. The total blocks written is 5,811,473,792, at a rate of 1,468.85 blocks per second. Since 1 block = 512 bytes, this translates to 2,975,474,581,504 bytes (2.9TB) at a rate of 752,051.2 bytes per second (752kB/s). That write rate translates to 64,977,223,680 bytes (about 65GB) of writes per day! Uh oh…
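
For reference, those totals come straight from iostat’s cumulative device report (Blk_wrtn is total blocks written, Blk_wrtn/s the average rate); roughly like this, with the read columns elided:

$ iostat -d /dev/sda3
Device:   tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda3      ...   ...          1468.85      ...        5811473792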

65GB/day is nowhere near my premise of 10GB/day. At this rate, my RAID array will die in less than 2 years! (Two striped drives give 2 × 15TB = 30TB of endurance; 30TB ÷ 65GB/day ≈ 460 days.)

Uh oh indeed…

It turned out that 2 of the KVM instances I assigned to this ganeti node were DB servers. We migrated them here a few weeks back to fix a high IO problem, a move that cost the Intel 320s a big percentage of their lifespan.

65GB/day seems huge, but apparently it’s typical on our production servers. Here’s an iostat from one of our web servers:

I’m definitely NOT going to move this server to a SSD array anytime soon.

On the whole, the test ganeti node has been very helpful. I learned a few things that will be a big factor in what hardware we’re going to purchase.

Some points that my team must keep in mind if we pursue the SSD route:

  • IO workload profiling is a must (and must be monitored regularly as well)
  • leave write-intensive VMs on HDD arrays, or
  • consider the Intel SSD 710 ??? (high write endurance = hefty price tag)

I didn’t leave our SSD array to die that fast, of course. I migrated the DB servers to a different ganeti node and replaced them with some application servers.

That decreased the writes to 672.31 blocks/sec (344kB/s), less than half the previous rate.

Eventually the RAID array will die, of course. Exactly how long it will last, I don’t know. More than 2 years? 🙂

my old keyboard …

I was browsing the pictures on my phone when I stumbled upon these old photos of my old office keyboard.

There’s nothing special about it. It’s not even a high-tech keyboard (with fancy buttons and all). But… I used this keyboard for 5 years! I had it replaced last year because one of the keys died.

5 years of wear and tear can make a keyboard look like this:

The markings on the often-used keys were already gone, and the left Ctrl key has a hole in it! The Windows shortcut key remained unscathed (it’s a vestigial key in this case).

Oh well, I’ve written a lot of Perl and Bash code with this keyboard. One lesson I learned from this is that I should clean my keyboard more often 🙂

2 years and 7 months… my laptop’s finally saying goodbye…

I turned on my laptop yesterday and noticed something different: the display wasn’t smooth and the icons were grainy… I thought it was nothing, just a software glitch or something. Realization (that the problem was quite serious) came later when I tried to watch a video. The display was full of lines and the colors were off! Uh-oh…

Did a reboot and crossed my fingers… nope, same thing…

Did a reboot and went into the BIOS, and this is what I got…

Definitely not a software problem… 😦

Hopefully, it’s the LCD (an LCD can still be replaced)…

I hooked it up to an external display, a Samsung TV (it has an S-Video port)… and this is what I got…

Same thing… *sigh*…

Definitely not a software problem… definitely not an LCD problem… The only thing left is the video card.

Well, I can’t replace that; the video card is embedded in the motherboard… this can’t be fixed by a mere clean-up.

It’s definitely a goodbye … 😦