It’s really been a while since my last post. A lot has changed since then. I’ve been thinking for a while about how to start writing again, so I decided to start with something light.
I was assigned a stock MBP 2012 at my new job. Core i5, 4GB… blah blah… just stock specs. It was OK for a while, but I started hitting the 4GB ceiling when I played around with VMs or processed 19k+ photos in Aperture.
Things just got sloooowww… It’s a no-brainer that I really needed to add more RAM. Lots of it.
So I started checking online for where I could buy RAM with the best bang for the buck. I paid a visit to that trusty TipidPC site and decided to get this item: a Crucial 16GB (8GB×2) CT2K8G3S160BM memory kit. I managed to buy it yesterday for 4,400 PHP (109 USD). And after a series of meetings at work, I finally got it installed last night when I got home.
This is how it went…
First, I checked for a good tutorial; iFixit has a good step-by-step guide here.
Just to be sure, I ran memtest for Mac OS X as well, using this site as a reference. After running memtest, no red flags were raised. That’s good!
We had a problem with one of our servers. Its rsyncd is not responding anymore. It’s listening on the port, but it’s not accepting requests.
Here’s what the log says:
[root@SERVER ~]# tail /var/log/messages
Jun 21 13:19:46 SERVER xinetd: Swapping defaults
Jun 21 13:19:46 SERVER xinetd: readjusting service amanda
Jun 21 13:19:46 SERVER xinetd: bind failed (Address already in use (errno = 98)). service = rsync
Jun 21 13:19:46 SERVER xinetd: Service rsync failed to start and is deactivated.
Jun 21 13:19:46 SERVER xinetd: Reconfigured: new=0 old=1 dropped=0 (services)
Jun 21 13:21:34 SERVER xinetd: Exiting...
Jun 21 13:22:09 SERVER xinetd: bind failed (Address already in use (errno = 98)). service = rsync
Jun 21 13:22:09 SERVER xinetd: Service rsync failed to start and is deactivated.
Jun 21 13:22:09 SERVER xinetd: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
Jun 21 13:22:09 SERVER xinetd: Started working: 1 available service
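The tell-tale entry in that log is the bind failure. As a small sketch, the failing service name can be pulled out of such a line with sed (the sample line below is copied from the log above):

```shell
# Extract the failing service name from an xinetd "bind failed" log line.
line='Jun 21 13:19:46 SERVER xinetd: bind failed (Address already in use (errno = 98)). service = rsync'
svc=$(printf '%s\n' "$line" | sed -n 's/.*service = //p')
echo "$svc"   # rsync
```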
We tried stopping xinetd but there is still a process bound to the 873 port:
[root@SERVER ~]# service xinetd stop
Stopping xinetd: [ OK ]
[root@SERVER ~]# telnet localhost 873
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
^]
telnet> quit
Connection closed.
If only we could determine what process is still bound to the 873 port…
Well, there’s an app for that:
lsof -i tcp:<port>
[root@SERVER ~]# lsof -i tcp:873
COMMAND   PID  USER    FD TYPE DEVICE SIZE NODE NAME
rpc.statd 1963 rpcuser 7u IPv4 4798        TCP  *:rsync (LISTEN)
[root@SERVER ~]# kill 1963
[root@SERVER ~]# kill 1963
-bash: kill: (1963) - No such process
[root@SERVER ~]# telnet localhost 873
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host: Connection refused
Now that the process is dead, we restarted xinetd…
[root@SERVER ~]# service xinetd start Starting xinetd: [ OK ] [root@SERVER ~]# tail /var/log/messages Jun 21 13:21:34 SERVER xinetd: Exiting... Jun 21 13:22:09 SERVER xinetd: bind failed (Address already in use (errno = 98)). service = rsync Jun 21 13:22:09 SERVER xinetd: Service rsync failed to start and is deactivated. Jun 21 13:22:09 SERVER xinetd: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in. Jun 21 13:22:09 SERVER xinetd: Started working: 1 available service Jun 21 13:23:06 SERVER xinetd: Exiting... Jun 21 13:25:18 SERVER rpc.statd: Caught signal 15, un-registering and exiting. Jun 21 13:25:18 SERVER portmap: connect from 127.0.0.1 to unset(status): request from unprivileged port Jun 21 13:25:31 SERVER xinetd: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in. Jun 21 13:25:31 SERVER xinetd: Started working: 2 available services
… and that solves the problem.
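The whole hunt can be condensed into a couple of commands. A sketch, assuming lsof is installed and using a throwaway Python HTTP server to stand in for the stuck listener (port 8731 is an arbitrary free port):

```shell
# Start a throwaway listener to play the role of the stuck rpc.statd.
python3 -m http.server 8731 >/dev/null 2>&1 &
listener=$!
sleep 1
# -t makes lsof print only the PID(s) bound to the port, ready for kill.
pid=$(lsof -t -i tcp:8731)
kill "$pid"
wait "$listener" 2>/dev/null
echo "killed $pid"
```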
I wrote a post a few weeks back saying that my MySQL NDB cluster was already running. This is a follow-up post on how I did it.
Before I dug in, I first read some articles on best practices for MySQL Cluster installations. One of the sources I read is this quite helpful presentation.
The plan was to set up the cluster with 6 components:
- 2 Management nodes
- 2 MySQL nodes
- 2 NDB nodes
Based on the best practices, I only needed 4 servers to accomplish this setup. With these tips in mind, this is the plan I came up with:
- 2 VMs (2 CPUs, 4GB RAM, 20GB drives) – will serve as MGM nodes and MySQL servers
- 2 Supermicro 1Us (4-core, 8GB RAM, RAID 5 of 4 140GB 10k rpm SAS) – will serve as NDB nodes
- all servers will be installed with a minimal installation of CentOS 6.2
- mm0 – 192.168.1.162 (MGM + MySQL)
- mm1 – 192.168.1.211 (MGM + MySQL)
- lbindb1 – 192.168.1.164 (NDB node)
- lbindb2 – 192.168.1.163 (NDB node)
That’s the plan, now to execute…
To install the packages, I ran these commands on the respective servers:
mm0> rpm -Uhv --force MySQL-Cluster-server-gpl-7.2.5-1.el6.x86_64.rpm
mm0> mkdir /var/lib/mysql-cluster
mm1> rpm -Uhv --force MySQL-Cluster-server-gpl-7.2.5-1.el6.x86_64.rpm
mm1> mkdir /var/lib/mysql-cluster
lbindb1> rpm -Uhv --force MySQL-Cluster-server-gpl-7.2.5-1.el6.x86_64.rpm
lbindb1> mkdir -p /var/lib/mysql-cluster/data
lbindb2> rpm -Uhv --force MySQL-Cluster-server-gpl-7.2.5-1.el6.x86_64.rpm
lbindb2> mkdir -p /var/lib/mysql-cluster/data
The mkdir commands will make sense in a bit…
My cluster uses these two configuration files:
- /etc/my.cnf – used in the NDB nodes and MySQL servers (both mm and lbindb)
- /var/lib/mysql-cluster/config.ini – used in the MGM nodes only (mm)
[mysqld]
# Options for mysqld process:
ndbcluster                                      # run NDB storage engine
ndb-connectstring=192.168.1.162,192.168.1.211   # location of management server

[mysql_cluster]
# Options for ndbd process:
ndb-connectstring=192.168.1.162,192.168.1.211   # location of management server
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2                        # 2 replicas across the 2 ndb nodes
DataMemory=1024M                      # How much memory to allocate for data storage
IndexMemory=512M
DiskPageBufferMemory=1048M
SharedGlobalMemory=384M
MaxNoOfExecutionThreads=4
RedoBuffer=32M
FragmentLogFileSize=256M
NoOfFragmentLogFiles=6

[ndb_mgmd]
# Management process options:
NodeId=1
HostName=192.168.1.162                # Hostname or IP address of MGM node
DataDir=/var/lib/mysql-cluster        # Directory for MGM node log files

[ndb_mgmd]
# Management process options:
NodeId=2
HostName=192.168.1.211                # Hostname or IP address of MGM node
DataDir=/var/lib/mysql-cluster        # Directory for MGM node log files

[ndbd]
# lbindb1
HostName=192.168.1.164                # Hostname or IP address
DataDir=/var/lib/mysql-cluster/data   # Directory for this data node's data files

[ndbd]
# lbindb2
HostName=192.168.1.163                # Hostname or IP address
DataDir=/var/lib/mysql-cluster/data   # Directory for this data node's data files

# SQL nodes
[mysqld]
HostName=192.168.1.162

[mysqld]
HostName=192.168.1.211
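As a rough sanity check on that config, the dominant per-ndbd allocations should fit comfortably in the data nodes’ 8GB of RAM (back-of-the-envelope only; ndbmtd allocates more than just these buffers):

```shell
# Sum the main memory settings from config.ini (values in MB).
data=1024; index=512; diskpage=1048; shared=384; redo=32
total=$((data + index + diskpage + shared + redo))
echo "${total}M of 8192M"   # leaves plenty of headroom for the OS and other buffers
```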
Once the configuration files were in place, I started the cluster with these commands (NOTE: make sure that the firewalls are properly configured first):
mm0> ndb_mgmd --ndb-nodeid=1 -f /var/lib/mysql-cluster/config.ini
mm0> service mysql start
mm1> ndb_mgmd --ndb-nodeid=2 -f /var/lib/mysql-cluster/config.ini
mm1> service mysql start
lbindb1> ndbmtd
lbindb2> ndbmtd
To verify that my cluster was really running, I logged in to one of the MGM nodes and ran ndb_mgm to check the node status.
I was able to set this up a few weeks back. Unfortunately, I haven’t had the chance to really test it with our ETL scripts… I was occupied with other responsibilities…
Thinking about it now, I may have to scrap the whole cluster and install MySQL with InnoDB + lots of RAM! Hmmm… maybe I’ll benchmark it first…
We’re in the process of retiring our non-blade servers to free up space and reduce power usage. This move affects our 1U backup servers, so we have to migrate them to blades as well.
I was setting-up a blade server as a replacement for one of our backup servers when I encountered a problem…
But before I get into that, here are the specs of the blade:
- Supermicro Blade SBI-7125W-S6 (circa 2008)
- Intel Xeon E5405
- 8 GB DDR2
- LSI RAID 1078
- 6 x 750 GB Seagate Momentus XT (ST750LX003)
The original plan was to set up these drives as a RAID 5 array, about 3.5+ TB in size. The RAID controller can handle that size, so Rich, my colleague who did the initial setup of the blade & the hard drives, did not encounter a problem.
I was cruising through the remote installation process until I hit a snag at the disk partitioning stage. The installer wouldn’t use the entire space of the RAID array; it would only create partitions up to a total size of 2TB.
I found it unusual because I’ve created bigger arrays before using software RAID and this problem did not manifest. After a little googling, I found out that it has something to do with the limitations of the Master Boot Record (MBR). The solution is to use a GUID Partition Table (GPT), as advised by this discussion.
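The 2TB ceiling falls straight out of the MBR on-disk format: partition start and length are stored as 32-bit sector counts, so with 512-byte sectors the addressable limit works out to exactly 2 TiB:

```shell
# MBR stores partition sizes as 32-bit sector counts:
# 2^32 sectors x 512 bytes/sector = 2 TiB
max_bytes=$((4294967296 * 512))                      # 4294967296 = 2^32
echo "$((max_bytes / (1024*1024*1024*1024))) TiB"    # prints: 2 TiB
```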
I had two options at this point:
- go as originally planned, use GPT, and hope that the SBI-7125W-S6 can boot with it, or…
- create 2 arrays, one small (that will use MBR so the server can boot) and one large (that will use GPT so that the disk space can be used in its entirety)
I tried option #1, and it failed. The blade won’t boot at all, primarily because the server has a BIOS, not EFI firmware.
And so I’m left with option #2…
The server has six drives. To implement option #2, my plan was to create this setup:
- 2 drives at RAID 1 – will become /dev/sda, MBR, 750GB, main system drive (/)
- 4 drives at RAID 5 – will become /dev/sdb, GPT, 2.x+TB, will be mounted later
The LSI RAID 1078 supports this kind of setup, so I’m in luck. I decided to use RAID 1 & RAID 5 because redundancy is the primary concern; size is secondary.
This is where IPMI shines: I can reconfigure the RAID array remotely using the KVM console of IPMIView, as if I were physically there at the data center. With the KVM access, I created 2 disk groups using the Web BIOS of the RAID controller.
Now that the arrays are up, I went through the CentOS 6 installation process again. The installer detected the 2 arrays, so no problem there. I configured /dev/sda with 3 partitions and left /dev/sdb unconfigured (it can be configured easily later once CentOS is up).
In case you’re wondering, I added a 3.8GB LVM PV since this server will become a node of our ganeti cluster, to store VM snapshots.
The CentOS installation booted successfully this time. Now that the system’s working, it’s time to configure /dev/sdb.
I installed the EPEL repo first, then parted:
$ wget -c http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-6.noarch.rpm
$ wget -c https://fedoraproject.org/static/0608B895.txt
$ rpm -Uvh epel-release-6-6.noarch.rpm
$ rpm --import 0608B895.txt
$ yum install parted
Then, I configured /dev/sdb to use GPT, then formatted the whole partition as ext4:
$ parted /dev/sdb mklabel gpt
$ parted /dev/sdb
(parted) mkpart primary ext4 1 -1
(parted) quit
$ mkfs.ext4 -L data /dev/sdb
To mount /dev/sdb, I needed to find out its UUID first:
$ ls -lh /dev/disk/by-uuid/ | grep sdb
lrwxrwxrwx. 1 root root 9 May 12 15:07 858844c3-6fd8-47e9-90a4-0d10c0914eb5 -> ../../sdb
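For reference, the UUID can also be pulled out of that listing programmatically (blkid /dev/sdb would print it directly, too). A sketch using the sample line from above:

```shell
# Extract the UUID symlinked to a given device from `ls -l /dev/disk/by-uuid` output.
sample='lrwxrwxrwx. 1 root root 9 May 12 15:07 858844c3-6fd8-47e9-90a4-0d10c0914eb5 -> ../../sdb'
uuid=$(printf '%s\n' "$sample" | awk '$NF == "../../sdb" { print $(NF-2) }')
echo "$uuid"
```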
Once I had the right UUID, I added this line to /etc/fstab so that /dev/sdb will be mounted at /home/backup/log-dump/:
UUID=858844c3-6fd8-47e9-90a4-0d10c0914eb5 /home/backup/log-dump ext4 noatime,defaults 1 1
The partition is now ready to be mounted and used:
$ useradd backup
$ mkdir -p /home/backup/log-dump
$ mount /home/backup/log-dump
$ chown backup.backup -R /home/backup/log-dump
There, another problem solved. Thanks to the internet and the Linux community!
After a few days of copying files to this new array, this is what it looks like now:
/dev/sdb is almost used up already
Well… not really a beast by today’s standards… but it’s a beast if we’re talking about processor blades for a 10-blade Supermicro enclosure. Supermicro has released its Sandy Bridge Xeon blades, but those are only available for the 14-blade enclosure for now.
I wrote a post before about our test bed machine to figure out if SSDs are a good route for us. We concluded that they are, so we opted to go with Intel’s enterprise offering, the 710 series.
The Intel 710 SSD is really not inexpensive. I decided to go with a RAID 0 of 2 Intel 710 200GB drives for 2 reasons: (1) to increase space & (2) to distribute writes for (much) longer write endurance.
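Back-of-the-envelope figuring for that choice (drive count and capacity are this build’s; the endurance claim is a rough approximation, since RAID 0 striping spreads writes roughly evenly across members):

```shell
drives=2
per_drive_gb=200
total_gb=$((drives * per_drive_gb))
echo "usable space: ${total_gb}GB"   # RAID 0 sums member capacities
# Each drive absorbs roughly 1/2 of the total write workload,
# stretching its write endurance accordingly.
```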
- Supermicro SBI-7126T-S6
- 2 x Intel Xeon X5650
- 2 x Intel 710 200G SSD (RAID0)
- 8 x 8 GB DDR3
The X5650 is a 6-core CPU, but because of Hyper-Threading it registers as 12 logical cores.
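The arithmetic for the whole blade, given the 2 sockets in the spec list above:

```shell
sockets=2
cores_per_socket=6
threads_per_core=2   # Hyper-Threading
logical=$((sockets * cores_per_socket * threads_per_core))
echo "${logical} logical CPUs"   # what nproc / /proc/cpuinfo would report
```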
The new blade was deployed as a ganeti node in one of our production clusters. So far, the 2 Xeon X5650s are doing well even with 8 CPU-hungry virtual machines.
I configured the server with growth in mind, thus the 64GB of memory. We’re not using that much RAM right now, but that will change quickly once the ganeti node starts hosting DB servers.
I was planning to perform some benchmarks once we deployed it, but I haven’t had the chance… Hopefully, I’ll have time when the next one arrives.
UPDATE: I gave the official release of Ubuntu 12.04 LTS another try and everything worked out of the box! Nice!!!
So I guess I’ll have to give Unity another chance… (so far, I find the HUD useful)
After almost 2 years since my laptop died on me, I decided to buy a replacement. I proposed the idea to my wife, June, and she approved (maybe because I’ve been using her laptop for the past 2 years).
I’m targeting a >= 12″ netbook since I’ve learned in the last 2 years that I don’t need that much processing power. My usage pattern can settle for an Atom or Brazos CPU since I use laptops mostly as a terminal; the grunt work is done on servers. Besides, I don’t want to haul a 2kg+ brick.
There’s a plethora of netbooks from different OEMs nowadays, so there is a lot to choose from. I narrowed my list down to these two: the Lenovo ThinkPad Edge E125 or the HP DM1. It was a tough choice to make. But after scouring a few stores (in Cyberzone MegaMall) and weighing my options, I settled on the E125.
I chose the E125 for these reasons:
- Keyboard is better, IMHO
- 2 DIMM slots (I’m planning to upgrade it to 8GB in the future)
- no OS pre-installed
I tried installing Ubuntu 11.10 and the Ubuntu 12.04 beta, but neither was stable enough for my needs when I tested them. For one, my SBW Huawei dongle was experiencing intermittent connections, and I’m not convinced to switch to Unity yet.
This is the rundown of how I found device drivers for the Lenovo Thinkpad Edge E125.
sudo apt-get install build-essential linux-image-generic linux-headers-generic cdbs fakeroot dh-make debhelper debconf libstdc++6 dkms libqtgui4 wget execstack libelfg0 ia32-libs
Download the driver from the Qualcomm website (direct link).
mkdir -p ~/drivers/lan-atheros && cd ~/drivers/lan-atheros
mv ~/Downloads/alx-linux-v18.104.22.168.rar ./
# extract the archive and cd into the extracted source directory before building
sudo make && sudo make install
sudo modprobe alx
sudo add-apt-repository ppa:lexical/hwe-wireless
sudo apt-get update
sudo apt-get install rtl8192ce-dkms
sudo modprobe r8192ce_pci
I encountered a problem with the sound configuration: sound does not switch to the headphones when you plug them in, it just keeps playing through the laptop speakers. I was able to fix it by upgrading ALSA to version 1.0.25; just use this guide on how to do it (replacing 1.0.23 with 1.0.25).
I was able to install the latest ATI Catalyst drivers by following this guide. The installation was successful when I installed the driver manually.
Download the driver from the Realtek website. Make sure you switch to the superuser (a real root shell, not sudo) before running make; it will fail if you don’t.
mkdir ~/drivers/cardreader-realtek/ && cd ~/drivers/cardreader-realtek/
mv ~/Downloads/rts_pstor.tar.bz2 ./
tar -xjvf rts_pstor.tar.bz2
cd rts_pstor
sudo su
make
make install
depmod
exit
sudo apt-get install vim-gtk ubuntu-restricted-extras pidgin-otr pidgin-libnotify openssh-server subversion rapidsvn
I’ve been working on deploying our cluster for the past 2 days. It’s nice to see it running now, given that I’ve spent those 2 days reading the MySQL manual…
I’ll create a detailed post on how I did it when I have more time.
Here are my preliminary notes so far:
- only 2 files are needed, MySQL-Cluster-client-gpl-7.2.5-1.el6.x86_64.rpm & MySQL-Cluster-server-gpl-7.2.5-1.el6.x86_64.rpm
- --force is required to install the server package in CentOS 6.2
- make sure that IPs are static and firewalls are setup
- total ndb nodes must be multiples of NoOfReplicas (1 or 2)
- if mgm > 1, all mgms must be up first before you can issue commands (use --nowait-nodes to override)
- for ndb nodes, ensure that the DataDir exists
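The NoOfReplicas note above can be checked with quick shell arithmetic (node counts below are this cluster’s; the rule is that the data-node total must be an exact multiple of NoOfReplicas):

```shell
replicas=2     # NoOfReplicas from config.ini
ndb_nodes=2    # lbindb1 + lbindb2
remainder=$((ndb_nodes % replicas))
if [ "$remainder" -eq 0 ]; then
    echo "node count OK"
else
    echo "node count must be a multiple of NoOfReplicas" >&2
fi
```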
I’m just savoring the fruits of my labor… for this is only the beginning… *sigh*