PRM, MySQL HA and Pacemaker in the mix

PRM, short for “Percona Replication Manager”, is a resource agent for Pacemaker written by Yves Trudeau. He added master/slave and VIP handling to the existing MySQL RA. (The original post and source of this information can be found here)

While I was looking for a nice HA setup for my new MySQL machines, I ran into Yves’ blog post. So I dove straight in, made a test setup and started playing around (with barely any Pacemaker knowledge whatsoever). After some patching and discussion on the blog post, I finally got everything working as it should.

My setup started with 3 simple test machines. I installed Debian Squeeze on all of them: nothing special, just the standard install with the out-of-the-box kernel. I wanted to run MySQL 5.5 without compiling or downloading it myself (I like apt! ;)), so I added the dotdeb repository to my /etc/apt/sources.list; installation instructions can be found here.

Add all 3 servers by host name to each other’s /etc/hosts files so they can be reached via simple host names; this makes your Pacemaker configuration more readable. I installed mysql-server-5.5 from the dotdeb repository, and pacemaker and corosync from squeeze-backports (installation instructions for squeeze-backports can be found here):
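For example, the hosts entries could look like this (the 192.168.1.x addresses and db1–db3 host names are placeholders; use your own):

```
# /etc/hosts on every node
192.168.1.11  db1
192.168.1.12  db2
192.168.1.13  db3
```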

apt-get install mysql-server-5.5
apt-get -t squeeze-backports install corosync pacemaker

Versions at the time of writing this entry are:

  • MySQL: 5.5.19-1~dotdeb.1
  • Pacemaker: 1.1.6-2~bpo60+1
  • Corosync: 1.4.2-1~bpo60+1
  • Resource Agents: 1:3.9.2-3~bpo60+1

So now I have 3 servers running a default MySQL Server 5.5 and the Corosync/Pacemaker stack.
Configure Corosync as usual (set/copy the authkey and change bindnetaddr) and restart it on all nodes (default installation instructions can be found here; only read the parts about Corosync).
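The authkey is generated once with corosync-keygen and copied to /etc/corosync/authkey on the other nodes. bindnetaddr lives in the totem/interface section of /etc/corosync/corosync.conf and should be set to the network address (not a host address) of the interface the cluster communicates on; a sketch, assuming the 192.168.1.0/24 network from the examples above:

```
totem {
        version: 2
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}
```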

On all 3 MySQL servers, add a replication user with:

USE mysql;
INSERT INTO user (Host, User, Password, Select_priv, Reload_priv, Super_priv, Repl_slave_priv)
VALUES ('192.168.1.%', 'rep_client', PASSWORD('rep_passwd'), 'Y', 'Y', 'Y', 'Y');
FLUSH PRIVILEGES;
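Inserting into the grant tables directly works, but the equivalent GRANT statement (which needs no FLUSH PRIVILEGES) may be the safer route — it grants the same four privileges as the INSERT above:

```
GRANT SELECT, RELOAD, SUPER, REPLICATION SLAVE
    ON *.* TO 'rep_client'@'192.168.1.%'
    IDENTIFIED BY 'rep_passwd';
```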

Replace the file “mysql” from the resource-agents package with the one Yves made; the file is located in /usr/lib/ocf/resource.d/heartbeat/.
Yves’ version can be downloaded from his github, or directly with this link.

Continue when all nodes are online in your cluster.
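Node status can be checked with crm_mon; with the host names from the examples above, the one-shot output should contain a line roughly like this:

```
crm_mon -1
...
Online: [ db1 db2 db3 ]
```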

The configuration I used for my setup can be downloaded here.

As you can see, I’m not using the writerOK attribute to decide where to put the writer_vip resource. To be honest, the writer_vip resource assignment is currently a bit buggy and I couldn’t get it to work properly, so I used my own work-around: let Pacemaker decide and “set the writer_vip where ms_MySQL has the role of ‘Master’”. That works like a charm.
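In crm shell syntax, that work-around boils down to a colocation constraint (plus an ordering constraint so the VIP only starts after promotion) along these lines — writer_vip and ms_MySQL are the resource names from my configuration; adjust them to yours:

```
colocation writer_vip_on_master inf: writer_vip ms_MySQL:Master
order writer_vip_after_master inf: ms_MySQL:promote writer_vip:start
```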

I also gave 2 of the 3 servers a preferred master role; the 3rd server will only be used as a slave.
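A master preference like that can be expressed with location rules scoped to the Master role — the host names db1/db2 and the scores here are illustrative, not from my actual configuration:

```
location prefer_master_db1 ms_MySQL rule $role=master 100: #uname eq db1
location prefer_master_db2 ms_MySQL rule $role=master 50: #uname eq db2
```

The unnamed third node gets no rule, so it will only be promoted if both preferred nodes are unavailable.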

Start crm on one of the servers and enter the configuration that I used. (Remember to change the IP addresses and host names to reflect your own setup.)

Verify and then commit your configuration; everything should come up now.
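A typical session looks roughly like this (the crm(live)configure# prompts come from the crm shell itself):

```
# crm configure
crm(live)configure# edit      <- paste in / adjust the configuration
crm(live)configure# verify
crm(live)configure# commit
```

Afterwards, crm_mon should show the MySQL master/slave set and the VIPs starting up.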

One thought on “PRM, MySQL HA and Pacemaker in the mix”

  1. A huge thank you for posting this brief tutorial and your configuration. It finally helped me get MySQL replication in a Pacemaker environment up and running after fiddling around with the old ocf:heartbeat:mysql script.
