Master-Master Replication Example using MMM
23 Apr 2007

Despite my high load at work, I decided to release mmm-1.0-pre2 today. It contains some small but critical fixes, with much more coming next week (or a little bit later if mysqlconf takes more time than I think).

After the first alpha release I’ve received lots of emails, some messages on the mmm-devel mailing list, and even some bug reports in the Google Code bug tracker. One of the most requested things was documentation. 😉 So, I decided to write some posts on this blog (sorry, non-SQL-related readers) and then compose docs for the final release from these posts and readers’ comments. This post is the first in the mmm series and describes how to use mmm in a simple master+master scheme where one master accepts write requests and both masters accept read requests. It provides detailed instructions on MySQL setup, permissions, mmm installation and configuration, and cluster management.

Network Infrastructure

All example configs in this article are based on the following network infrastructure:

  • Web Server + MMM Monitoring Server – 192.168.1.1
  • MySQL Server db1 – 192.168.1.111
  • MySQL Server db2 – 192.168.1.112

All servers are connected to the same switched network.

Software Prerequisites

Before you begin your setup, take a look at the following list of prerequisites for each server in your cluster to make sure you have all the mentioned packages/modules/etc.

Each MySQL server in the cluster needs the iproute2 package so that mmm can manage IP addresses on these servers with the ip command. As for Perl modules, you can run the install.pl script and it will tell you what you need to add to your system before installation is possible.
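
If you’re not sure whether iproute2 is present, you can check for the ip binary directly. The second command below is only an illustration of the kind of operation mmm performs (it assigns role IPs as secondary addresses on the cluster interface), so there’s no need to run it by hand:

# which ip
# ip addr add 192.168.1.200/32 dev eth0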

MySQL Servers Setup

First of all, you need to set up both of your MySQL servers to replicate data from each other. Example configs follow:

my.cnf on db1 should have the following options:

server-id = 1
log_bin = /var/log/mysql/mysql-bin.log

my.cnf on db2 should have the following options:

server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
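
One addition worth considering (not required by mmm, but common in master-master setups) is giving each master its own auto-increment offset so that keys can’t collide if both servers ever accept writes at once. Assuming MySQL 5.0.2 or later:

auto_increment_increment = 2
auto_increment_offset = 1

(Use auto_increment_offset = 2 on db2.)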

Replication settings (db1):

mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
...
mysql> change master to master_host='192.168.1.112', master_port=3306, master_user='replication', master_password='slave';
...
mysql> start slave;

Replication settings (db2):

mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
...
mysql> change master to master_host='192.168.1.111', master_port=3306, master_user='replication', master_password='slave';
...
mysql> start slave;

Once all these steps are done, your servers should show SHOW SLAVE STATUS results like the following:

mysql> show slave status\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.1.112
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000026
        Read_Master_Log_Pos: 98
             Relay_Log_File: db1-relay-bin.000339
              Relay_Log_Pos: 235
      Relay_Master_Log_File: mysql-bin.000026
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 98
            Relay_Log_Space: 235
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
1 row in set (0.00 sec)
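
Before layering mmm on top, it’s worth confirming that data actually flows in both directions. A quick manual test (the database and table names here are just examples):

mysql> create database mmm_test;
mysql> create table mmm_test.t (id int);
mysql> insert into mmm_test.t values (1);

Run these on db1 and check that select * from mmm_test.t returns the row on db2, then repeat in the other direction and drop the test database when you’re done.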

Setting up mmm agents

Each MySQL server should have one mmmd_agent instance running on it. To set them up, install mmm as follows:

# mkdir ~/mmm
# cd ~/mmm
# wget http://mysql-master-master.googlecode.com/files/mmm-1.0-pre2.tar.bz2
...
# tar xjf mmm-1.0-pre2.tar.bz2
# cd mmm-1.0-pre2
# ./install.pl
...
Installation is done!
#

After installing mmm you’ll need to configure your agents.

db1 config /usr/local/mmm/etc/mmm_agent.conf:

#
# Master-Master Manager config (agent)
#

# Debug mode
debug no

# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin

# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug

log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap

# MMMD command socket tcp-port and ip
bind_port 9989

# Cluster interface
cluster_interface eth0

# Define current server id
this db1
mode master

# For masters
peer db2

# Cluster hosts addresses and access params
host db1
    ip 192.168.1.111
    port 3306
    user rep_agent
    password RepAgent

host db2
    ip 192.168.1.112
    port 3306
    user rep_agent
    password RepAgent

db2 config /usr/local/mmm/etc/mmm_agent.conf:

#
# Master-Master Manager config (agent)
#

# Debug mode
debug no

# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin

# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug

log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap

# MMMD command socket tcp-port and ip
bind_port 9989

# Cluster interface
cluster_interface eth0

# Define current server id
this db2
mode master

# For masters
peer db1

# Cluster hosts addresses and access params
host db1
    ip 192.168.1.111
    port 3306
    user rep_agent
    password RepAgent

host db2
    ip 192.168.1.112
    port 3306
    user rep_agent
    password RepAgent
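
Note that both agent configs reference a rep_agent MySQL user which we haven’t created yet. This post doesn’t spell out the exact privileges mmmd_agent needs, so the simplest (if overly permissive) approach is a broad grant analogous to the monitoring user created below – run it on one server and replication will carry it to the other:

mysql> GRANT ALL PRIVILEGES on *.* to 'rep_agent'@'%' identified by 'RepAgent';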

Now you can run mmmd_agent on each server, and your servers will be ready for management with mmm.
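
The exact startup command isn’t covered here, but assuming install.pl placed the daemon under the bin_path from the config above, starting it should look something like this:

# /usr/local/mmm/bin/mmmd_agent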

MMM Server Installation and Configuration

When everything is done on the MySQL servers, you are ready to set up the monitoring node, which of course can be combined with a web server node or other services – dedicated hardware is not required. Before the configuration step you’ll need to install mmm just as it was done on the MySQL servers. Then you’ll need to create a configuration file for the mmmd_mon program, which will monitor your nodes. The config file for our example scheme could look like the following:

Config file for monitoring node – /usr/local/mmm/etc/mmm_mon.conf:

#
# Master-Master Manager config (monitor)
#

# Debug mode
debug no

# Paths
pid_path /usr/local/mmm/var/mmmd.pid
status_path /usr/local/mmm/var/mmmd.status
bin_path /usr/local/mmm/bin

# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug

log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap


# MMMD command socket tcp-port
bind_port 9988
agent_port 9989
monitor_ip 127.0.0.1

# Cluster interface
cluster_interface eth0

# Cluster hosts addresses and access params
host db1
    ip 192.168.1.111
    port 3306
    user rep_monitor
    password RepMonitor
    mode master
    peer db2

host db2
    ip 192.168.1.112
    port 3306
    user rep_monitor
    password RepMonitor
    mode master
    peer db1

#
# Define roles
#

active_master_role writer

# Mysql Reader role
role reader
    mode balanced
    servers db1, db2
    ip 192.168.1.201, 192.168.1.202

# Mysql Writer role
role writer
    mode exclusive
    servers db1, db2
    ip 192.168.1.200

#
# Checks parameters
#

# Ping checker
check ping
    check_period 1
    trap_period 5
    timeout 2

# Mysql checker
check mysql
    check_period 1
    trap_period  2
    timeout 2

# Mysql replication backlog checker
check rep_backlog
    check_period 5
    trap_period 10
    max_backlog 60
    timeout 2

# Mysql replication threads checker
check rep_threads
    check_period 1
    trap_period 5
    timeout 2

With this configuration file you will get three IP addresses used to “speak” with your cluster (see the connection example after the list):

  • Writer IP (192.168.1.200) – this address should be used to send write requests to your server.
  • Reader IPs (192.168.1.201 and 192.168.1.202) – addresses for read-only requests.
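
From an application’s point of view these are ordinary MySQL endpoints: your code connects to the virtual IPs instead of the physical servers, and mmm moves those IPs between machines as needed. For example (the app_user credentials are hypothetical – mmm doesn’t create application users for you):

# writes always go through the writer IP
mysql -h 192.168.1.200 -u app_user -p
# reads can be spread over the reader IPs
mysql -h 192.168.1.201 -u app_user -p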

Before you start the monitoring part of the cluster, you need to be sure that mmmd_mon will be able to connect to your servers with the credentials from the mmm_mon.conf file. Run the following command on one node and, if your replication was set up correctly (you’ve already tested it, right?), the other server will get this statement via replication:

mysql> GRANT ALL PRIVILEGES on *.* to 'rep_monitor'@'192.168.1.1' identified by 'RepMonitor';

MMM Monitoring and Management Hints

When your configuration is finished and mmmd_mon is started, you will want to take a look at the mmm_control script, a small program dedicated to sending commands to the mmmd_mon process and printing the results in a nice format. To view its usage info, start it without any parameters. At the moment you can use the following commands:

  • show – displays the list of servers with status info and bound roles.
  • ping – sends a ping command to the local mmmd_mon daemon to check whether it is running.
  • set_online host_name / set_offline host_name – changes the status of the specified server.
  • move_role role_name host_name – asks mmmd_mon to move the specified role to the specified host (useful for exclusive roles like writer).

When your mmmd_mon script is started for the first time, it will think that all servers were offline and have just come back. So the initial status of all servers will be set to AWAITING_RECOVERY, and you’ll need to set both servers to ONLINE:

# mmm_control set_online db1
Config file: /usr/local/mmm/mmm_mon.conf
[2007-04-23 09:49:15]: Sending command 'SET_ONLINE(db1)' to 127.0.0.1
Command sent to monitoring host. Result: OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!

# mmm_control set_online db2
Config file: /usr/local/mmm/mmm_mon.conf
[2007-04-23 09:49:53]: Sending command 'SET_ONLINE(db2)' to 127.0.0.1
Command sent to monitoring host. Result: OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!

# mmm_control show
Config file: /usr/local/mmm/mmm_mon.conf
[2007-04-23 09:50:31]: Sending command 'PING()' to 127.0.0.1
Daemon is running!
Servers status:
  db1(192.168.1.111): master/ONLINE. Roles: reader(192.168.1.201;), writer(192.168.1.200;)
  db2(192.168.1.112): master/ONLINE. Roles: reader(192.168.1.202;)
#
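
And if you ever need to move the writer role by hand – say, before taking db1 down for maintenance – the move_role command described above does exactly that:

# mmm_control move_role writer db2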

So, that’s it. If you have any questions or suggestions, you can leave them in the comments below or post them to the mmm mailing list.