
Tuesday, April 02, 2019

dbdeployer cookbook - Advanced techniques

In the previous post about the dbdeployer recipes we saw the basics of using the cookbook command and the simpler tutorials that the recipes offer.

Here we will see some more advanced techniques, and more demanding examples.


We saw that the recipe for a single deployment would get NOTFOUND as its default version when no versions were available, or the highest MySQL version when one was found.

$ dbdeployer cookbook  show single | grep version=
version=$1
[ -z "$version" ] && version=8.0.16

But what if we want the latest Percona Server or MariaDB for this recipe? One solution would be to run the script with an argument, but we can ask dbdeployer to find the most recent version for a given flavor and use it in our recipe:

$ dbdeployer cookbook  show single --flavor=percona | grep version=
version=$1
[ -z "$version" ] && version=ps8.0.15

$ dbdeployer cookbook  show single --flavor=pxc | grep version=
version=$1
[ -z "$version" ] && version=pxc5.7.25

$ dbdeployer cookbook  show single --flavor=mariadb | grep version=
version=$1
[ -z "$version" ] && version=ma10.4.3

This works for all the recipes that don’t require a given flavor. When one is indicated (see dbdeployer cookbook list) you can override it using --flavor, but do so at your own risk. Running the ndb recipe with the pxc flavor won’t produce anything usable.
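
If you prefer to generate the script rather than just display it, the same flavor selection applies to the create command. A minimal sketch, assuming the generated script ends up under ./recipes/ (the exact file name depends on the recipe definition, so check the directory after creation):

$ dbdeployer cookbook create single --flavor=percona
$ ./recipes/single.sh ps5.7.25    # hypothetical file name; the argument overrides the baked-in default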


Replication between sandboxes

When I proposed dbdeployer support for NDB, the immediate reaction was that this was good to test cluster-to-cluster replication. Although I did plenty of such topologies in one of my previous jobs, I had limited experience replicating between single or composite sandboxes. Thus, I started thinking about how to do it. In the old MySQL-Sandbox, I had an option --slaveof that allowed a single sandbox to replicate from an existing one. I did not implement the same thing in dbdeployer, because that solution looked limited, and only useful in a few scenarios.

I wanted something more dynamic, and initially I thought of creating a grandiose scheme, involving custom templates and user-defined fillers. While I may end up doing that some day, I quickly realized that it was overkill for this purpose, and that the sandboxes already had all the information needed to replicate from and to every other sandbox. I just had to expose the data in such a way that it could be used to plug one sandbox into another.

Now every sandbox has a script named replicate_from, and a companion script called metadata. Using a combination of the two (in fact, replicate_from on the would-be replica calls metadata from the donor) we can quickly define the replication command needed for most situations.
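
In practice, this means that plugging one existing sandbox into another is a single call. A minimal sketch, with hypothetical sandbox names:

$ cd $HOME/sandboxes/msb_5_7_25_1    # the would-be replica
$ ./replicate_from msb_5_7_25_2      # donor sandbox, referenced by directory name
# replicate_from asks the donor's metadata script for coordinates such as host
# and port, then issues CHANGE MASTER TO and START SLAVE on the replica.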


Replication between single sandboxes

Before we tackle the most complex one, let’s demonstrate that the system works with a simple case.

There is a recipe named replication_between_single that creates a file named, aptly, ./recipes/replication-between-single.sh.

If you run it, you will see something similar to the following:

$ ./recipes/replication-between-single.sh  5.7.25
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_1 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_1
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_2 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_2
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer sandboxes --full-info
.--------------.--------.---------.---------------.--------.-------.--------.
|     name     |  type  | version |     ports     | flavor | nodes | locked |
+--------------+--------+---------+---------------+--------+-------+--------+
| msb_5_7_25_1 | single | 5.7.25  | [5725 ]       | mysql  |     0 |        |
| msb_5_7_25_2 | single | 5.7.25  | [5726 ]       | mysql  |     0 |        |
'--------------'--------'---------'---------------'--------'-------'--------'
0
+ $HOME/sandboxes/msb_5_7_25_1/replicate_from msb_5_7_25_2
Connecting to $HOME/sandboxes/msb_5_7_25_2
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=5726,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=4089
--------------

--------------
start slave
--------------

              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 4089
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 4089
           Retrieved_Gtid_Set:
            Executed_Gtid_Set: 00005725-0000-0000-0000-000000005725:1-16
                Auto_Position: 0
0
# Inserting data in msb_5_7_25_2
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from msb_5_7_25_1
+ $HOME/sandboxes/msb_5_7_25_1/use -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
|  1 |      5726 |   5725 |
+----+-----------+--------+

The script deploys two sandboxes of the chosen version, using different directory names (dbdeployer takes care of choosing a free port) and then starts replication between the two using $SANDBOX1/replicate_from $SANDBOX2. Then a quick test shows that the data created in one sandbox can be retrieved in the other.
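
Stripped of error handling, the core of the recipe boils down to a few commands. This is a condensed, illustrative sketch; the generated script does more checking:

version=5.7.25
dbdeployer deploy single $version --master --gtid --port-as-server-id \
    --sandbox-directory=msb_${version//./_}_1
dbdeployer deploy single $version --master --gtid --port-as-server-id \
    --sandbox-directory=msb_${version//./_}_2
# plug sandbox 1 into sandbox 2 as a replica
$HOME/sandboxes/msb_${version//./_}_1/replicate_from msb_${version//./_}_2
# quick check: data written to the donor shows up on the replica
$HOME/sandboxes/msb_${version//./_}_2/use -e 'create table if not exists test.t1 (id int not null primary key)'
$HOME/sandboxes/msb_${version//./_}_2/use -e 'insert into test.t1 values (1)'
$HOME/sandboxes/msb_${version//./_}_1/use -e 'select * from test.t1'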


Replication between group replication clusters

The method used to replicate between two group replication clusters is similar to the one seen for single sandboxes. The script replicate_from on the group top directory delegates the replication task to its first node, which points to the second group.

$ ./recipes/replication-between-groups.sh  5.7.25
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_1
[...]
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_2
[...]
+ dbdeployer sandboxes --full-info
.----------------.---------------------.---------.----------------------------------------.--------.-------.--------.
|      name      |        type         | version |                 ports                  | flavor | nodes | locked |
+----------------+---------------------+---------+----------------------------------------+--------+-------+--------+
| group_5_7_25_1 | group-multi-primary | 5.7.25  | [20226 20351 20227 20352 20228 20353 ] | mysql  |     3 |        |
| group_5_7_25_2 | group-multi-primary | 5.7.25  | [20229 20354 20230 20355 20231 20356 ] | mysql  |     3 |        |
'----------------'---------------------'---------'----------------------------------------'--------'-------'--------'
0
+ $HOME/sandboxes/group_5_7_25_1/replicate_from group_5_7_25_2
Connecting to $HOME/sandboxes/group_5_7_25_2/node1
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=20229,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=1082
--------------

--------------
start slave
--------------

              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 1082
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 1082
           Retrieved_Gtid_Set:
            Executed_Gtid_Set: 00020225-bbbb-cccc-dddd-eeeeeeeeeeee:1-3
                Auto_Position: 0
0
# Inserting data in group_5_7_25_2 node1
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from one of group_5_7_25_1 nodes
# At this point, the data was replicated twice
+ $HOME/sandboxes/group_5_7_25_1/n2 -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
|  1 |     20229 |  20227 |
+----+-----------+--------+

The interesting thing about this recipe is that the sandboxes are created using the option --port-as-server-id. While it was also used in the replication between single sandboxes out of an abundance of caution, in this recipe, and in all the recipes involving compound sandboxes, it is a necessity: replication would fail if the primary and replica servers had the same server_id.

All the work is done by the replicate_from script, which knows how to check whether the target is a single sandbox or a composite one, and where to find the primary server.
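
The calling convention stays the same regardless of the sandbox type, so mixing topologies costs nothing extra. For instance, with illustrative directory names:

# single sandbox replicating from a group (node1 of the group acts as the donor)
$HOME/sandboxes/msb_5_7_25_1/replicate_from group_5_7_25_2
# a group replicating from a master/slave deployment (hypothetical directory name)
$HOME/sandboxes/group_5_7_25_1/replicate_from ms_5_7_25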

Using a similar method, we can run more recipes along the same lines.


Replication between different things

I won’t reproduce the output of all recipes here. I will just mention what every recipe needs to prepare to ensure a positive outcome.

  • Replication between NDB clusters. Nothing special here, except making sure to use a MySQL Cluster tarball. If you don’t, dbdeployer will detect it and refuse the installation. For the rest, it’s like replication between groups.
  • Replication between master/slave. This is a bit trickier, because the replicated data arrives at a master, and if we want to propagate it to that master’s slaves we need to activate log-slave-updates. The recipe shows how to do it (see also the sketch after this list).
  • Replication between group and master/slave. In addition to the trick mentioned in the previous recipe, we need to make sure that the master/slave deployment is using GTID.
  • Replication between master/slave and group. See the previous one.
  • Replication between group and single (and vice versa). We just need to make sure the single sandbox has GTID enabled.
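
As a sketch of the preparation mentioned in the list, a master/slave deployment ready to act as a replication target could be created like this. The --gtid and --port-as-server-id flags appear in the transcripts above; passing log-slave-updates through --my-cnf-options is an assumption about one way to enable it, not necessarily what the recipe does:

$ dbdeployer deploy replication 5.7.25 --topology=master-slave --gtid \
    --my-cnf-options=log-slave-updates \
    --port-as-server-id --sandbox-directory=ms_5_7_25_1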

Replication between different versions

This is a simple recipe that comes from a feature request. All you need to do is make sure that the version on the master is lower than the one on the slaves. The recipe script, replication-multi-versions.sh, looks for tarballs of 5.6, 5.7, and 8.0, but you can start it with any three versions you’d like. For example:

./recipes/replication-multi-versions.sh 5.7.23 5.7.24 5.7.25

The first version will be used as the master.


Circular replication

I didn’t want to do this, as I consider ring replication to be weak and difficult to handle. I stated as much in the feature request and in the list of dbdeployer features. But then I saw that, with the latest enhancements, it was so easy that I had to at least make a recipe for it. And there you have it: recipes/circular-replication.sh does what it promises, but the burden of maintenance is still on the user’s shoulders. I suggest looking at it, and then forgetting it.


Upgrade from MySQL 5.5 to 8.0 (through 5.6 and 5.7)

This is one of the most advanced recipes. To enjoy it, you need to have expanded tarballs from 5.5, 5.6, 5.7, and 8.0.

Provided that you do, running this script will do the following (a condensed sketch of one upgrade hop follows the list):

  1. Deploy MySQL 5.5
  2. Create a table upgrade_log and insert some data
  3. Deploy MySQL 5.6
  4. Run mysql_upgrade (through dbdeployer)
  5. Add data to the log table
  6. Deploy MySQL 5.7
  7. Run mysql_upgrade again
  8. Add data to the log table
  9. Deploy MySQL 8.0
  10. Run mysql_upgrade for the last time
  11. Show the data from the table
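
Each hop follows the same pattern. Condensed, the 5.5 to 5.6 step amounts to the commands below (taken from the transcript that follows; error checking omitted):

dbdeployer deploy single 5.5.53 --master
dbdeployer deploy single 5.6.41 --master
$HOME/sandboxes/msb_5_5_53/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
$HOME/sandboxes/msb_5_5_53/use -e "INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, 'original')"
dbdeployer admin upgrade msb_5_5_53 msb_5_6_41   # moves the data directory and runs mysql_upgrade
dbdeployer delete msb_5_5_53                     # the old sandbox is no longer operational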

Here’s a full transcript of the operation. It’s interesting to see how the upgrade procedure has changed from older versions to current ones.


$ ./recipes/upgrade.sh

# ****************************************************************************
# Upgrading from 5.5.53 to 5.6.41
# ****************************************************************************
+ dbdeployer deploy single 5.5.53 --master
Database installed in $HOME/sandboxes/msb_5_5_53
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ dbdeployer deploy single 5.6.41 --master
Database installed in $HOME/sandboxes/msb_5_6_41
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_5_53/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_5_53/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_5_53 msb_5_6_41
stop $HOME/sandboxes/msb_5_5_53
stop $HOME/sandboxes/msb_5_6_41
Data directory msb_5_5_53/data moved to msb_5_6_41/data
. sandbox server started
Looking for 'mysql' as: $HOME/opt/mysql/5.6.41/bin/mysql
Looking for 'mysqlcheck' as: $HOME/opt/mysql/5.6.41/bin/mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.event                                        OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.ndb_binlog_index                             OK
mysql.plugin                                       OK
mysql.proc                                         OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.servers                                      OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
test.upgrade_log                                   OK
OK

The data directory from msb_5_6_41/data is preserved in msb_5_6_41/data-msb_5_6_41
The data directory from msb_5_5_53/data is now used in msb_5_6_41/data
msb_5_5_53 is not operational and can be deleted
+ dbdeployer delete msb_5_5_53
List of deployed sandboxes:
$HOME/sandboxes/msb_5_5_53
Running $HOME/sandboxes/msb_5_5_53/stop
Running rm -rf $HOME/sandboxes/msb_5_5_53
Directory $HOME/sandboxes/msb_5_5_53 deleted
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_6_41/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
+----+-----------+------------+----------+---------------------+

# ****************************************************************************
# The upgraded database is now upgrading from 5.6.41 to 5.7.25
# ****************************************************************************
+ dbdeployer deploy single 5.7.25 --master
Database installed in $HOME/sandboxes/msb_5_7_25
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_6_41/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_6_41 msb_5_7_25
stop $HOME/sandboxes/msb_5_6_41
stop $HOME/sandboxes/msb_5_7_25
Data directory msb_5_6_41/data moved to msb_5_7_25/data
.. sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.engine_cost                                  OK
mysql.event                                        OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.gtid_executed                                OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.innodb_index_stats                           OK
mysql.innodb_table_stats                           OK
mysql.ndb_binlog_index                             OK
mysql.plugin                                       OK
mysql.proc                                         OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.server_cost                                  OK
mysql.servers                                      OK
mysql.slave_master_info                            OK
mysql.slave_relay_log_info                         OK
mysql.slave_worker_info                            OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
Upgrading the sys schema.
Checking databases.
sys.sys_config                                     OK
test.upgrade_log
error    : Table rebuild required. Please do "ALTER TABLE `upgrade_log` FORCE" or dump/reload to fix it!

Repairing tables
`test`.`upgrade_log`
Running  : ALTER TABLE `test`.`upgrade_log` FORCE
status   : OK
Upgrade process completed successfully.
Checking if update is needed.

The data directory from msb_5_7_25/data is preserved in msb_5_7_25/data-msb_5_7_25
The data directory from msb_5_6_41/data is now used in msb_5_7_25/data
msb_5_6_41 is not operational and can be deleted
+ dbdeployer delete msb_5_6_41
List of deployed sandboxes:
$HOME/sandboxes/msb_5_6_41
Running $HOME/sandboxes/msb_5_6_41/stop
Running rm -rf $HOME/sandboxes/msb_5_6_41
Directory $HOME/sandboxes/msb_5_6_41 deleted
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_7_25/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
|  3 |      5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
|  4 |      5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
+----+-----------+------------+----------+---------------------+

# ****************************************************************************
# The further upgraded database is now upgrading from 5.7.25 to 8.0.15
# ****************************************************************************
+ dbdeployer deploy single 8.0.15 --master
Database installed in $HOME/sandboxes/msb_8_0_15
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ $HOME/sandboxes/msb_5_7_25/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_7_25 msb_8_0_15
stop $HOME/sandboxes/msb_5_7_25
Attempting normal termination --- kill -15 10357
stop $HOME/sandboxes/msb_8_0_15
Data directory msb_5_7_25/data moved to msb_8_0_15/data
... sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Upgrading system table data.
Checking system database.
mysql.columns_priv                                 OK
mysql.component                                    OK
mysql.db                                           OK
mysql.default_roles                                OK
mysql.engine_cost                                  OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.global_grants                                OK
mysql.gtid_executed                                OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.innodb_index_stats                           OK
mysql.innodb_table_stats                           OK
mysql.ndb_binlog_index                             OK
mysql.password_history                             OK
mysql.plugin                                       OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.role_edges                                   OK
mysql.server_cost                                  OK
mysql.servers                                      OK
mysql.slave_master_info                            OK
mysql.slave_relay_log_info                         OK
mysql.slave_worker_info                            OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
Found outdated sys schema version 1.5.1.
Upgrading the sys schema.
Checking databases.
sys.sys_config                                     OK
test.upgrade_log                                   OK
Upgrade process completed successfully.
Checking if update is needed.

The data directory from msb_8_0_15/data is preserved in msb_8_0_15/data-msb_8_0_15
The data directory from msb_5_7_25/data is now used in msb_8_0_15/data
msb_5_7_25 is not operational and can be deleted
+ dbdeployer delete msb_5_7_25
List of deployed sandboxes:
$HOME/sandboxes/msb_5_7_25
Running $HOME/sandboxes/msb_5_7_25/stop
Running rm -rf $HOME/sandboxes/msb_5_7_25
Directory $HOME/sandboxes/msb_5_7_25 deleted
+ $HOME/sandboxes/msb_8_0_15/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_8_0_15/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers       | urole    | ts                  |
+----+-----------+------------+----------+---------------------+
|  1 |      5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
|  2 |      5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
|  3 |      5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
|  4 |      5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
|  5 |      5725 | 5.7.25-log | original | 2019-04-01 20:28:07 |
|  6 |      8015 | 8.0.15     | upgraded | 2019-04-01 20:28:20 |
+----+-----------+------------+----------+---------------------+

What else can we do?

The replication recipes seen so far use the same principles. The method used in these recipes doesn’t work for all-masters and fan-in replication, because mixing named channels and nameless ones is not allowed. Also, there are things that don’t respond to replication commands at all, like TiDB. But it should be easy to enhance the current scripts (or to add more specialized ones) to cover these exceptions as well. Given the recent wave of collaboration, I expect it will happen relatively soon.

Monday, April 22, 2013

Installing and administering Tungsten Replicator - Part 2 : advanced

Switching roles

To get a taste of the power of Tungsten Replicator, we will show how to switch roles. This is a controlled operation (as opposed to fail-over), where we can decide when to switch and which nodes are involved.

In our topology, host1 is the master, and we have three slaves. We can either ask for a switch and let the script select the first available slave, or tell the script which slave should be promoted. The script will show us the steps needed to perform the operation.

IMPORTANT! Please note that this operation is not risk free. Tungsten Replicator is a simple replication system, not a complete management tool like Continuent Tungsten. With the replicator, you must make sure that the applications have stopped writing to the master before starting the switch, and then you should point the applications to the new master when the operation is done.

$ cookbook/switch host2
# Determining current roles
host1 master
host2 slave
host3 slave
host4 slave
# Will promote host2 to be the new master server
# Waiting for slaves to catch up and pausing replication
trepctl -host host2 wait -applied 5382
trepctl -host host2 offline
trepctl -host host3 wait -applied 5382
trepctl -host host3 offline
trepctl -host host4 wait -applied 5382
trepctl -host host4 offline
trepctl -host host1 offline
# Reconfiguring server roles and restarting replication
trepctl -host host2 setrole -role master
trepctl -host host2 online
trepctl -host host1 setrole -role slave -uri thl://host2:2112
trepctl -host host1 online
trepctl -host host3 setrole -role slave -uri thl://host2:2112
trepctl -host host3 online
trepctl -host host4 setrole -role slave -uri thl://host2:2112
trepctl -host host4 online
--------------------------------------------------------------------------------------
Topology: 'MASTER_SLAVE'
--------------------------------------------------------------------------------------
# node host1
cookbook  [slave]   seqno:      5,384  - latency:   2.530 - ONLINE

# node host2
cookbook  [master]  seqno:      5,384  - latency:   2.446 - ONLINE

# node host3
cookbook  [slave]   seqno:      5,384  - latency:   2.595 - ONLINE

# node host4
cookbook  [slave]   seqno:      5,384  - latency:   2.537 - ONLINE

As you can see from the listing above, the script displays the steps for the switch, using trepctl as a centralized tool.

Under load

After the simple installation in Part 1, we saw that we can test the flow of replication using 'cookbook/test_cluster'. That's a very simple set of operations that merely checks if replication is working. If we want to perform more serious tests, we should apply a demanding load to the replication system.

If you don't have applications that can exercise the servers to your liking, you should be pleased to know that Tungsten Replicator ships with a built-in application for data loading and benchmarking. Inside the expanded tarball, there is a directory named bristlecone, containing the software for such testing tools. There is a detailed set of instructions under './bristlecone/doc'. For the impatient, there is a cookbook recipe that starts a reasonable load with a single command:

$ cookbook/load_data start
# Determining current roles
# Evaluator started with pid 28370
# Evaluator details are available at /home/tungsten/installs/cookbook/tungsten/load/host1/evaluator.job
# Evaluator output can be monitored at /home/tungsten/installs/cookbook/tungsten/load/host1/evaluator.log

$ cat /home/tungsten/installs/cookbook/tungsten/load/host1/evaluator.job
Task started at   : Sun Apr  7 18:20:00 2013
Task started from : /home/tungsten/tinstall/current
Executable        : /home/tungsten/installs/cookbook/tungsten/bristlecone/bin/evaluator.sh
Process ID        : 28370
Using             : /home/tungsten/installs/cookbook/tungsten/load/host1/evaluator.xml
Process id        : /home/tungsten/installs/cookbook/tungsten/load/host1/evaluator.pid
Log               : /home/tungsten/installs/cookbook/tungsten/load/host1/evaluator.log
Database          : host1
Table prefix      : tbl
Host              : host1
Port              : 3306
User              : tungsten
Test duration     : 3600

$  tail /home/tungsten/installs/cookbook/tungsten/load/host1/evaluator.log
18:22:18,672 INFO  1365351738672 10/10 5035.0 ops/sec 0 ms/op 28380 rows/select 41 updates 54 deletes 166 inserts
18:22:20,693 INFO  1365351740693 10/10 4890.0 ops/sec 0 ms/op 26746 rows/select 57 updates 37 deletes 144 inserts
18:22:22,697 INFO  1365351742697 10/10 4986.0 ops/sec 0 ms/op 28183 rows/select 59 updates 46 deletes 162 inserts
18:22:24,716 INFO  1365351744716 10/10 5208.0 ops/sec 0 ms/op 29067 rows/select 51 updates 51 deletes 171 inserts
18:22:26,736 INFO  1365351746736 10/10 4856.0 ops/sec 0 ms/op 27695 rows/select 46 updates 68 deletes 141 inserts
18:22:28,739 INFO  1365351748739 10/10 5022.0 ops/sec 0 ms/op 28269 rows/select 51 updates 58 deletes 145 inserts
18:22:30,758 INFO  1365351750758 10/10 4893.0 ops/sec 0 ms/op 28484 rows/select 47 updates 50 deletes 165 inserts
18:22:32,777 INFO  1365351752777 10/10 4501.0 ops/sec 0 ms/op 26481 rows/select 42 updates 52 deletes 130 inserts
18:22:34,781 INFO  1365351754781 10/10 5057.0 ops/sec 0 ms/op 30450 rows/select 58 updates 53 deletes 157 inserts
18:22:36,801 INFO  1365351756801 10/10 5087.0 ops/sec 0 ms/op 30845 rows/select 55 updates 56 deletes 156 inserts

What happens here?

The evaluator process is started using a file named 'evaluator.xml', which is generated dynamically. The cookbook recipe detects the current master in the replication system and directs the operations there (in our case, it's 'host1'). The same task takes note of the process ID, which will be used to stop the evaluator when done, and sends the output to a file, where you can look at it if needed.

Looking at evaluator.log, you can see that there are quite a lot of operations going on. Most of them are read queries, as the application was designed to stress a database server as much as possible. Nonetheless, there are quite a lot of update operations, as a call to 'show_cluster' can confirm.

$ cookbook/show_cluster
--------------------------------------------------------------------------------------
Topology: 'MASTER_SLAVE'
--------------------------------------------------------------------------------------
# node host1
cookbook  [master]  seqno:     30,292  - latency:   0.566 - ONLINE

# node host2
cookbook  [slave]   seqno:     30,277  - latency:   0.531 - ONLINE

# node host3
cookbook  [slave]   seqno:     30,269  - latency:   0.511 - ONLINE

# node host4
cookbook  [slave]   seqno:     30,287  - latency:   0.550 - ONLINE

The load will continue for one hour (unless you defined a different duration). Should you want to stop it before then, you can run:

$ cookbook/load_data stop
# Determining current roles
# Stopping Evaluator at pid 28370

One important piece of information about this load application is that it looks for the masters in your cluster, and starts a load in every master. This is useful if you want to test a multi-master topology, such as the ones we will see in another article.

If the default behavior of load_data is not what you expect, you can further customize the load by fine tuning the application launcher. First, you run 'load_data' with the print option:

$ cookbook/load_data print
# Determining current roles
$HOME/installs/cookbook/tungsten/bristlecone/bin/concurrent_evaluator.pl \
    --deletes=1 \
    --updates=1 \
    --inserts=3 \
    --test-duration=3600 \
    --host=host1 \
    --port=3306 \
    -u tungsten \
    -p secret  \
    --continuent-root=/home/tungsten/installs/cookbook \
    -d host1 \
    -s /home/tungsten/installs/cookbook/tungsten/load/host1  start

Then, you can copy and paste the resulting command and run the concurrent_evaluator script with your own additions.

There are many options available. The manual is embedded in the application itself:

$ ./bristlecone/bin/concurrent_evaluator.pl --manual

An important option that we can use is --instances=N. This option will launch the evaluator N times concurrently, each time using a different schema. We will use this option to test parallel replication.

Backup

I am not going to stress here how important backups are. I assume (perhaps foolishly) that everyone reading this article knows why. Instead, I want to show how Tungsten Replicator supports backup and restore as integrated methods.

When you install Tungsten, you can add options to select a backup method and fine tune its behavior.

$ ./tools/tungsten-installer --help-master-slave -a |grep backup
--backup-directory            Permanent backup storage directory [$TUNGSTEN_HOME/backups]
                              This directory should be accessible by every replicator to ensure simple operation of backup and restore.
--backup-method               Database backup method (none|mysqldump|xtrabackup|script) [xtrabackup-full]
                              Tungsten integrates with a variety of backup mechanisms. We strongly recommend you configure one of these to help with provisioning servers. Please consult the Tungsten
                              Replicator Guide for more information on backup configuration.
--backup-dump-directory       Backup temporary dump directory [/tmp]
--backup-retention            Number of backups to retain [3]
--backup-script               What is the path to the backup script
--backup-command-prefix       Use sudo when running the backup script? [false]
--backup-online               Does the backup script support backing up a datasource while it is ONLINE [false]

First off, the default directory for backups is under your installation directory ($TUNGSTEN_HOME/backups). If you want to take backups through Tungsten, you must make sure that there is enough storage in that path to hold at least one backup. Tungsten will keep up to three backups in that directory, but you can configure this retention differently.

Second, the default backup method is 'mysqldump,' not because it is recommended, but because it is widely available. As you probably know, though, if your database is more than a few dozen GB, mysqldump is not an adequate method.

Tungsten Replicator provides support for xtrabackup. If xtrabackup is installed on your servers, you can define it as your default backup method. When you are installing a new cluster, you can do this:

$ export MORE_OPTIONS='-a --backup-method=xtrabackup --backup-command-prefix=true'
$ cookbook/install_master_slave
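
The same mechanism accepts the other backup options from the installer help shown above. For instance, to also change the retention and the storage directory (illustrative values):

$ export MORE_OPTIONS='-a --backup-method=xtrabackup --backup-command-prefix=true --backup-retention=5 --backup-directory=/data/tungsten_backups'
$ cookbook/install_master_slave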

If you have just installed and need to reconfigure, you can call 'configure_service' to accomplish the task:

$ cookbook/configure_service -U -a --backup-method=xtrabackup --backup-command-prefix=true cookbook

(Where 'cookbook' is the service name). VERY IMPORTANT: configure_service acts on a single host, and by default it acts on the current host, unless you say otherwise. For example:

$ cookbook/configure_service -U --host=host2 -a --backup-method=xtrabackup --backup-command-prefix=true cookbook

You will have to restart the replicator in node 'host2' for the changes to take effect.

$ ssh host2 "cd $TUNGSTEN_BASE/tungsten/ ; ./cookbook/replicator restart"

Using the backup is quite easy. You only need to call 'trepctl', indicate in which host you want to take a backup, and Tungsten will do the rest.

$ cookbook/trepctl -host host3 backup
Backup completed successfully; URI=storage://file-system/store-0000000001.properties

$ cookbook/trepctl -host host2 backup
Backup completed successfully; URI=storage://file-system/store-0000000001.properties

Apparently, we have two backups with the same contents, taken from two different nodes. However, since we have changed the backup method for host2, we will have a small mysqldump file for host3, and a rather larger xtrabackup file for host2. Again, the cookbook has a method that shows the backups that are available in all the nodes:

$ ./cookbook/backups
   backup-agent : (service: cookbook) mysqldump
     backup-dir : (service: cookbook) /home/tungsten/installs/cookbook/backups/cookbook
# [node: host1] 0 files found
# [node: host2] 3 files found
++ /home/tungsten/installs/cookbook/backups/cookbook
total 2.4G
-rw-r--r-- 1 tungsten tungsten   72 Apr  7 21:52 storage.index
-rw-r--r-- 1 tungsten tungsten 2.4G Apr  7 21:52 store-0000000001-full_xtrabackup_2013-04-07_21-50_59.tar
-rw-r--r-- 1 tungsten tungsten  323 Apr  7 21:52 store-0000000001.properties
drwxr-xr-x 2 tungsten tungsten 4.0K Apr  7 21:52 xtrabackup

# [node: host3] 3 files found
++ /home/tungsten/installs/cookbook/backups/cookbook
total 6.3M
-rw-r--r-- 1 tungsten tungsten   72 Apr  7 21:50 storage.index
-rw-r--r-- 1 tungsten tungsten 6.3M Apr  7 21:50 store-0000000001-mysqldump_2013-04-07_21-50_28.sql.gz
-rw-r--r-- 1 tungsten tungsten  315 Apr  7 21:50 store-0000000001.properties

# [node: host4] 0 files found

WARNING: This example was here only to show how to change the backup method. It is NOT recommended to have mixed methods for backups in different nodes. Unless you have a specific need, and understand the consequence of this choice, you should have the same backup method everywhere.

Restore

A backup is only good if you can use it to restore your data. Using the same method shown to take a backup, you can restore it. For this example, let's use mysqldump in all nodes (just because it's quicker), and show the operations for a backup and restore.

First, we take a backup in node 'host3', and then we will restore the data in 'host2'.

$ cookbook/trepctl -host host3 backup
Backup completed successfully; URI=storage://file-system/store-0000000001.properties

$ cookbook/backups
   backup-agent : (service: cookbook) mysqldump
     backup-dir : (service: cookbook) /home/tungsten/installs/cookbook/backups/cookbook
# [node: host1] 0 files found
# [node: host2] 0 files found
# [node: host3] 3 files found
++ /home/tungsten/installs/cookbook/backups/cookbook
total 6.2M
-rw-r--r-- 1 tungsten tungsten   72 Apr  7 22:05 storage.index
-rw-r--r-- 1 tungsten tungsten 6.1M Apr  7 22:05 store-0000000001-mysqldump_2013-04-07_22-05_43.sql.gz
-rw-r--r-- 1 tungsten tungsten  315 Apr  7 22:05 store-0000000001.properties

# [node: host4] 0 files found

Now, we have the backup files in host3, but we have an issue in host2, and we need to perform a restore there. Assuming that the database server is unusable (this is usually the case when we must restore), we have the unpleasant situation where the backups are in one node, and we need to use them in another. In a well-organized environment, we would have shared storage for the backup directory, and thus we could just move ahead and perform our restore. In this case, though, we have no such luxury. Instead, we use yet another feature of the cookbook:

$ cookbook/copy_backup
syntax: copy_backup SERVICE SOURCE_NODE DESTINATION_NODE

$ cookbook/copy_backup cookbook host3 host2
# No message = success

$ cookbook/backups
   backup-agent : (service: cookbook) mysqldump
     backup-dir : (service: cookbook) /home/tungsten/installs/cookbook/backups/cookbook
# [node: host1] 0 files found
# [node: host2] 3 files found
++ /home/tungsten/installs/cookbook/backups/cookbook
total 6.2M
-rw-r--r-- 1 tungsten tungsten   72 Apr  7 22:05 storage.index
-rw-r--r-- 1 tungsten tungsten 6.1M Apr  7 22:05 store-0000000001-mysqldump_2013-04-07_22-05_43.sql.gz
-rw-r--r-- 1 tungsten tungsten  315 Apr  7 22:05 store-0000000001.properties

# [node: host3] 3 files found
++ /home/tungsten/installs/cookbook/backups/cookbook
total 6.2M
-rw-r--r-- 1 tungsten tungsten   72 Apr  7 22:05 storage.index
-rw-r--r-- 1 tungsten tungsten 6.1M Apr  7 22:05 store-0000000001-mysqldump_2013-04-07_22-05_43.sql.gz
-rw-r--r-- 1 tungsten tungsten  315 Apr  7 22:05 store-0000000001.properties

# [node: host4] 0 files found

The 'copy_backup' command has copied the files from one host to another, and now we are ready to perform a restore in host2.

$ cookbook/trepctl -host host2 restore
Operation failed: Restore operation failed: Operation irrelevant in current state

Hmm. Probably not the friendliest of error messages. What this scoundrel means is that it can't perform a restore when the replicator is online.

$ cookbook/trepctl -host host2 offline
$ cookbook/trepctl -host host2 restore
Restore completed successfully

$ cookbook/trepctl -host host2 services
Processing services command...
NAME              VALUE
----              -----
appliedLastSeqno: 17955
appliedLatency  : 0.407
role            : slave
serviceName     : cookbook
serviceType     : local
started         : true
state           : ONLINE
Finished services command...

The restore operation was successful. We could have used xtrabackup just as well. The only difference is that the operation takes much longer.

Parallel replication

Slave lag is a common occurrence in MySQL replication. Most of the time, the reason for this problem is that while the master updates data using many threads concurrently, the slave applies the replication stream using a single thread. Tungsten has a built-in feature that applies changes in parallel when the updates happen in separate schemas. For database servers that are sharded by database, or that serve multi-tenant applications, this is an ideal case: it is likely that the action happens in several schemas at once, and thus Tungsten can parallelize the changes successfully. Notice, however, that if you are running operations in a single schema, parallel replication won't give you any relief. Also, the operations must be truly independent of each other. If a schema has foreign keys that reference another schema, or if a transaction mixes data from two or more schemas, Tungsten will stop parallelizing and fall back to a single thread until the end of the offending operation, resulting in an overall decrease in performance rather than an increase.

To activate parallel replication, you need to enable two options:

  • --channels=N, where you indicate how many parallel threads you want to establish. You should indicate as many channels as the number of schemas you operate on. Some benchmarks will help you find the limits: defining too many channels will eventually exhaust system resources. If the number of schemas is larger than the number of channels, Tungsten will use the channels in a round-robin fashion.
  • --svc-parallelization-type=disk. This option activates a fast queue-creation algorithm that acts directly on the THL files. Contrary to the common perception that in-memory queues are faster, this method is very efficient and less likely to exhaust system resources.

If you want to install all the servers with parallel replication, you can do this:

$ export MORE_OPTIONS='-a --channels=5 --svc-parallelization-type=disk'
$ cookbook/install_master_slave

If you need parallel replication only on one particular slave service, you can enable it there using 'configure_service', the same as we did before for the backup method.

In this example, we're going to use the second method.

$ cookbook/configure_service -U -a --host=host4 --channels=5 --svc-parallelization-type=disk cookbook
WARN  >> host4 >> THL schema tungsten_cookbook already exists at tungsten@host4:3306 (WITH PASSWORD)
NOTE >> host4 >> Deployment finished

$ cookbook/replicator restart
Stopping Tungsten Replicator Service...
Stopped Tungsten Replicator Service.
Starting Tungsten Replicator Service...

Now parallel replication is enabled. But how do we make sure that the service has indeed been enhanced?

The quickest method is to check the Tungsten service schema. Every replicator service creates a database schema named 'tungsten_$SERVICE_NAME', where it stores the current replication status. For example, in our default system, where the only replication service is called 'cookbook', we will find a schema named 'tungsten_cookbook'. The table that we want to inspect is one named 'trep_commit_seqno', where we store the global transaction ID, the schema where the transaction was applied, the data origin, and the time stamps at extraction and apply time. What is relevant in this table is that there will be one record for each channel that we have enabled. Thus, in host2 and host3 there will be only one line, while in host4 we should find 5 lines.
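
For a quick look at a single node, you can also query the table directly. Here it is wrapped in the cookbook's query_node helper, which appears again later in this article; the column names match the tungsten_service output shown further down:

$ cookbook/query_node host4 'select seqno, shard_id, applied_latency from tungsten_cookbook.trep_commit_seqno'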

There is one useful recipe to get this result at once:

$ cookbook/query_all_nodes 'select count(*) from tungsten_cookbook.trep_commit_seqno'
+----------+
| count(*) |
+----------+
|        1 |
+----------+
+----------+
| count(*) |
+----------+
|        1 |
+----------+
+----------+
| count(*) |
+----------+
|        1 |
+----------+
+----------+
| count(*) |
+----------+
|        5 |
+----------+

Right! So we have 5 channels. Before inspecting what is going on in these channels, let's apply some load. You may recall that our load_data script can show you a command that we can customize for our purpose.

$ cookbook/load_data print
/home/tungsten/installs/cookbook/tungsten/bristlecone/bin/concurrent_evaluator.pl \
    --deletes=1 \
    --updates=1 \
    --inserts=3 \
    --test-duration=3600 \
    --host=host1 \
    --port=3306 \
    -u tungsten \
    -p secret  \
    --continuent-root=/home/tungsten/installs/cookbook \
    -d host1 \
    -s /home/tungsten/installs/cookbook/tungsten/load/host1 start

We just copy-and-paste this command, adding --instances=5 at the end, and we get 5 messages indicating that an evaluator was started. Let's see:

$ cookbook/query_node host4 'show schemas'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| host11             |
| host12             |
| host13             |
| host14             |
| host15             |
| mysql              |
| test               |
| tungsten_cookbook  |
+--------------------+

Since we indicated that the database was to be named 'host1' and we have asked for 5 instances, the evaluator has created host11, host12, and so on.

Now that there is some action, we can have a look at our replication. Rather than querying the database directly, asking for the contents of trep_commit_seqno, we use another cookbook recipe:

$ cookbook/tungsten_service all
# node: host1 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 324246 | host1     |               1 | host12   | 2013-04-07 23:02:16 | 2013-04-07 23:02:15 |
+--------+-----------+-----------------+----------+---------------------+---------------------+
# node: host2 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 324383 | host1     |               0 | host13   | 2013-04-07 23:02:16 | 2013-04-07 23:02:16 |
+--------+-----------+-----------------+----------+---------------------+---------------------+
# node: host3 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 324549 | host1     |               0 | host13   | 2013-04-07 23:02:16 | 2013-04-07 23:02:16 |
+--------+-----------+-----------------+----------+---------------------+---------------------+
# node: host4 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 324740 | host1     |               0 | host11   | 2013-04-07 23:02:16 | 2013-04-07 23:02:16 |
| 324736 | host1     |               0 | host12   | 2013-04-07 23:02:16 | 2013-04-07 23:02:16 |
| 324739 | host1     |               0 | host13   | 2013-04-07 23:02:16 | 2013-04-07 23:02:16 |
| 324737 | host1     |               0 | host14   | 2013-04-07 23:02:16 | 2013-04-07 23:02:16 |
| 324735 | host1     |               0 | host15   | 2013-04-07 23:02:16 | 2013-04-07 23:02:16 |
+--------+-----------+-----------------+----------+---------------------+---------------------+

Here you see that host1 has only one channel: it is the master, and it must serialize according to the binary log. Slaves host2 and host3 have only one channel, because we have enabled parallel replication only in host4. And finally we see that in host4 there are 5 channels, each showing a different shard_id (= database schema), with its own transaction ID being applied. This shows that replication is working.

Tungsten Replicator has, however, several tools that help monitor parallel replication:

$ cookbook/trepctl -host host4 status -name stores
Processing status command (stores)...
NAME                      VALUE
----                      -----
activeSeqno             : 475395
doChecksum              : false
flushIntervalMillis     : 0
fsyncOnFlush            : false
logConnectionTimeout    : 28800
logDir                  : /home/tungsten/installs/cookbook/thl/cookbook
logFileRetainMillis     : 604800000
logFileSize             : 100000000
maximumStoredSeqNo      : 475449
minimumStoredSeqNo      : 0
name                    : thl
readOnly                : false
storeClass              : com.continuent.tungsten.replicator.thl.THL
timeoutMillis           : 2147483647
NAME                      VALUE
----                      -----
criticalPartition       : -1
discardCount            : 0
estimatedOfflineInterval: 0.0
eventCount              : 457459
headSeqno               : 475415
intervalGuard           : AtomicIntervalGuard (array is empty)
maxDelayInterval        : 60
maxOfflineInterval      : 5
maxSize                 : 10
name                    : parallel-queue
queues                  : 5
serializationCount      : 0
serialized              : false
stopRequested           : false
store.0                 : THLParallelReadTask task_id=0 thread_name=store-thl-0 hi_seqno=475415 lo_seqno=17957 read=457459 accepted=93357 discarded=364102 events=0
store.1                 : THLParallelReadTask task_id=1 thread_name=store-thl-1 hi_seqno=475415 lo_seqno=17957 read=457459 accepted=92567 discarded=364892 events=0
store.2                 : THLParallelReadTask task_id=2 thread_name=store-thl-2 hi_seqno=475415 lo_seqno=17957 read=457459 accepted=91197 discarded=366262 events=0
store.3                 : THLParallelReadTask task_id=3 thread_name=store-thl-3 hi_seqno=475415 lo_seqno=17957 read=457459 accepted=90492 discarded=366967 events=0
store.4                 : THLParallelReadTask task_id=4 thread_name=store-thl-4 hi_seqno=475415 lo_seqno=17957 read=457459 accepted=89846 discarded=367613 events=0
storeClass              : com.continuent.tungsten.replicator.thl.THLParallelQueue
syncInterval            : 10000
Finished status command (stores)...

This command shows the status of parallel replication in each channel. Notable information in this screen:

  • eventCount is the number of transactions being processed
  • serializationCount:0 means that all events have been parallelized, and there was no need to serialize any.
  • 'read' ... 'accepted' ... 'discarded' are the operations in the disk queue. Each channel parses all the events, and queues only the ones that belong to its shard.

$ cookbook/trepctl -host host3 status -name shards
Processing status command (shards)...
 ...
NAME                VALUE
----                -----
appliedLastEventId: mysql-bin.000006:0000000169567337;0
appliedLastSeqno  : 660707
appliedLatency    : 1.314
eventCount        : 130325
shardId           : host11
stage             : q-to-dbms
NAME                VALUE
----                -----
appliedLastEventId: mysql-bin.000006:0000000169566006;0
appliedLastSeqno  : 660702
appliedLatency    : 1.312
eventCount        : 129747
shardId           : host12
stage             : q-to-dbms
 ...

This command (only a portion is reported here) displays the status of each shard, showing for each one the last applied event, sequence number, latency, and event count.

There should be much more to mention about the monitoring tools, but for now I want just to mention a last important point. When the replicator goes offline, parallel replication stops, and the replication operations are consolidated into a single thread. This makes sure that replication can later resume using a single thread, or it can be safely handed over to native MySQL replication. This behavior also makes sure that a slave can be safely promoted to master. A switch operation requires that the slave service be offline before being reconfigured to become a master. When the replicator goes offline, the N channels become 1.

$ ./cookbook/trepctl offline
$ cookbook/tungsten_service all
# node: host1 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 769652 | host1     |               0 | host12   | 2013-04-07 23:18:07 | 2013-04-07 23:18:07 |
+--------+-----------+-----------------+----------+---------------------+---------------------+
# node: host2 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 769699 | host1     |               0 | host13   | 2013-04-07 23:18:08 | 2013-04-07 23:18:08 |
+--------+-----------+-----------------+----------+---------------------+---------------------+
# node: host3 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 769866 | host1     |               0 | host15   | 2013-04-07 23:18:08 | 2013-04-07 23:18:08 |
+--------+-----------+-----------------+----------+---------------------+---------------------+
# node: host4 - service: cookbook
+--------+-----------+-----------------+----------+---------------------+---------------------+
| seqno  | source_id | applied_latency | shard_id | update_timestamp    | extract_timestamp   |
+--------+-----------+-----------------+----------+---------------------+---------------------+
| 767064 | host1     |               0 | host15   | 2013-04-07 23:18:01 | 2013-04-07 23:18:01 |
+--------+-----------+-----------------+----------+---------------------+---------------------+

If we put it back online, we see the channels expanding again.
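
A quick way to verify it, reusing commands already seen in this article:

$ cookbook/trepctl -host host4 online
$ cookbook/query_node host4 'select count(*) from tungsten_cookbook.trep_commit_seqno'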


Further info:

Installing and Administering Tungsten Replicator - Part 1 - basics

Intro

Tungsten Replicator is an open source tool that provides high-performance replication across database servers. It was designed to replace MySQL replication, although it also supports replication from and to Oracle and other systems. In this article, we will only cover MySQL replication, both simple and multi-master.

Preparing for installation

To follow the material in this article, you will need a recent build of Tungsten Replicator. You can get the latest ones from http://bit.ly/tr20_builds. In this article, we are using build 2.0.8-167.

Before starting any installation, you should make sure that you have satisfied all the prerequisites. Don't underestimate the list. Any missing items will likely result in installation errors.

If you are using Amazon EC2 servers, this page provides a script that makes the prerequisites an almost fully automated procedure.

To install any of the topologies supported by Tungsten, you first need to extract the software, define your nodes, and, if needed, change the default options.

  1. Download the software from http://bit.ly/tr20_builds
  2. Expand the tarball (tar -xzf tungsten-replicator-2.0.8-167.tar.gz)
  3. Change directory to the extracted path (cd tungsten-replicator-2.0.8-167)
  4. Define the VERBOSE user variable (it will show more detail during the operations)
  5. Edit the configuration files COMMON_NODES.sh and USER_VALUES.sh (adjust the values that don't match your environment.)
$ tar -xzf tungsten-replicator-2.0.8-167.tar.gz
$ cd tungsten-replicator-2.0.8-167
$ export VERBOSE=1
$ export PATH=$PWD/cookbook:$PATH

$ cat cookbook/COMMON_NODES.sh
#!/bin/bash
# (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
# Version 1.0.5 - 2013-04-03

export NODE1=host1
export NODE2=host2
export NODE3=host3
export NODE4=host4

$ cat cookbook/USER_VALUES.sh
#!/bin/bash
# (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
# Version 1.0.5 - 2013-04-03

# User defined values for the cluster to be installed.

# Where to install Tungsten Replicator
export TUNGSTEN_BASE=$HOME/installs/cookbook

# Directory containing the database binary logs
export BINLOG_DIRECTORY=/var/lib/mysql

# Path to the options file
export MY_CNF=/etc/my.cnf

# Database credentials
export DATABASE_USER=tungsten
export DATABASE_PASSWORD=secret
export DATABASE_PORT=3306

# Name of the service to install
export TUNGSTEN_SERVICE=cookbook

Pay attention to the paths. TUNGSTEN_BASE is where the binaries will be installed. You must make sure that:

  • The path is writable by the current user, in all nodes;
  • There is nothing in that path that may clash with the software to be installed. It should be a dedicated directory. Do NOT use your $HOME for this purpose; use a subdirectory, or a path under /usr/local or /opt/.
  • The path must have enough storage to hold Tungsten Transaction History Logs (THL). They will occupy roughly twice as much as your binary logs.
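
A quick sanity check before installing is to confirm that the target directory exists, is writable, and is empty on every node. A rough sketch (my own addition, not part of the cookbook; it presumes passwordless SSH and identical users and paths on all nodes):

$ . cookbook/COMMON_NODES.sh ; . cookbook/USER_VALUES.sh
$ for node in $NODE1 $NODE2 $NODE3 $NODE4; do echo "# $node"; ssh $node "mkdir -p $TUNGSTEN_BASE && ls -A $TUNGSTEN_BASE"; done

An empty listing for a node means that the directory is there, dedicated, and writable.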

Validating your nodes

You may think that you have followed the instructions for the prerequisites, but sometimes humans make mistakes. To make sure that your cluster can run Tungsten, you can run the script validate_cluster. This is an operation that performs all the installation checks in all nodes, without actually installing anything.

$ cookbook/validate_cluster
# Performing validation check ...
## 1 (host: host4)
./tools/tungsten-installer \
    --master-slave \
    --master-host=host1 \
    --datasource-user=tungsten \
    --datasource-password=secret \
    --datasource-port=3306 \
    --service-name=cookbook \
    --home-directory=/home/tungsten/installs/cookbook \
    --cluster-hosts=host1,host2,host3,host4 \
    --datasource-mysql-conf=/etc/my.cnf \
    --datasource-log-directory=/var/lib/mysql \
    --rmi-port=10000 \
    --thl-port=2112 \
    --validate-only -a \
    --info \
    --start
INFO  >> Start: Check that the master-host is part of the config
INFO  >> Finish: Check that the master-host is part of the config
#####################################################################
# Tungsten Community Configuration Procedure
#####################################################################
NOTE:  To terminate configuration press ^C followed by ENTER
...
...
...
( LOTS OF LINES FOLLOW)

If there is any error in your prerequisites, you will get an error message, or possibly more than one. If the messages provided by this command are not enough to understand what is going on, you can ask for yet more detail, using:

$ VERBOSE=2 ./cookbook/validate_cluster

DRY-RUN installation

Should you need to install on a set of nodes where you can't allow SSH connections across nodes, you may use the DRYRUN variable.

$ export DRYRUN=1

When this variable is set, the installation commands will not install anything, but will only show you all the commands that you should run, in the right sequence.

For example, if you want to validate the cluster without SSH communication between nodes, a DRYRUN command will give you the list of instructions to run and tell you on which hosts to run them:

$ DRYRUN=1 ./cookbook/validate_cluster
# Performing validation check ...
...

Using the instructions so received, you can copy the software to each node, and run the appropriate command in each one.

The same goes for every installation command. Should you need to install a star topology node by node with custom options, just run:

$ DRYRUN=1 ./cookbook/install_star

When you don't need the DRYRUN command anymore, remove the variable:

$ unset DRYRUN

Installing a master-slave topology

After the validation, you can launch your installation. If the topology is master/slave, the defaults are stored in cookbook/NODES_MASTER_SLAVE.sh

$ cat cookbook/NODES_MASTER_SLAVE.sh
#!/bin/bash
# (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
# Version 1.0.5 - 2013-04-03

CURDIR=`dirname $0`
if [ -f $CURDIR/COMMON_NODES.sh ]
then
    . $CURDIR/COMMON_NODES.sh
else
    export NODE1=
    export NODE2=
    export NODE3=
    export NODE4=
fi

export ALL_NODES=($NODE1 $NODE2 $NODE3 $NODE4)
# indicate which servers will be masters, and which ones will have a slave service
# in case of all-masters topologies, these two arrays will be the same as $ALL_NODES
# These values are used for automated testing

#for master/slave replication
export MASTERS=($NODE1)
export SLAVES=($NODE2 $NODE3 $NODE4)

The only variables that should concern you here are MASTERS and SLAVES. They refer to the nodes defined in COMMON_NODES.sh. If your master is NODE1, there is no need to change anything. If your master is, say, NODE2, then change the variables as follows:

export MASTERS=($NODE2)
export SLAVES=($NODE1 $NODE3 $NODE4)

Make sure that you have adjusted both master and slave definitions.

$ cookbook/install_master_slave
## 1 (host: host4)
./tools/tungsten-installer \
    --master-slave \
    --master-host=host1 \
    --datasource-user=tungsten \
    --datasource-password=secret \
    --datasource-port=3306 \
    --service-name=cookbook \
    --home-directory=/home/tungsten/installs/cookbook \
    --cluster-hosts=host1,host2,host3,host4 \
    --datasource-mysql-conf=/etc/my.cnf \
    --datasource-log-directory=/var/lib/mysql \
    --rmi-port=10000 \
    --thl-port=2112 \
    --start
... # A few minutes later ...
--------------------------------------------------------------------------------------
Topology: 'MASTER_SLAVE'
--------------------------------------------------------------------------------------
# node host1
cookbook  [master]  seqno:          1  - latency:   0.631 - ONLINE

# node host2
cookbook  [slave]   seqno:          1  - latency:   0.607 - ONLINE

# node host3
cookbook  [slave]   seqno:          1  - latency:   0.746 - ONLINE

# node host4
cookbook  [slave]   seqno:          1  - latency:   0.640 - ONLINE

Deployment completed
Topology      :'master_slave'
Tungsten path : /home/tungsten/installs/cookbook
Nodes         : (host1 host2 host3 host4)

After installing all the nodes, the cookbook script displays the cluster status. In this list, 'cookbook' is the name of the replication service, as defined in USER_VALUES.sh. You can change it before installing. Any name will do. Next to it you see the role (master or slave), then the 'seqno', which is the global transaction ID of your database events. Finally, the 'latency' is the difference, in seconds, between the time your transaction was recorded in the master binary logs and the time it was applied to the slave.

You can ask for such a status at any time, by calling:

$ cookbook/show_cluster
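
If you want a continuously refreshing view, for example while a load test is running, a simple approach (assuming the standard 'watch' utility is available on your system) is:

$ watch -n 5 cookbook/show_cluster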

Simple replication administration

A cluster status doesn't tell you if replication is working. You may check if this is true by running:

$ cookbook/test_cluster
# --------------------------------------------------------------------------------------
# Testing cluster with installed topology 'MASTER_SLAVE'
# --------------------------------------------------------------------------------------
ok - Master host1 has at least 1 master services
# slave: host2
ok - Tables from master #1
ok - Views from master #1
ok - Records from master #1
ok - Slave host2 has at least 1 services
# slave: host3
ok - Tables from master #1
ok - Views from master #1
ok - Records from master #1
ok - Slave host3 has at least 1 services
# slave: host4
ok - Tables from master #1
ok - Views from master #1
ok - Records from master #1
ok - Slave host4 has at least 1 services
1..13

This command creates a table and a view in each master in your topology (in this case, a master/slave topology has only one master), inserts a record using the view, and then checks that each slave has replicated what was inserted. The output changes quite a lot when using a multi-master topology.

Astute readers will recognize that the output produced here complies with the Test Anything Protocol (TAP). If you have the 'prove' tool installed on your server, you may try it:

$ prove cookbook/test_cluster
cookbook/test_cluster...ok
All tests successful.
Files=1, Tests=13,  4 wallclock secs ( 3.17 cusr +  0.26 csys =  3.43 CPU)

Replication tools

The cluster status shown above (cookbook/show_cluster) uses the output of the Tungsten built-in tool trepctl to display a simplified status.

The tool is available inside the installation directory. If you have used the defaults, that is $HOME/installs/cookbook, and the tools are in $HOME/installs/cookbook/tungsten/tungsten-replicator/bin/.

This is not easy to remember, and even if you can remember it correctly, it requires a lot of typing. The cookbook provides an easy shortcut: cookbook/trepctl. For example:

$ cookbook/trepctl services
Processing services command...
NAME              VALUE
----              -----
appliedLastSeqno: 17
appliedLatency  : 0.773
role            : slave
serviceName     : cookbook
serviceType     : local
started         : true
state           : ONLINE
Finished services command...

Or, if you want the simplified output:

$ cookbook/trepctl services | cookbook/simple_services
cookbook  [slave]   seqno:         17  - latency:   0.773 - ONLINE

To administer the system properly, you need to know the tools, some paths to the logs, and the configuration files, which are somewhat elusive. Again, the cookbook comes to the rescue:

$ cookbook/paths
     replicator : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/bin/replicator
        trepctl : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/bin/trepctl
            thl : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/bin/thl
            log : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/log/trepsvc.log
           conf : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/conf/
        thl-dir : (service: cookbook) /home/tungsten/installs/cookbook/thl/cookbook
     backup-dir : (service: cookbook) /home/tungsten/installs/cookbook/backups/cookbook
   backup-agent : (service: cookbook) mysqldump

This command tells you the path to the three main tools:

  • trepctl, the Tungsten Replicator Control
  • thl, the Transaction History Log manager
  • replicator, the launcher for the replicator daemon.

You also get the path to the most common places you may need to access during your administrative tasks.

Similarly, there are shortcuts to perform common tasks:

  • cookbook/replicator: Shortcut to the 'replicator' command
  • cookbook/trepctl: Shortcut to the 'trepctl' command
  • cookbook/thl: Shortcut to the 'thl' command
  • cookbook/conf: Shows the configuration files using 'less'
  • cookbook/show_conf: Same as 'conf'
  • cookbook/edit_conf: Edits the configuration files using 'vim'
  • cookbook/vimconf: Same as 'edit_conf'
  • cookbook/emacsconf: Edits the configuration files using 'emacs'
  • cookbook/log: Shows the replicator log using 'less'
  • cookbook/show_log: Same as 'log'
  • cookbook/vilog: Edits the replicator log using 'vi'
  • cookbook/vimlog: Edits the replicator log using 'vim'
  • cookbook/emacslog: Edits the replicator log using 'emacs'
  • cookbook/heartbeat: Performs a heartbeat in each master
  • cookbook/paths: Shows the path to all important tools and services
  • cookbook/services: Performs 'trepctl services'
  • cookbook/backups: Shows which backups were taken in all nodes
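
Each of these shortcuts is essentially a thin wrapper that resolves the installation directory and forwards your arguments to the corresponding binary. A minimal sketch of such a wrapper (an illustration of the idea, not the actual cookbook code) could look like this:

#!/bin/bash
# hypothetical trepctl shortcut: resolve the installed binary and forward all arguments to it
. $(dirname $0)/USER_VALUES.sh      # provides TUNGSTEN_BASE
TREPCTL=$TUNGSTEN_BASE/tungsten/tungsten-replicator/bin/trepctl
[ -x "$TREPCTL" ] || { echo "trepctl not found under $TUNGSTEN_BASE" >&2; exit 1; }
exec "$TREPCTL" "$@"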

You can get all the above commands, and all the others included in the cookbook, by calling:

$ cookbook/help

Uninstalling Tungsten

The cookbook makes it easy to install a replication cluster, and makes it easy to remove it as well.

If you look at the end of cookbook/USER_VALUES.sh, you will see these variables:

$ tail -n 12 cookbook/USER_VALUES.sh
##############################################################################
# Variables used when removing the cluster
# Each variable defines an action during the cleanup
##############################################################################
[ -z "$STOP_REPLICATORS" ]            && export STOP_REPLICATORS=1
[ -z "$REMOVE_TUNGSTEN_BASE" ]        && export REMOVE_TUNGSTEN_BASE=1
[ -z "$REMOVE_SERVICE_SCHEMA" ]       && export REMOVE_SERVICE_SCHEMA=1
[ -z "$REMOVE_TEST_SCHEMAS" ]         && export REMOVE_TEST_SCHEMAS=1
[ -z "$REMOVE_DATABASE_CONTENTS" ]    && export REMOVE_DATABASE_CONTENTS=0
[ -z "$CLEAN_NODE_DATABASE_SERVER" ]  && export CLEAN_NODE_DATABASE_SERVER=1
##############################################################################

The names are self-explanatory. These variables are used when you call the clear_cluster command. Then, their meaning becomes even clearer:

$ cookbook/clear_cluster
--------------------------------------------------------------------------------------
Clearing up cluster with installed topology 'MASTER_SLAVE'
--------------------------------------------------------------------------------------
!!! WARNING !!!
--------------------------------------------------------------------------------------
'clear-cluster' is a potentially damaging operation.
This command will do all the following:
* Stop the replication software in all servers. [$STOP_REPLICATORS]
* REMOVE ALL THE CONTENTS from /home/tungsten/installs/cookbook/.[$REMOVE_TUNGSTEN_BASE]
* REMOVE the tungsten_<service_name> schemas in all nodes (host1 host2 host3 host4) [$REMOVE_SERVICE_SCHEMA]
* REMOVE the schemas created for testing (test, evaluator) in all nodes (host1 host2 host3 host4)  [$REMOVE_TEST_SCHEMAS]
* Create the test server anew;                [$CLEAN_NODE_DATABASE_SERVER]
* Unset the read_only variable;               [$CLEAN_NODE_DATABASE_SERVER]
* Set the binlog format to MIXED;             [$CLEAN_NODE_DATABASE_SERVER]
* Reset the master (removes all binary logs); [$CLEAN_NODE_DATABASE_SERVER]
If this is what you want, either set the variable I_WANT_TO_UNINSTALL
or answer 'y' to the question below
You may also set the variables in brackets to fine tune the execution.
Alternatively, have a look at cookbook/clear_cluster and customize it to your needs.
--------------------------------------------------------------------------------------
Do you wish to uninstall this cluster? [y/n]

As you can see, for each action there is a corresponding variable. By default, all variables are active, except 'REMOVE_DATABASE_CONTENTS'. Setting or unsetting these variables will determine how much of your installation you want to undo.
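
For example, a non-interactive cleanup that keeps the test schemas and leaves the database servers untouched could look like this (a sketch, assuming that any non-empty value satisfies the I_WANT_TO_UNINSTALL check mentioned in the prompt above):

$ export I_WANT_TO_UNINSTALL=1
$ REMOVE_TEST_SCHEMAS=0 CLEAN_NODE_DATABASE_SERVER=0 cookbook/clear_cluster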

Getting replication status

Once you have replication up and running, you need to know what's going on at a glance. We have seen in the previous sections that we can call trepctl services to get an overview of the replication process. Using the same tool, we can also get more detailed information:

$ cookbook/trepctl status | nl
 1 Processing status command...
 2 NAME                     VALUE
 3 ----                     -----
 4 appliedLastEventId     : mysql-bin.000006:0000000000003163;0
 5 appliedLastSeqno       : 17
 6 appliedLatency         : 0.773
 7 channels               : 1
 8 clusterName            : default
 9 currentEventId         : NONE
10 currentTimeMillis      : 1365193975129
11 dataServerHost         : host4
12 extensions             :
13 latestEpochNumber      : 0
14 masterConnectUri       : thl://host1:2112/
15 masterListenUri        : thl://host4:2112/
16 maximumStoredSeqNo     : 17
17 minimumStoredSeqNo     : 0
18 offlineRequests        : NONE
19 pendingError           : NONE
20 pendingErrorCode       : NONE
21 pendingErrorEventId    : NONE
22 pendingErrorSeqno      : -1
23 pendingExceptionMessage: NONE
24 pipelineSource         : thl://host1:2112/
25 relativeLatency        : 22729.129
26 resourcePrecedence     : 99
27 rmiPort                : 10000
28 role                   : slave
29 seqnoType              : java.lang.Long
30 serviceName            : cookbook
31 serviceType            : local
32 simpleServiceName      : cookbook
33 siteName               : default
34 sourceId               : host4
35 state                  : ONLINE
36 timeInStateSeconds     : 23640.125
37 uptimeSeconds          : 23640.723
38 version                : Tungsten Replicator 2.0.8 build 136
39 Finished status command...

With the line number as a reference, we can describe quite a bit of useful information:

  • appliedLastEventId: (4) This is the event as found in the source database master. Since we are replicating from a MySQL server (don't forget that Tungsten can replicate from and to several heterogeneous servers) this ID is made of the binary log file name (mysql-bin.000006) and the binary log position (0000000000003163). Most of the time, you don't really need this information, as everything in Tungsten uses the Global Transaction ID (see next item)
  • appliedLastSeqno: (5) This is the Global Transaction Identifier for the current transaction.
  • appliedLatency: (6) This is the time difference, in seconds, between the moment when the transaction was written to the binary log in the master and the moment when it was applied in the slave. Notice that, if the server system times are not synchronized, you may have greater differences than expected. Also, if you keep a slave offline and re-connect it later, this value will increase accordingly.
  • channels (7) is the number of threads used for replication. By default it is 1. When using parallel replication, it increases.
  • dataServerHost: (11) The server for which we are showing the status.
  • masterConnectUri (14) is the address (hostname or IP + port ) of the current master for this service.
  • masterListenUri (15) is the address that will be used by the current server if it becomes a master.
  • pendingErrorSeqno: (22) When any of the error lines (19 to 21) are populated, this line shows the seqno (Global Transaction ID) of the event that is causing trouble. This piece of information is vital to find what is blocking replication. (We will see an example later in this article.)
  • role: (28) What is the role of this service. It can be 'master' or 'slave'. More roles are possible if the replicator is embedded in a more complex system.
  • serviceName: (30) The identification of the replication service. Not very important when using a master/slave topology, but vital when deploying multi-master services.
  • state: (35) It's what the replicator is doing. If "ONLINE," all is well. "OFFLINE:NORMAL" means that the service was stopped manually, while "OFFLINE:ERROR" means that something is wrong. If you see "GOING-ONLINE:SYNCHRONIZING," it means either that there is a connection issue between master and slave, or that the master is offline.

This command is the first step whenever you are troubleshooting a problem. If something goes wrong, chances are that 'cookbook/trepctl status' will tell you what is going on. Notice, though, that if you are using a multi-master topology, you will need to specify a service:

$ cookbook/trepctl -service somename status

It's quite important to understand that trepctl can give you the status of any node in the replication cluster. You don't need to execute the command in another node. All you need to do is indicate to trepctl for which host it should display the status.

$ cookbook/trepctl -host host1 -service somename status

'trepctl' has quite a lot of options, as you may discover if you run 'trepctl help'. We will see some of them in this series of articles.
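
Since the status output is plain NAME/VALUE text, it also lends itself to quick ad-hoc checks. For instance, a health check of state, latency, and sequence number on every node could look like this (a sketch, assuming the four hosts used in this article):

$ for host in host1 host2 host3 host4; do echo "# $host"; cookbook/trepctl -host $host status | grep -E 'state|appliedLatency|appliedLastSeqno'; done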

Logs

The second step of troubleshooting, when 'trepctl status' was not enough to nail the problem, is looking at the logs.

Here, the problem you will face is "where the heck do I find the logs?"

As we have seen above in this article, the cookbook can show you the paths:

$ cookbook/paths
     replicator : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/bin/replicator
        trepctl : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/bin/trepctl
            thl : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/bin/thl
            log : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/log/trepsvc.log
           conf : /home/tungsten/installs/cookbook/tungsten/tungsten-replicator/conf/
        thl-dir : (service: cookbook) /home/tungsten/installs/cookbook/thl/cookbook
     backup-dir : (service: cookbook) /home/tungsten/installs/cookbook/backups/cookbook
   backup-agent : (service: cookbook) mysqldump

However, there is a simpler way. You can use one of the shortcuts to access the logs. For example, cookbook/log will show the log using 'less,' the well known file viewer. Should you want to use another tool for this task, there is a wide choice:

  • cookbook/show_log: Same as 'log'
  • cookbook/vilog: Edits the replicator log using 'vi'
  • cookbook/vimlog: Edits the replicator log using 'vim'
  • cookbook/emacslog: Edits the replicator log using 'emacs'

Inside the log, when you are troubleshooting, you should first try to find the same message displayed by 'trepctl status.' Around that point, you will find one or more Java stack traces, which contain information useful for the developers (file names and line numbers) and information useful for you (error messages as reported by the database server, the operating system, or third-party tools, which may help identify the problem).
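
A quick way to jump to the relevant part of the log, instead of paging through it, is to search for the error markers directly, using the log path reported by cookbook/paths (here assuming the default installation path used in this article):

$ grep -n ERROR $HOME/installs/cookbook/tungsten/tungsten-replicator/log/trepsvc.log | tail -n 5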

Reading events

Most often, when a problem has been identified, you need to know which event is causing it. Usually, a look at the SQL, combined with the error message, will give you enough information to fix the problem.

The replication events (or transactions) are stored in several Transaction History Log (THL) files. These files contain the events, as taken from the binary logs, plus some metadata. Unlike the binary logs, though, the THL file names are totally unimportant. Since transactions are identified by number, you don't need to know their location.

To display a THL event, you use a tool named, most aptly, 'thl.' For example, after we run this query:

mysql --host=host1 test -e "insert into v1 values (2,'inserted by node #1')"

We can check the status with

$ cookbook/trepctl services
Processing services command...
NAME              VALUE
----              -----
appliedLastSeqno: 24
appliedLatency  : 0.563
role            : slave
serviceName     : cookbook
serviceType     : local
started         : true
state           : ONLINE
Finished services command...

And then retrieve the event using the thl.

$ cookbook/thl list -seqno 24
SEQ# = 24 / FRAG# = 0 (last frag)
- TIME = 2013-04-05 23:32:18.0
- EPOCH# = 18
- EVENTID = mysql-bin.000006:0000000000004417;0
- SOURCEID = host1
- METADATA = [mysql_server_id=10;dbms_type=mysql;service=cookbook;shard=test]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- OPTIONS = [##charset = ISO8859_1, autocommit = 1, sql_auto_is_null = 1, foreign_key_checks = 1, unique_checks = 1, sql_mode = '', character_set_client = 8, collation_connection = 8, collation_server = 8]
- SCHEMA = test
- SQL(0) = insert into v1 values (2,'inserted by node #1') /* ___SERVICE___ = [cookbook] */

There is much metadata in this event, most of which is easily recognizable by any seasoned DBA. Some things that may be worth pointing out are:

  • SEQ#: The sequence number, or seqno, or Global Transaction ID;
  • EVENTID: We have seen this when we described 'trepctl status';
  • SOURCEID: the server where the event was generated;
  • service: The service where the event was generated. This also tells us that the master role for this service is in host1.
  • shard: it is how Tungsten defines shards for parallel replication and conflict resolution. By default, a shard matches a database schema, although it can be defined otherwise.
  • SQL: this is the statement being executed. When the transaction contains more than one statement, you will see SQL(1), SQL(2), and so on. If the event was row-based, you will see a list of columns and their contents instead of a SQL statement.
  • ___SERVICE___ = [cookbook] This comment is added by the replicator to make it recognizable even after it goes to the binary log and gets replicated to a further level. This is not the only method used to mark events. The service identification can go in other places, such as the "comment" field of a "CREATE TABLE" statement.

Skipping transactions

One of the most common replication problems is a duplicate key violation, which in turn often occurs when someone erroneously writes to a slave instead of a master. When such an error happens, you may find something like this:

$ cookbook/trepctl status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : NONE
appliedLastSeqno       : -1
appliedLatency         : -1.0
channels               : -1
clusterName            : default
currentEventId         : NONE
currentTimeMillis      : 1365199283287
dataServerHost         : host4
extensions             :
latestEpochNumber      : -1
masterConnectUri       : thl://host1:2112/
masterListenUri        : thl://host4:2112/
maximumStoredSeqNo     : -1
minimumStoredSeqNo     : -1
offlineRequests        : NONE
pendingError           : Event application failed: seqno=25 fragno=0 message=java.sql.SQLException:
Statement failed on slave but succeeded on master
pendingErrorCode       : NONE
pendingErrorEventId    : mysql-bin.000006:0000000000004622;0
pendingErrorSeqno      : 25
pendingExceptionMessage: java.sql.SQLException: Statement failed on slave but succeeded on master
                         insert into v1 values (3,'inserted by node #1') /* ___SERVICE___ = [cookbook] */
pipelineSource         : UNKNOWN
relativeLatency        : -1.0
resourcePrecedence     : 99
rmiPort                : 10000
role                   : slave
seqnoType              : java.lang.Long
serviceName            : cookbook
serviceType            : unknown
simpleServiceName      : cookbook
siteName               : default
sourceId               : host4
state                  : OFFLINE:ERROR
timeInStateSeconds     : 8.749
uptimeSeconds          : 28948.881
version                : Tungsten Replicator 2.0.8 build 136

Looking at the logs, we may see something like this:

INFO   | jvm 1    | 2013/04/06 00:01:14 | 2013-04-06 00:01:14,529 [cookbook - q-to-dbms-0] ERROR pipeline.SingleThreadStageTask
Event application failed: seqno=25 fragno=0 message=java.sql.SQLException: Statement failed on slave but succeeded on master
INFO   | jvm 1    | 2013/04/06 00:01:14 | com.continuent.tungsten.replicator.applier.ApplierException: java.sql.SQLException: Statement failed on slave but succeeded on master
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at com.continuent.tungsten.replicator.applier.MySQLDrizzleApplier.applyStatementData(MySQLDrizzleApplier.java:183)
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at com.continuent.tungsten.replicator.applier.JdbcApplier.apply(JdbcApplier.java:1321)
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at com.continuent.tungsten.replicator.applier.ApplierWrapper.apply(ApplierWrapper.java:101)
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at com.continuent.tungsten.replicator.pipeline.SingleThreadStageTask.apply(SingleThreadStageTask.java:639)
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at com.continuent.tungsten.replicator.pipeline.SingleThreadStageTask.runTask(SingleThreadStageTask.java:468)
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at com.continuent.tungsten.replicator.pipeline.SingleThreadStageTask.run(SingleThreadStageTask.java:167)
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at java.lang.Thread.run(Unknown Source)
INFO   | jvm 1    | 2013/04/06 00:01:14 | Caused by: java.sql.SQLException: Statement failed on slave but succeeded on master
INFO   | jvm 1    | 2013/04/06 00:01:14 |       at com.continuent.tungsten.replicator.applier.MySQLDrizzleApplier.applyStatementData(MySQLDrizzleApplier.java:140)
INFO   | jvm 1    | 2013/04/06 00:01:14 |       ... 6 more
INFO   | jvm 1    | 2013/04/06 00:01:14 | Caused by: java.sql.SQLIntegrityConstraintViolationException: Duplicate entry '3' for key 'PRIMARY'
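
Before deciding how to fix it, it is worth comparing the contents of the affected table on every node. A quick sketch, using the test view created earlier (the exact table and query will depend on your schema):

$ for host in host1 host2 host3 host4; do echo "# $host"; mysql -h $host test -e 'select * from v1'; done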

After inspecting the tables in all nodes, we find that host4 already contains a record with primary key 3, and that it has the same contents as the record coming from host1. In this case, the easiest way of fixing the error is to tell the replicator to skip this event.

$ cookbook/trepctl online -skip-seqno 25

After this, the replicator goes online, and, provided that there are no other errors after the first one, will continue replicating.
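
To confirm that the service has indeed recovered and is applying events again, a quick look at the state and sequence number is usually enough:

$ cookbook/trepctl status | grep -E 'state|appliedLastSeqno'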

Taking over existing Replication

In the first sections of this article, we saw how to install Tungsten Replicator as the primary source of replication. We assumed that the servers had the same contents, and that no replication was already going on. Here we assume, instead, that replication is already in place, and we show the steps needed to take it over.

To simulate the initial status, we're going to clear the cluster installed before, install native MySQL replication instead, and take over from there.

There is a recipe to install standard replication, just for this purpose.

$ cookbook/install_standard_mysql_replication
Starting slave on host2 Master File = mysql-bin.000005, Master Position = 106
Starting slave on host3 Master File = mysql-bin.000005, Master Position = 106
Starting slave on host4 Master File = mysql-bin.000005, Master Position = 106
# master  host1
mysql-bin.000005    554
#slave host2
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 554
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 554
replication test: ok

#slave host3
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 554
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 554
replication test: ok

#slave host4
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 554
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 554
replication test: ok

The installation also provides a simple test that checks whether replication is running, by creating a table named 't1' and retrieving it in the slaves. As you can see, after the test, the slaves are at position 554 of binary log # 000005. If we create another table, we can check that it is replicated, and take note again of the binlog position.

$ mysql -h host1 -e 'create table test.test_standard(i int)'

$ for host in host1 host2 host3 host4; do mysql -h $host -e 'show tables from test' ; done
+----------------+
| Tables_in_test |
+----------------+
| t1             |
| test_standard  |
+----------------+
+----------------+
| Tables_in_test |
+----------------+
| t1             |
| test_standard  |
+----------------+
+----------------+
| Tables_in_test |
+----------------+
| t1             |
| test_standard  |
+----------------+
+----------------+
| Tables_in_test |
+----------------+
| t1             |
| test_standard  |
+----------------+
$ for host in  host2 host3 host4; do mysql -h $host -e 'show slave status\G' | grep 'Master_Log_File\|Read_Master_Log_Pos\|Running' ; done
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 651
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 651
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 651
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

So, replication is running. We can try our take-over script, which the cookbook provides:

$ cookbook/take-over
./tools/tungsten-installer
    --master-slave
    --master-host=host1
    --datasource-user=tungsten
    --datasource-password=secret
    --datasource-port=3306
    --service-name=cookbook
    --home-directory=/home/tungsten/installs/cookbook
    --cluster-hosts=host1,host2,host3,host4
    --datasource-mysql-conf=/etc/my.cnf
    --datasource-log-directory=/var/lib/mysql
    --rmi-port=10000
    --thl-port=2112 -a
    --auto-enable=false
    --start

$TUNGSTEN_BASE/tungsten/tungsten-replicator/bin/trepctl -port 10000 -host host1 online -from-event mysql-bin.000005:651
$TUNGSTEN_BASE/tungsten/tungsten-replicator/bin/trepctl -port 10000 -host host2 online
$TUNGSTEN_BASE/tungsten/tungsten-replicator/bin/trepctl -port 10000 -host host3 online
$TUNGSTEN_BASE/tungsten/tungsten-replicator/bin/trepctl -port 10000 -host host4 online
--------------------------------------------------------------------------------------
Topology: 'MASTER_SLAVE'
--------------------------------------------------------------------------------------
# node host1
cookbook  [master]  seqno:          5  - latency:   0.556 - ONLINE

# node host2
cookbook  [slave]   seqno:          5  - latency:   0.663 - ONLINE

# node host3
cookbook  [slave]   seqno:          5  - latency:   0.690 - ONLINE

# node host4
cookbook  [slave]   seqno:          5  - latency:   0.595 - ONLINE

What happens here?

The first notable thing is that we install the replicator with the option --auto-enable set to false. With this option, the replicator starts, but stays OFFLINE. After that, the take-over script stops the replication in all servers, retrieves the latest binlog position, and tells the replicator to go ONLINE using the event ID. This is one of the few cases where we can't use a global transaction ID, because it does not exist yet!

Next, all the slaves go online. There is no need to tell them at which event they should start, because they will simply get the events from the master.
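
If you wanted to perform the same hand-over by hand, the sequence would be roughly the following (a sketch only, based on the steps described above; the take-over script handles the details and the ordering for you):

# stop native replication on each slave
$ for host in host2 host3 host4; do mysql -h $host -e 'STOP SLAVE'; done
# read the binlog position reached on the master
$ mysql -h host1 -e 'SHOW MASTER STATUS'
# bring the Tungsten master online from that position, then the slaves
$ cookbook/trepctl -host host1 online -from-event mysql-bin.000005:651
$ for host in host2 host3 host4; do cookbook/trepctl -host $host online; done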


Further info: