
Using

GTID-based Replication
for
MySQL High Availability
Jacob Nikom

November 11, 2013


11/11/2013

Slide 1

Outline

High Availability (HA) Basics
  o What is HA and why do we need it?
  o Data Centers (DC) Downtime Causes and Consequences
  o High Availability Levels
  o How to Achieve Necessary Level of HA

MySQL Replication as High Availability Solution
  o Major Oracle MySQL HA Solutions
  o Their Advantages and Disadvantages
  o Brief History of MySQL Replication
  o Replication Enhancements in MySQL 5.6

How Coordinate Replication Works
  o Replication Data Files
  o Replication Execution
  o Replication Binary Log Coordinates
  o HA and Coordinate Replication

How GTID Replication Works
  o What is GTID?
  o How to Configure GTID Replication?
  o GTID Replication Basics
  o Coordinate Replication Failover
  o GTID Replication Failover

Amazon Cloud-based HA Architecture
  o AWS Main Components
  o AWS Failure Modes and Effects
  o Failover with GTID Replication and ZFS Snapshots
  o Failover Prototype and Demonstration

Summary

11/11/2013

Slide number 2

What is HA? Why Do We Need it?


Availability of a service is the percentage of time during which the system is able to provide the service
(Service Availability: Principles and Practice by Maria Toeroe and Francis Tam, 2012)

High Availability for Data Centers usually means:

  o Guaranteed Throughput (number of transactions per second)
  o Guaranteed Response time (latency)
  o Guaranteed Uptime/Downtime per year (in percentiles/seconds, minutes, hours)

Definitions of some important HA terms

  Uptime and Downtime
    o The proportion of time a high availability service is up or down over the total time. Normally, uptime + downtime = 100%.
  Single point of failure (SPOF)
    o An isolated device or piece of software whose failure causes downtime of the HA service.
    o The goal of an HA architecture is to remove the SPOFs.
  Failover and Switchover
    o Switching to a redundant or standby server.
    o Usually failover is automatic and operates without warning, while switchover requires human intervention.
  Fencing/Stonith
    o Often an HA architecture gets stuck because a non-responsive device is not releasing a critical resource.
    o Fencing or Stonith (Shoot The Other Node In The Head) is then required.
  Cluster
    o A group of computers acting together to offer a service.
  Fault Tolerance
    o Ability to handle failures with graceful degradation. Not all components need the same level of fault tolerance.
  Disaster Recovery
    o The plan and technologies to restore the service in case of disaster. A longer downtime is often allowed in this case.

11/11/2013

Slide number 3

What is HA? Why do we need it (cont.)?


High Availability vs. Continuous Availability

  o A highly available system allows planned outages
  o A continuously available system does not allow planned outages, essentially supporting no-downtime operations

High Availability vs. Fault Tolerance

  o A fault tolerant system has no service interruption in case of a component failure (higher solution cost)
  o A highly available system has a minimal service interruption (lower solution cost)

Why are we so interested in High Availability?

11/11/2013

Slide number 4

DC Downtime Causes and Consequences


What Causes Data Center Service Downtime?

  o System Failures
      - Hardware faults
      - Software bugs or crashes
  o Physical Disasters
  o Scheduled Maintenance
  o User Errors

MySQL Servers Downtime Causes (Baron Schwartz, Percona, 2011)

  [Pie chart: Operating Environment, Performance, Replication, Data Loss & Corruption, with shares of 35.1%, 34.4%, 20.8% and 9.7%]

What Are the System Downtime Effect and Impact?

  Effect:
    o Service unavailability
    o Bad response time

  Impact:
    o Revenue loss
    o Poor customer relationships
    o Reduced employee productivity
    o Regulatory issues

Slide number 5

High Availability Levels


Availability Level is often associated with UPTIME

  Availability %           | Downtime per year | Downtime per month | Downtime per week
  90%      ("one nine")    | 36.5 days         | 72 hours           | 16.8 hours
  99%      ("two nines")   | 3.65 days         | 7.20 hours         | 1.68 hours
  99.9%    ("three nines") | 8.76 hours        | 43.8 minutes       | 10.1 minutes
  99.99%   ("four nines")  | 52.56 minutes     | 4.32 minutes       | 1.01 minutes
  99.999%  ("five nines")  | 5.26 minutes      | 25.9 seconds       | 6.05 seconds
  99.9999% ("six nines")   | 31.5 seconds      | 2.59 seconds       | 0.605 seconds

Easy to calculate losses due to unavailability
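As an illustration (added here, not part of the original deck), a minimal Python sketch that reproduces the table above and estimates yearly loss for an assumed, hypothetical revenue-per-minute figure:

  # Minimal sketch: downtime per year for a given availability level,
  # plus an estimated loss using an assumed (hypothetical) revenue per minute.
  MINUTES_PER_YEAR = 365 * 24 * 60

  def downtime_minutes_per_year(availability_percent):
      """Return how many minutes per year the service may be down."""
      return MINUTES_PER_YEAR * (1 - availability_percent / 100.0)

  def yearly_loss(availability_percent, revenue_per_minute):
      """Estimated revenue loss per year due to unavailability."""
      return downtime_minutes_per_year(availability_percent) * revenue_per_minute

  for level in (90, 99, 99.9, 99.99, 99.999, 99.9999):
      minutes = downtime_minutes_per_year(level)
      print("%-8s -> %10.2f min/year down, ~$%.0f lost/year"
            % (level, minutes, yearly_loss(level, revenue_per_minute=100)))

For example, 99.9% availability allows 525.6 minutes of downtime per year, which matches the 8.76 hours in the table.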

11/11/2013

Slide number 6

How to Achieve Necessary Level of HA


HA can be achieved in two ways:

  1. Increasing the reliability of each component
  2. Adding redundant components

The first way is less efficient: the reliability of the system will be lower than the reliability of any individual component.
The second way is more efficient: the reliability of the system will be higher than the reliability of any individual component.
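A quick numerical sketch of that claim (an illustration added here, not from the slide): with independent failures, components in series multiply their availabilities, while redundant (parallel) components multiply their unavailabilities.

  # Sketch: availability of components in series vs. in parallel (redundant),
  # assuming independent failures. Values are illustrative only.
  def series(*availabilities):
      # All components must be up: multiply availabilities.
      a = 1.0
      for x in availabilities:
          a *= x
      return a

  def parallel(*availabilities):
      # Service is up if at least one component is up: multiply unavailabilities.
      u = 1.0
      for x in availabilities:
          u *= (1 - x)
      return 1 - u

  print(series(0.99, 0.99))    # 0.9801 -> lower than any single 0.99 component
  print(parallel(0.99, 0.99))  # 0.9999 -> higher than any single 0.99 component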
Removing Single Points of Failure (SPOF)

  Component    | Technique           | Explanation
  Storage      | RAID                | If one disk crashes, the service still works
  Servers      | Clustering          | If one server crashes, the service still works
  Power Supply | UPS                 | If the power source fails, the UPS provides the power and the service still works
  Network      | Redundant routers   | If a router fails, connectivity is preserved by routing traffic through a redundant connection and the service still works
  Location     | Another Data Center | If a data center is destroyed or disconnected, move all computation to another data center and the service still works

Why is High Availability so hard with databases?

  1. High availability databases are essentially real-time systems (RTS), sometimes even distributed RTS. Such systems are traditionally very difficult to deal with.

  2. Real-time data processing functionality (caches and dirty data logging) forces tight coupling between software and hardware components. Therefore software redundancy requires redundancy of the corresponding hardware as well.

  3. Real-time consistency between data stored on redundant components requires continuous and instantaneous synchronization. This is difficult to implement without significant overhead.

11/11/2013

Slide number 7

Major HA Solutions Using Oracle MySQL


  HA Feature               | MySQL Replication                          | DRBD                                                  | MySQL NDB Cluster
  Platform Support         | All supported by MySQL server              | Linux                                                 | All supported by MySQL Cluster
  Supported Storage Engine | All (transactionality required for GTIDs)  | InnoDB                                                | NDB
  Automatic Failover       | Yes, with MySQL 5.6 Utilities              | Yes, with Corosync + Pacemaker                        | Yes
  Failover Time            | 5 seconds + InnoDB recovery time           | About 1 minute with InnoDB log files of about 100 MB  | 1 second or less
  Replication Mode         | Asynchronous + Semi-synchronous            | Synchronous                                           | Synchronous + Asynchronous
  Shared Storage           | No, distributed access nodes               | No, distributed access nodes                          | No, distributed access nodes
  Number of Nodes          | Master + Multiple Slaves                   | Active/Passive Master + Multiple Slaves               | 255 + Multiple Slaves
  Availability Level       | 99.9%                                      | 99.99%                                                | 99.999%

11/11/2013

Slide number 8

Advantages and Disadvantages


  MySQL Replication (before 5.6)
    Advantages:
      - Simple
      - Inexpensive
      - Extends existing database architecture
      - All the servers can be used, no idle standby
      - Supports MyISAM
      - Caches on the failover slave are not cold
      - Online schema changes
      - Low impact backups
      - 99.9% availability
      - Variable level of availability (98-99.9+%)
    Disadvantages:
      - Could be a SPOF
      - Replication can break
      - Replication can lag behind
      - Replication can be out of sync
      - Not suitable for high write loads
      - Reads scale only if they are split from writes
      - Can lose data

  DRBD
    Advantages:
      - No data loss
      - Much higher write capacity
      - No SPOF with DRBD
      - Provides high availability and data integrity across two servers in the event of hardware or system failure
      - Ensures data integrity by enforcing write consistency on the primary and secondary nodes
      - 99.99% availability
    Disadvantages:
      - High load on the network
      - Only works with engines supporting auto-recovery
      - More complex: NIC bonding, fencing, etc.
      - Requires fencing
      - One server is standby, idle hardware
      - Cold cache after failover
      - No online schema change
      - Corruption propagation

  MySQL NDB Cluster
    Advantages:
      - No Single Point of Failure
      - Auto-sharding for write-scalability
      - SQL and NoSQL interfaces
      - Real-time responsiveness
      - Active/active geographic replication
      - Online scaling and schema upgrades
      - 99.999% availability
    Disadvantages:
      - Incompatible with typical database architecture
      - Much more complex than other solutions
      - Needs work on schema and queries for good performance
      - Higher skill set required
      - Poor for large joins
      - Size of dataset more limited, large memory footprint
      - Minimum number of physical servers required

MySQL Replication is the most convenient HA solution!

We must increase MySQL Replication availability!
11/11/2013

Slide number 9

Brief History of MySQL Replication


MySQL 3.23 - Generally Available, January 2001
  o MySQL Replication came to be (3.23.15, May 2000)
  o Replication filters

MySQL 4.0 - Generally Available, March 2003
  o Two replication threads instead of just one
  o Slave relay logs

MySQL 4.1 - Generally Available, October 2004
  o Replication over SSL
  o Disk synchronization options for the binary log

MySQL 5.0 - Generally Available, October 2005
  o Replication of stored routines and triggers
  o Slave retries transactions on transient errors

MySQL 5.1 - Generally Available, November 2008
  o Row-based Replication (RBR)

MySQL 5.5 - Generally Available, December 2010
  o Semi-sync replication
  o Replication heartbeats
  o RBR type conversion

MySQL 5.6 - Generally Available, February 2013
  o Crash-safe slaves
  o Global Transaction IDs
  o Replication event checksums
  o Binary log group commit
  o Multi-threaded slaves
  o RBR enhanced
  o MySQL Utilities 1.3, GA in August 2013

MySQL 5.7.2 DMR, September 2013
  o Multi-threaded inter-transactional replication
  o Lossless semi-synchronous replication
  o MySQL Utilities 1.4

[Timeline chart: MySQL 3.23 (2001), 4.0 (2003), 4.1, 5.0 (2005), 5.1, 5.5, 5.6, 5.7 DMR (2013)]

11/11/2013
Slide number 10

Replication Enhancements in MySQL 5.6

Failover & Recovery:

Global Transaction Identifiers (GTID)

Server UUIDs

Crash Safe Slaves & Binary Logs

Replication Failover and Administration Utilities

Data Integrity:

11/11/2013

Replication Event Checksums

Performance:

Multi-Threaded Slaves

Binary Log Group Commit

Optimized Row-Based Replication

Database Operations:

Replication Utilities

Time-Delayed Replication

Remote Binlog Backup

Information Log Events

Slide number 11

How Coordinate Replication Works


1.  Master server enables the binary log
2.  Client commits a query to the master
3.  Master executes the query and commits it
4.  Master stores the query in the binary log as an event
5.  Master returns to the client with commit confirmation
6.  Slave server configures replication:
      mysql> CHANGE MASTER TO
             MASTER_HOST='12.34.56.789', MASTER_USER='slave_user',
             MASTER_PASSWORD='password',
             MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
7.  Replication starts: mysql> START SLAVE;
8.  IO thread starts and initiates the dump thread on the master
9.  Dump thread reads events from the binary log
10. Dump thread sends events to the IO thread on the slave
11. IO thread writes events into the relay log
12. IO thread updates master.info file parameters
13. SQL thread reads relay log events
14. SQL thread executes events on the slave
15. SQL thread updates the relay-log.info file

[Diagram: Client -> Master (binary logs, mysqld-bin.index) -> Dump Thread -> IO Thread on the Slave -> relay logs, mysqld-relay-bin.index, master.info -> SQL Thread -> relay-log.info]

11/11/2013

Slide number 12

Replication Data Files


Master Server 5.5

[root@node1 data]# ls -l
-rw-r----- 1 mysql mysql 144703488 Oct 22 22:33 ibdata1
-rw-r----- 1 mysql mysql  67108864 Oct 22 22:33 ib_logfile0
-rw-r----- 1 mysql mysql  67108864 Oct 22 22:33 ib_logfile1
drwx------ 2 mysql mysql        81 May 20 23:21 mysql
-rw-rw---- 1 mysql mysql       332 Oct 22 22:30 mysqld-bin.000001
-rw-rw---- 1 mysql mysql       354 Oct 22 22:33 mysqld-bin.000002
-rw-rw---- 1 mysql mysql        40 Oct 22 22:31 mysqld-bin.index
-rw-rw---- 1 mysql mysql         5 Oct 22 22:31 mysqld.pids
-rw-r----- 1 mysql root      12616 Oct 25 03:20 mysql-error.err
drwx------ 2 mysql mysql        55 May 20 23:21 performance_schema
drwx------ 2 mysql mysql         2 Oct 22 22:33 test
[root@node1 data]#

Slave Server 5.5

[root@node1 data]# ls -l
-rw-r----- 1 mysql mysql 144703488 Oct 27 19:47 ibdata1
-rw-r----- 1 mysql mysql  67108864 Oct 27 19:47 ib_logfile0
-rw-r----- 1 mysql mysql  67108864 Oct 27 19:47 ib_logfile1
-rw-rw---- 1 mysql mysql        60 Oct 22 22:31 master.info
drwx------ 2 mysql mysql        81 May 20 23:21 mysql
-rw-rw---- 1 mysql mysql         6 Oct 22 22:31 mysqld.pids
-rw-rw---- 1 mysql mysql       205 Oct 22 22:31 mysqld-relay-bin.000001
-rw-rw---- 1 mysql mysql       526 Oct 22 22:33 mysqld-relay-bin.000002
-rw-rw---- 1 mysql mysql        52 Oct 22 22:31 mysqld-relay-bin.index
-rw-rw---- 1 mysql root      11309 Oct 22 22:31 mysql-error.err
-rw-rw---- 1 mysql mysql        58 Oct 22 22:31 relay-log.info
drwx------ 2 mysql mysql        55 May 20 23:21 performance_schema
drwx------ 2 mysql mysql         2 Oct 22 22:33 test
[root@node1 data]#

File mysqld-bin.index (master)

[root@node1 data]# more /usr/local/mysql/data/mysqld-bin.index
/usr/local/mysql/data/mysqld-bin.000001
/usr/local/mysql/data/mysqld-bin.000002
[root@node1 data]#

File mysqld-relay-bin.index (slave)

[root@node1 data]# more /usr/local/mysql/data/mysqld-relay-bin.index
/usr/local/mysql/data/mysqld-relay-bin.000001
/usr/local/mysql/data/mysqld-relay-bin.000002
[root@node1 data]#

Events Layout on a Binary Log File (or Relay Log File)

[Diagram: BEGIN | Ev1 | Ev2 | COMMIT | BEGIN | Ev1 | Ev2 | COMMIT ... each event carries a server_id; each BEGIN...COMMIT sequence forms a transactional group]

  o File-based log that records the changes on the master
  o Statement- or row-based format (may be intermixed)
  o Split into transactional groups containing multiple events
  o Each event contains a server_id value

11/11/2013
Slide number 13

Replication Data Files (cont.)


master.info

  Line  Value                      Meaning
   1    15                         Number of lines in the file
   2    mysqld-relay-bin.000001    Current binlog file being read (Master_Log_File)
   3    4723                       Last binlog position read (Read_Master_Log_Pos)
   4    node1                      Master host connected to (Master_Host)
   5    root                       Replication user (Master_User)
   6    kiva                       Replication password
   7    3306                       Master port used (Master_Port)
   8    60                         How many times the slave will try to reconnect (Connect_Retry)
   9    0                          1 if SSL is enabled, 0 otherwise
  10    15                         SSL-related information

relay-log.info

  Line  Value                        Meaning
   1    ./mysqld-relay-bin.000001    Relay log file (Relay_Log_File)
   2    874                          Relay log position (Relay_Log_Pos)
   3    mysql-bin.000001             Master log file (Relay_Master_Log_File)
   4    729                          Master log position (Exec_Master_Log_Pos)

11/11/2013

Slide number 14

Coordinate Replication Execution


Replication coordinates:
  1. Master binary log file name (Master_Log_File): the name of the particular binary log on the master (like mysqld-bin.000001)
  2. Master binary log position (Binary_Log_Pos): the number of the last event executed on the master (end of the binlog file)
  3. Position in the master binary log where the IO thread has read to (Read_Master_Log_Pos)
  4. Position in the master binary log where the SQL thread has executed to (Exec_Master_Log_Pos)
  5. Slave relay log name (Relay_Log_File): the name of the particular relay log on the slave (like mysqld-relay-bin.000001)
  6. Slave relay log position where the SQL thread has executed to (Relay_Log_Pos): the last event in the relay log on the slave
mysql> SHOW MASTER STATUS;
+---------------+----------+--------------+------------------+
| File
| Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| mysql-bin.003 | 77
|
|
|
+---------------+----------+--------------+------------------+

File: MySQL binary log file on the master


Position: Last executed position (next write) in the binary log. If the slave caught
up with the master, it should execute next events from this position.

Master Binary Log


Last read event on the slave (Read_Master_Log_Pos)

57

Last executed event on the slave (Exec_Master_Log_Pos)

67

76

77

Last executed event on the master (binlog Position)

Slave Relay Log


Last executed event on the slave (Relay_Log_Pos)

11/11/2013

56

57
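To make the coordinate bookkeeping concrete, here is a small illustrative Python sketch (not from the deck) that takes the coordinate fields above and reports how far the slave's SQL thread lags behind its IO thread, in bytes. The field names match SHOW SLAVE STATUS; the values are the ones used in the examples of this deck, and the calculation only applies while both threads are on the same master binlog file.

  # Illustrative sketch: bytes of master binlog events downloaded by the IO thread
  # but not yet executed by the SQL thread.
  slave_status = {
      "Master_Log_File":       "mysql-bin.000001",   # file the IO thread reads
      "Read_Master_Log_Pos":   4723,                 # IO thread position
      "Relay_Master_Log_File": "mysql-bin.000001",   # file the SQL thread executes
      "Exec_Master_Log_Pos":   729,                  # SQL thread position
  }

  def bytes_not_yet_executed(status):
      # Only meaningful when both threads work on the same master binlog file.
      if status["Master_Log_File"] != status["Relay_Master_Log_File"]:
          raise ValueError("IO and SQL threads are on different binlog files")
      return status["Read_Master_Log_Pos"] - status["Exec_Master_Log_Pos"]

  print(bytes_not_yet_executed(slave_status))  # 3994 bytes read but not executed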

Slide number 15

Replication Binary Log Coordinates


mysql> SHOW SLAVE STATUS\G
*************************** 1.
Slave_IO_State:
Master_Host:
Master_User:
Master_Port:
Connect_Retry:
Master_Log_File:
Read_Master_Log_Pos:
Relay_Log_File:
Relay_Log_Pos:
Relay_Master_Log_File:
Slave_IO_Running:
Slave_SQL_Running:
. . . . . . . . . . . . . . .:
Last_Errno: 0 Last_Error:
Skip_Counter:
Exec_Master_Log_Pos:
Relay_Log_Space:
Until_Condition:
Until_Log_File:
Until_Log_Pos:
Master_SSL_Allowed:
. . . . . . . . . . . . . . .:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert:
Last_IO_Errno:
Last_IO_Error:
Last_SQL_Errno:
Last_SQL_Error:
1 row in set (0.00 sec)

row ***************************
Waiting for master to send event
127.0.0.1
master_user
26768
60
mysql-bin.000001 (IO Thread reads this file)
4723 (Position in master binary log file where IO Thread has read to)
mysqld-relay-bin.000001
874 (Position in the relay log file where SQL thread read and executed events
mysql-relay-bin.000001
Yes
Yes
. . . . . .
0
729 (Position in master binary log file that SQL Thread read and executed up to
1042 The total combined size of all existing relay log files
None
0
No
. . .

Coordinates usage examples:

No
0

Connect to the master using masters binary log


slave> CHANGE MASTER TO
MASTER_HOST='12.34.56.789',MASTER_USER='slave_user',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4723;

Failover
Master

Slave1

Slave2

11/11/2013

Master
Crashed!

Slave1

Connect to the new master/old slave using slaves relay log


slave> CHANGE MASTER TO
MASTER_HOST='12.34.56.789',MASTER_USER='slave_user',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-relay-bin.000001', MASTER_LOG_POS=729;

Slave2

Slide 16

HA and Coordinate Replication

Coordinate-based replication is great: it is easy to set up

Coordinate-based replication is bad: it is difficult to fail over

  o When the master fails, the slaves are ready to replace it
  o However, detecting the failure and acting upon it with multiple servers requires significant DBA intervention
  o It is difficult to follow changes through a complex replication stream that goes to multiple servers

How to Improve It?

  o If every transaction has its own globally unique identifier (GTID), it becomes a lot easier to track changes

Advantages

  o It is possible to identify a transaction uniquely across the replication servers.
  o It makes automation of the failover process much easier. There is no need to do calculations or inspect the binary log: just execute CHANGE MASTER TO ... MASTER_AUTO_POSITION=1.
  o At the application level it is easier to do a WRITE/READ split. After a write on the MASTER you have a GTID, so you just check whether that GTID has been executed on the SLAVE that you use for reads (see the sketch below).
  o Development of new automation tools is not a pain anymore.
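A minimal sketch of that read-your-writes check, added for illustration and not part of the original deck. It assumes the mysql-connector-python package; host names, credentials and the schema are hypothetical placeholders. It uses MySQL 5.6's GTID_SUBSET() function to test whether the GTIDs written on the master are already contained in the slave's executed set.

  # Illustrative sketch (not from the deck): read-your-writes with GTIDs.
  import time
  import mysql.connector

  master = mysql.connector.connect(host="master-host", user="app", password="secret")
  slave  = mysql.connector.connect(host="slave-host",  user="app", password="secret")

  # 1. Perform the write on the master.
  mcur = master.cursor()
  mcur.execute("INSERT INTO test.t1 VALUES (1)")
  master.commit()

  # 2. Capture the GTID set executed on the master so far.
  mcur.execute("SELECT @@GLOBAL.GTID_EXECUTED")
  written_gtids = mcur.fetchone()[0]

  # 3. Poll the slave until those GTIDs are contained in its own executed set;
  #    only then read your own write from the slave.
  scur = slave.cursor()
  for _ in range(50):                       # up to ~5 seconds
      scur.execute("SELECT GTID_SUBSET(%s, @@GLOBAL.GTID_EXECUTED)", (written_gtids,))
      if scur.fetchone()[0] == 1:
          scur.execute("SELECT * FROM test.t1")
          print(scur.fetchall())
          break
      time.sleep(0.1)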

Drawbacks

  o Additional complexity
  o Incompatibility with the existing coordinate-based replication

11/11/2013

Slide number 17

What is GTID?
Where does a GTID come from?

# ls -l /usr/local/mysql/data
total 537180
-rw-r----- 1 mysql mysql        56 Oct 17 10:49 auto.cnf
drwx------ 2 mysql mysql      4096 Oct 17 10:49 bench/
-rw-r----- 1 mysql mysql 348127232 Oct 17 11:58 ibdata1
-rw-rw---- 1 mysql mysql 100663296 Oct 17 11:58 ib_logfile0
-rw-rw---- 1 mysql mysql 100663296 Oct 17 11:24 ib_logfile1
drwx------ 2 mysql mysql     32768 Oct 17 10:55 mhs/
drwx------ 2 mysql mysql      4096 Oct 17 10:49 mysql/
-rw-rw---- 1 mysql mysql         6 Oct 17 11:58 mysqld.pids
-rw-r----- 1 mysql root       9131 Oct 17 11:58 mysql-error.err
drwx------ 2 mysql mysql      4096 Oct 17 10:49 performance_schema/
drwxr-xr-x 2 mysql mysql      4096 Oct 17 10:49 test/

[root@jnikom-linux data]# more auto.cnf
[auto]
server-uuid=965d996a-fea7-11e2-ba15-001e4fb6d589
[root@jnikom-linux data]#

Example GTID: 965d996a-fea7-11e2-ba15-001e4fb6d589:1

Server identifier: 128-bit identification number (SERVER_UUID)
  o It logically identifies the server where the transaction originated
  o Every server has its own SERVER_UUID
  o If you delete it, it will be regenerated after you restart the server
  o The GTID is written into the binary log

TIN: 64-bit transaction identification number
  o A sequence number incremented with every new transaction
  o It starts with 1; there is no 0

MySQL 5.6 binary log:
[Diagram: GTID | BEGIN | Ev1 | Ev2 | COMMIT | GTID | BEGIN | Ev1 | Ev2 | COMMIT ... each transactional group is preceded by its GTID]

11/11/2013
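For illustration (not part of the slide), a tiny Python sketch that splits a GTID of the form shown above into its two components, the originating server's UUID and the transaction sequence number:

  # Sketch: decompose a single GTID "server_uuid:transaction_number".
  def parse_gtid(gtid):
      server_uuid, txn = gtid.rsplit(":", 1)
      return server_uuid, int(txn)

  uuid, txn = parse_gtid("965d996a-fea7-11e2-ba15-001e4fb6d589:1")
  print(uuid)  # 965d996a-fea7-11e2-ba15-001e4fb6d589  (SERVER_UUID from auto.cnf)
  print(txn)   # 1  (first transaction; numbering starts at 1)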
Slide number 18

How to Configure GTID Replication?


New my.cnf parameters

  gtid_mode
    o Can be ON or OFF (not 1 or 0)
    o Enables GTIDs on the server
  log_bin (existed before)
    o Enables the binary log
    o Mandatory to create a replication setup
  log-slave-updates
    o Slave servers must log the changes they apply
    o Needed for server promotion/demotion
  enforce-gtid-consistency
    o Forces the server to be safe by using only transactional tables
    o Non-transactional statements are denied by the server

New replication configuration command

  slave> CHANGE MASTER TO MASTER_HOST='node1',
         MASTER_USER='root',
         MASTER_PASSWORD='kiva',
         MASTER_AUTO_POSITION=1;

mysql> SHOW SLAVE STATUS\G


*************************** 1.
Slave_IO_State:
Master_Host:
Master_User:
Master_Port:
Connect_Retry:
Master_Log_File:
Read_Master_Log_Pos:
Relay_Log_File:
Relay_Log_Pos:
Relay_Master_Log_File:
Slave_IO_Running:
Slave_SQL_Running:
. . . . . . . . . . . . . .:
Last_Errno:
Last_Error:
Skip_Counter:
Exec_Master_Log_Pos:
Relay_Log_Space:
Until_Condition:
Until_Log_File:
Until_Log_Pos:
Master_SSL_Allowed:
. . . . . . . . . . . . . .:
Seconds_Behind_Master:
Master_SSL_Verify_Server_Cert:
Last_IO_Errno:
Last_IO_Error:
Last_SQL_Errno:
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id:
Master_UUID:
Master_Info_File:
SQL_Delay:
SQL_Remaining_Delay:
Slave_SQL_Running_State:
Master_Retry_Count:
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position:

row ***************************
Waiting for master to send event
node1
root
3306
60
mysqld-bin.000002
354
mysqld-relay-bin.000002
526
mysqld-bin.000002
Yes
Yes
0
0
354
731
None
0
No
0
No
0
0

28
b9ff49a4-3b50-11e3-85a5-12313d2d286c
mysql.slave_master_info
0
NULL
Slave has read all relay log; waiting for the slave I/O thread
86400

b9ff49a4-3b50-11e3-85a5-12313d2d286c:2
b9ff49a4-3b50-11e3-85a5-12313d2d286c:1-2
1

1 row in set (0.00 sec)

11/11/2013

Slide number 19

GTID Replication Basics


Each server (master and slave) has a binary log

  o GTIDs are written into the binary log
  o GTIDs executed on a server are contained in a new, read-only, global server variable GTID_EXECUTED
  o GTID_EXECUTED holds the range of GTIDs committed on this server as a string

[Diagram: Database Server writing GTIDs into its binary log:
  0EB3E4DB-4C31-42E6-9F55-EEBBD608511C:1
  0EB3E4DB-4C31-42E6-9F55-EEBBD608511C:2
  4D8B564F-03F4-4975-856A-0E65C3105328:1
  0EB3E4DB-4C31-42E6-9F55-EEBBD608511C:3
  4D8B564F-03F4-4975-856A-0E65C3105328:2]

mysql> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                    |
+-------------------------------------------+
| 0EB3E4DB-4C31-42E6-9F55-EEBBD608511C:1-3, |
| 4D8B564F-03F4-4975-856A-0E65C3105328:1-2  |
+-------------------------------------------+
1 row in set (0.00 sec)

For each server, the binary log serves as the GTID repository


11/11/2013

Slide number 20

GTID Replication Basics (cont.)

A GTID set possesses both cardinal and ordinal properties

  o Two sets of GTIDs can be compared and sorted at the same time
  o Those properties define a powerful model for tracking transactions

master> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                          |
+-------------------------------------------------+
| 4D8B564F-03F4-4975-856A-0E65C3105328:1-1000000 |
+-------------------------------------------------+

slave> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                          |
+-------------------------------------------------+
| 4D8B564F-03F4-4975-856A-0E65C3105328:1-999999  |
+-------------------------------------------------+

It is easy to find that:
  1. Exactly one transaction is missing on the slave
  2. Which one it is (transaction 1000000)
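As an illustration (not in the original slides), a small Python sketch that compares two single-UUID GTID ranges like the ones above and reports which transactions the slave is missing; the parsing only handles the simple "uuid:first-last" form used in this example.

  # Sketch: compare two GTID sets of the form "uuid:first-last" and report
  # the transactions present on the master but not on the slave.
  def parse_range(gtid_set):
      uuid, interval = gtid_set.split(":")
      first, _, last = interval.partition("-")
      return uuid, int(first), int(last or first)

  def missing_on_slave(master_set, slave_set):
      m_uuid, m_first, m_last = parse_range(master_set)
      s_uuid, s_first, s_last = parse_range(slave_set)
      assert m_uuid == s_uuid, "example assumes a single originating server"
      # Transactions the master has executed beyond the slave's high-water mark.
      return list(range(max(s_last, m_first - 1) + 1, m_last + 1))

  master_set = "4D8B564F-03F4-4975-856A-0E65C3105328:1-1000000"
  slave_set  = "4D8B564F-03F4-4975-856A-0E65C3105328:1-999999"
  print(missing_on_slave(master_set, slave_set))  # [1000000]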

New Replication Protocol

  1. When the slave connects to the master, it sends the range of GTIDs that it has executed and committed
  2. In response, the master sends all other transactions, i.e. those that the slave has not yet executed

[Diagram: the master's binary log contains Id1:Trx1, Id2:Trx2, Id3:Trx3; the slave's binary log contains Id1:Trx1, Id2:Trx2; the slave reports Id1:Trx1, Id2:Trx2 and the master sends back Id3:Trx3]

The SQL command that tells the server to use the new protocol is: CHANGE MASTER TO MASTER_AUTO_POSITION = 1;

If MASTER_AUTO_POSITION = 1, you cannot specify MASTER_LOG_FILE or MASTER_LOG_POS.

11/11/2013

Slide number 21

Coordinate Replication Failover


Switching to the new master
  1. Find the new master's binary log coordinates (file name and position) using the SHOW MASTER STATUS command
  2. Switch to the new master with the CHANGE MASTER TO MASTER_HOST=... command, using the new master's binary log coordinates

master> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 12345    |              |                  |
+------------------+----------+--------------+------------------+

[Diagram: Client and Master replicating to Slave1 (binary log mysql-bin.000007, position 345), Slave2 (binary log mysql-bin.000006, position 23456) and Slave3 (relay log mysql-relay-bin.000008, position 5678); each slave has reached a different point]

Tedious and error-prone procedure!

11/11/2013

Slide number 22

GTID Replication Failover


Switching to the new master
  1. Switch to the new master using the CHANGE MASTER TO MASTER_AUTO_POSITION = 1; command

master> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+--------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                          |
+------------------+----------+--------------+------------------+--------------------------------------------+
| mysql-bin.000002 | 12345    | test         | manual,mysql     | 5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-555 |
+------------------+----------+--------------+------------------+--------------------------------------------+

[Diagram: Client and Master replicating to Slave1 (Executed_Gtid_Set 5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-500), Slave2 (...:1-400) and Slave3 (...:1-300)]

Easy and error-free!

11/11/2013
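For illustration (not in the deck), a short Python sketch of how an automated failover tool could pick the most up-to-date replacement master by comparing the per-slave Executed_Gtid_Set values shown above; it assumes the simple single-UUID "uuid:first-last" form.

  # Sketch: choose the most advanced slave as the new master by comparing the
  # highest transaction number in each slave's Executed_Gtid_Set.
  slaves = {
      "Slave1": "5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-500",
      "Slave2": "5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-400",
      "Slave3": "5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-300",
  }

  def highest_txn(gtid_set):
      # "uuid:first-last" -> last transaction number committed on that slave
      return int(gtid_set.split(":")[1].split("-")[-1])

  new_master = max(slaves, key=lambda name: highest_txn(slaves[name]))
  print(new_master)  # Slave1: it has executed the most transactions (1-500)

The remaining slaves are then simply repointed with CHANGE MASTER TO MASTER_HOST='<new master>', MASTER_AUTO_POSITION=1; no coordinates need to be computed.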

Slide number 23

Amazon Cloud-based HA Architecture


Regions and Availability Zones (AZ) (as of 07-24-2012)

  Country   | Region       | State/City | AZs
  USA       | US-West      | Oregon     | A, B
  USA       | US-West      | California | A, B, C
  USA       | US-East      | Virginia   | A, B, C, D, E
  Brazil    | San Paulo    | San Paulo  | A, B
  Ireland   | EU           | Dublin     | A, B, C
  Japan     | Asia-Pacific | Tokyo      | A, B
  Singapore | Asia-Pacific | Singapore  | A, B

  Connection points        | Connectivity Type | Average Latencies [1], [2]
  Region-to-Another-Region | WAN               | 100 - 500 ms
  AZ-to-Another-AZ         | LAN               | 10 - 50 ms
  AZ-to-Same-AZ            | LAN               | 2 - 10 ms

11/11/2013

Slide number 24

AWS Main Components


EC2 instance
  Definition: An EC2 instance is a server running the MHS application using Amazon Machine Image (AMI) software.
  Properties:
    1. The server can fail due to its own hardware problems or due to an AZ outage.
    2. Performance varies by up to 60% between instances of the same type.
  Price: depends upon instance type ($0.03 - $3.10 per hour)

Directly attached Ephemeral Storage
  Definition: Ephemeral (instance) storage is the HDD or SSD directly attached to the EC2 instance (physical node). This storage exists for every single EC2 instance even if it is not used.
  Performance: HDD 0.1 ms, SSD 0.001 ms, no network latency
  Properties:
    1. Good for short-term persistence
    2. The fastest and most predictable performance
    3. Not shared with other instances
    4. Does not rely on the network for its access
    5. After an accidental reboot, like a power outage, the content of the storage remains intact and readily available
    6. After shutdown the content of the storage no longer exists. Therefore it has to be copied periodically to EBS or S3 to ensure long-term persistence
  Price: completely free of charge, including I/O operations.

RDS (network attached)
  Definition: RDS is an instance of MySQL server running on an EC2 platform. Persistent storage (for back-ups, etc.) is allocated in EBS volumes. However, you can neither access the underlying EC2 instance nor use S3 to access your stored database snapshots. Since you do not get access to the native EC2 instance, you cannot install additional software on the MySQL host.

  Multi-AZ deployment: RDS automatically provisions and manages a standby replica in a different AZ. Database updates are made synchronously on the primary and standby resources to prevent replication lag. In the event of planned database maintenance, DB Instance failure, or an AZ failure, RDS automatically fails over to the up-to-date standby so that database operations can resume quickly without administrative intervention. Prior to failover you cannot directly access the standby, and it cannot be used to serve read traffic.

  Read Replicas: You can create replicas of a given source DB Instance that serve high-volume application read traffic from multiple copies of your data. RDS uses MySQL's asynchronous replication to propagate changes made to a source DB Instance to any associated Read Replicas.
  Price: $0.4 - $0.8 per hour

11/11/2013

Slide number 25

AWS Main Components (continued)


EBS
  Definition: EBS provides block-level storage volumes for use with EC2 instances. The volumes are network-attached and persist independently from the life of the instance they are attached to.
  Performance: HDD 0.1 ms; network latency 2 ms
  Properties:
    1. Good for short- and medium-term persistence
    2. Performance varies unless Provisioned IOPS volumes and EBS-Optimized instances are used (not available for all instance types)
    3. Throughput is shared with other instances
    4. Relies on the network for its access
    5. After an accidental reboot, like a power outage, the content of the storage remains intact. However, availability can be impacted by network overload caused by multi-tenant recoveries
  EBS provisions a specific level of I/O performance when a Provisioned IOPS volume is chosen. EBS volumes live in one Availability Zone (AZ) and can only be attached to instances in that same AZ. Each storage volume is automatically replicated within the same AZ. EBS can create point-in-time snapshots of volumes, which are persisted to S3.
  CloudWatch shows performance metrics for EBS volumes: bandwidth, throughput, latency, and queue depth.
  Price: $0.10 per GB-month of provisioned storage; $0.10 per 1 million I/O requests

S3
  Definition: S3 provides a simple web interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web (multiple-AZ storage). You can write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited. Each object is stored in a bucket and retrieved via a unique, developer-assigned key.
  Price: $0.1 per GB-month

11/11/2013

Slide number 26

AWS Failure Modes and Effects [1]


  Failure Mode         | Probability | Mitigation Plan
  Application Failure  | High        | Automatic degraded response
  AWS Region Failure   | Low         | Wait for the region to recover
  AWS Zone Failure     | Medium      | Continue to run on the remaining zones
  Data Storage Failure | Medium      | Restore from S3 backup
  S3 Failure           | Low         | Restore from remote archive (disaster recovery)

Zone Failure Situations

  1. Power Outage
     1. Instances lost
     2. Ephemeral storage unavailable; readily available after power restoration
     3. EBS storage unavailable; not readily available after power restoration
  2. Network Outage
     1. Instances unavailable
     2. Ephemeral storage unavailable
     3. EBS storage unavailable; might not be readily available after network restoration

Region Failure Situations

  1. Control plane for creating new instances fails [2]
     1. New instances cannot be created
     2. Lost control of the remaining zones' infrastructure
     3. EBS storage unavailable; not readily available after power restoration
  2. Network Outage
     1. Instances unavailable
     2. Ephemeral storage unavailable
     3. EBS storage unavailable; might not be readily available after network restoration

[1] http://www.slideshare.net/adrianco/high-availability-architecture-at-netflix
[2] http://readwrite.com/2011/04/25/almost-as-galling-as-the#awesm=~ommLY1YhK9eiOz

11/11/2013

Slide number 27

Failover with GTID Replication and ZFS Snapshots


[Diagram: Node1 (Master) in Availability Zone 1 runs the Master and a Snap1 Slave on SSD with ZFS; Node2 (Slave) in Availability Zone 2 runs the Failover Slave and a Snap2 Slave on SSD with ZFS; Node3 runs the Application; the slaves use GTID replication from the master]

MySQL settings used on the nodes:
  sync_binlog = 1
  sync_master_info = 1
  sync_relay_log = 1
  innodb_support_xa = 1
  master_info_repository = TABLE
  relay_log_info_repository = TABLE
  log-slave-updates = TRUE

We use ZFS snapshots for the Master and Slave backups
  o We must have a Slave node to make ZFS snapshots
  o The Master server cannot be stopped without stopping the whole warehouse system

Failover cases
  1) Service failures:
     - node1 master process failure: service moves to node2
     - node1 slave process failure: service restarts
  2) Node failures:
     - node1 failure: services move to node2
     - node2 failure: restart node2
  3) Network failures:
     - node1 network failure: services move to node2
     - node2 (slave) network failure: restart services
  4) Server data corruption:
     - node1 master: get the snapshot from the snap slave
     - node2 master: get the snapshot from the snap slave

11/11/2013
Slide number 28

High-Level Block Diagram of the Demo Script


Start main script
   1. Verify initial conditions
   2. Destroy the snapshot if it exists
   3. Stop and clean up the servers: slave, snap and master
   4. Start the servers: slave, snap and master
   5. Set up GTID replication from master to snap
   6. Set up GTID replication from master to main slave
   7. Initialize and start the local application
   8. Prepare to take a snapshot of the snap server
   9. Flush tables with read lock
  10. Take the snapshot
  11. Unlock tables on the snap server
  12. Crash the master
  13. Crash the application
  14. Promote the main slave to new master
  15. Restart the application and point it to the new master
  16. Destroy the local volume
  17. Send the snapshot to the master to replace the original directory
  18. Delete the old UUID from the old master directory
  19. Demote the old master to the new main slave
  20. Set up replication from the new master to the new slave
  21. Continue to run and finish the application
Stop main script
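A condensed, hypothetical Python sketch of the core of such a demo script is shown below; it is not the original script. Host names, the ZFS dataset name and credentials are placeholders, MySQL statements are issued through the (assumed) mysql-connector-python package, and ZFS commands are shelled out.

  # Hypothetical sketch of the snapshot-then-failover core of the demo script.
  import subprocess
  import mysql.connector

  SNAP_DATASET = "tank/mysql"                      # placeholder ZFS dataset
  creds = dict(user="root", password="kiva")

  def run(cmd):
      print("+", cmd)
      subprocess.check_call(cmd, shell=True)

  def sql(conn, statement):
      cur = conn.cursor()
      cur.execute(statement)
      cur.close()

  # Steps 8-11: consistent ZFS snapshot on the snap server.
  # The read lock only lives as long as this connection, so keep it open
  # while the snapshot is taken.
  snap = mysql.connector.connect(host="snap-host", **creds)
  sql(snap, "FLUSH TABLES WITH READ LOCK")
  run("zfs snapshot %s@demo" % SNAP_DATASET)
  sql(snap, "UNLOCK TABLES")
  snap.close()

  # Steps 14-15: promote the main slave to new master (application is repointed to it).
  slave = mysql.connector.connect(host="slave-host", **creds)
  sql(slave, "STOP SLAVE")
  sql(slave, "RESET SLAVE ALL")
  slave.close()

  # Steps 17-20: restore the crashed master from the snapshot, then demote it to a slave.
  run("zfs send %s@demo | ssh master-host zfs recv -F %s" % (SNAP_DATASET, SNAP_DATASET))
  master = mysql.connector.connect(host="master-host", **creds)
  sql(master, "CHANGE MASTER TO MASTER_HOST='slave-host', MASTER_USER='root', "
              "MASTER_PASSWORD='kiva', MASTER_AUTO_POSITION=1")
  sql(master, "START SLAVE")
  master.close()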

11/11/2013

Slide number 29

Summary

MySQL replication is the most popular High Availability solution

To increase server availability the MySQL team introduced new features:

  o Global Transaction Identifiers (GTIDs)
  o Server UUIDs
  o Crash-Safe Slaves and Binary Logs
  o Replication Event Checksums

The new features increased availability and allowed automation of the failover procedure

11/11/2013

Slide number 30

Backup

11/11/2013

Slide number 31

Number of fixed bugs in MySQL 5.5 and 5.6 releases

  o The first production release of MySQL 5.6 (5.6.10) had 40% more bugs than the first production release of MySQL 5.5 (5.5.9)
  o The number of bugs in subsequent MySQL 5.6 releases remained significantly higher than in the first production release; with 5.5 the situation was different
  o The number of bugs in 5.6 is still significantly higher than it was at the comparable point for 5.5
11/11/2013

Slide number 32

Number of new/changed features in releases

  o The number of improvements in subsequent MySQL 5.6 releases was very similar to that in subsequent MySQL 5.5 releases

11/11/2013

Slide 33

Replication Binary Log Coordinates


mysql> SHOW SLAVE STATUS\G
*************************** 1.
Slave_IO_State:
Master_Host:
Master_User:
Master_Port:
Connect_Retry:
Master_Log_File:
Read_Master_Log_Pos:
Relay_Log_File:
Relay_Log_Pos:
Relay_Master_Log_File:
Slave_IO_Running:
Slave_SQL_Running:
. . . . . . . . . . . . . . .:
Last_Errno: 0 Last_Error:
Skip_Counter:
Exec_Master_Log_Pos:
Relay_Log_Space:
Until_Condition:
Until_Log_File:
Until_Log_Pos:
Master_SSL_Allowed:
. . . . . . . . . . . . . . .:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert:
Last_IO_Errno:
Last_IO_Error:
Last_SQL_Errno:
Last_SQL_Error:
1 row in set (0.00 sec)

row ***************************
Waiting for master to send event
127.0.0.1
msandbox
26768
60
mysql-bin.000001 (IO Thread reads this file)
4723 (Position in master binary log file where IO Thread has read to)
mysqld-relay-bin.000001
874 (Position in the relay log file where SQL thread read and executed events
mysql-relay-bin.000001
Yes
Yes
. . . . . .
0
729 (Position in master binary log file that SQL Thread read and executed up to
1042 The total combined size of all existing relay log files
None
0
No
. . .

Coordinates usage examples:

No
0

Connect to the master using masters binary log


slave> CHANGE MASTER TO
MASTER_HOST='12.34.56.789',MASTER_USER='slave_user',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4723;

Failover
Master

Slave1

Slave2

11/11/2013

Master
Crashed!

Slave1

Connect to the new master/old slave using slaves relay log


slave> CHANGE MASTER TO
MASTER_HOST='12.34.56.789',MASTER_USER='slave_user',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-relay-bin.000001', MASTER_LOG_POS=729;

Slave2

Slide 34

What is the Kiva Mobile-Robotic Fulfillment System?


  o Internet order fulfillment operation is the core of the Amazon business
  o Kiva uses hundreds of mobile robotic drive units to bring inventory on mobile shelves directly to workers, allowing access to all inventory items at all times
  o Kiva software is integrated with the client's enterprise systems, including warehouse management systems (WMS), order management systems (OMS), and enterprise resource planning systems (ERP)

[Diagram: ERP, WMS, OMS and other enterprise systems connect through Client Agents to the Core Local Software (Inventory Management, Transaction Processing, Equipment Control) and its API; Station Agents and Drive Unit Agents connect to the Core Local Hardware; Maintenance, Product, Plant and Admin Managers access the system]

Current Kiva software runs locally
  o Database server HA is provided by the local SAN storage and RedHat Cluster

How do we provide Database Server High Availability when the Kiva software and hardware run in the Cloud?
11/11/2013

Slide number 35

RedHat Cluster and Master/Slave Nodes


RedHat Cluster monitors the Master and Slave processes on the Master and Slave nodes

[Diagram: RedHat Cluster issues "service master [status|start|stop]" on Node1 (Master), which runs the Master and a Snap1 Slave on ZFS, and "service slave [status|start|stop]" on Node2 (Slave), which runs the Failover slave and a Snap2 Slave on ZFS]

We use ZFS snapshots for the Master and Slave backups
  o We must have a Slave node to make ZFS snapshots
  o The Master server cannot be stopped without stopping the whole warehouse system

Slide number 36

Cloud HA Solution with GTID Replication


[Diagram: node1 and node2 each run RedHat Cluster and exchange heartbeats.
 Main master-slave GTID replication (port 3306) runs from the Main master on node1 to the Main slave on node2.
 Each node also runs a snap slave (port 3307) that replicates from the main master (port 3306).
 SSD1 on node1 and SSD2 on node2 each hold a Main master database and a Snap slave database.
 ZFS snapshots are taken on both nodes and shipped with ZFS send; each node keeps a State File (node1 State File, node2 State File).]

11/11/2013

Slide number 37

Cloud HA Solution with GTID Replication


[Same diagram as on the previous slide.]

11/11/2013

Slide number 38

Services, Nodes, Network and Corruption Failover Scenarios


[Diagram: real, res and arch services distributed across node1, node2 and node3 under RedHat Cluster control, with failover arrows between the nodes]

1) Service failures:
   - node1 real service failure: service moves to node2
   - node2 res service failure: service moves to node3
   - node1 arch service failure: service moves to node3
   - node3 (slave) service failure: service restarts
   - node3 (master) service failure: service moves to node2 or node1

2) Node failures:
   - node1 failure: services move to node2 and node3
   - node2 failure: services move to node3
   - node3 (slave) failure: restart node3
   - node3 (master) failure: services move to node1 or node2

3) Network failures:
   - node1 network failure: services move to node2 and node3
   - node2 network failure: services move to node3
   - node3 (slave) network failure: restart services
   - node3 (master) network failure: services move to node2 or node3

4) Server data corruption:
   - node1 master: get the snapshot from the snap slave
   - node2 master: get the snapshot from the snap slave
   - node3 master: get the snapshot from the snap slave

11/11/2013

Slide number 39

General Architecture of the System Failover


[Diagram: node1 runs the Master and a Snap1 Slave; node2 runs the Failover Slave and a Snap2 Slave; the Application talks to the Master. GTID replication fans out from the Master to the Snap1 Slave, the Failover Slave and the Snap2 Slave. Both nodes use SSD storage, and ZFS snapshots are acquired using the snap slave on node1.]

MySQL settings used on the nodes:
  sync_binlog = 1
  sync_master_info = 1
  sync_relay_log = 1
  innodb_support_xa = 1
  master_info_repository = TABLE
  relay_log_info_repository = TABLE
  log-slave-updates = TRUE

11/11/2013

Slide number 40

Master (node1) works with snap slave(node1) and failover slave(node2)


[Diagram: on node1 the Master replicates to the Snap1 Slave, which takes a ZFS snapshot every minute; on node2 the Failover Slave and the Snap2 Slave replicate from the Master, and the Snap2 Slave also takes a ZFS snapshot every minute. At some point the Master crashes.]

Master works with the snap slave and the failover slave:
  1. Snap1 slave and failover slave replicate from the master
  2. Snap1 slave takes ZFS snapshots every minute
  3. Failover slave has ZFS snapshots every few hours
  4. Master has a ZFS snapshot every few days
  5. Snap2 slave takes ZFS snapshots every minute (symmetrical to snap1)

MySQL settings on every server (repeated for each server in the diagram):
  sync_binlog = 1
  sync_master_info = 1
  sync_relay_log = 1
  innodb_support_xa = 1
  master_info_repository = TABLE
  relay_log_info_repository = TABLE
  log-slave-updates = TRUE

11/11/2013

Slide number 41

Recovery after master (node1) failure: node1 continues to work, failover slave works
as master

[Diagram: the Master on node1 crashes; the Snap1 Slave on node1 keeps taking ZFS snapshots every minute; on node2 the former Failover Slave now acts as Master, with the Snap2 Slave replicating from it and taking ZFS snapshots every minute.]

Failover steps in case of the master crash:
  1. Master crashes, but node1 continues to work
  2. Application is redirected to node2 and the failover slave becomes the new master
  3. The latest ZFS snapshot replaces the old master data directory
  4. Snap1 slave replicates from node2, catching up with the new master
  5. Snap2 slave continues to get replicated data from the new master
11/11/2013

Slide number 42

Recovery after master (node1) failure, node1 continues to work, failover slave works
as master
[Diagram: on node1 the recovered old master now acts as the new snap slave, taking ZFS snapshots every minute, while the old Snap1 Slave replicates alongside it; on node2 the new Master serves the Snap2 Slave, which takes ZFS snapshots every minute.]

Recovery steps in case of the master crash:
  1. Old master recovers and catches up with the new master using data from the old snap1 slave
  2. Old snap1 slave becomes the new failover slave
  3. Old master becomes the new snap1 slave, making frequent ZFS snapshots
  4. In case of a new master crash, the new failover slave becomes the new master
  5. node2 is supposed to have the same architecture as node1, with a node2 snap slave
11/11/2013

Slide number 43

Backup

11/11/2013

Slide number 44

Backup

11/11/2013

Slide number 45

Replication Data Files


master.info

  Line | SHOW SLAVE STATUS Column                  | Description
   1   |                                           | Number of lines in the file
   2   | Master_Log_File                           | The name of the master binary log currently being read from the master
   3   | Read_Master_Log_Pos                       | The current position within the master binary log that has been read from the master
   4   | Master_Host                               | The host name of the master
   5   | Master_User                               | The user name used to connect to the master
   6   | Password (not shown by SHOW SLAVE STATUS) | The password used to connect to the master
   7   | Master_Port                               | The network port used to connect to the master
   8   | Connect_Retry                             | The period (in seconds) that the slave will wait before trying to reconnect to the master
   9   | Master_SSL_Allowed                        | Indicates whether the server supports SSL connections
  10   | Master_SSL_CA_File                        | The file used for the Certificate Authority (CA) certificate
  11   | Master_SSL_CA_Path                        | The path to the Certificate Authority (CA) certificates
  12   | Master_SSL_Cert                           | The name of the SSL certificate file
  13   | Master_SSL_Cipher                         | The list of possible ciphers used in the handshake for the SSL connection
  14   | Master_SSL_Key                            | The name of the SSL key file
  15   | Master_SSL_Verify_Server_Cert             | Whether to verify the server certificate
  17   | Replicate_Ignore_Server_Ids               | The number of server IDs to be ignored, followed by the actual server IDs

relay-log.info

  Line | SHOW SLAVE STATUS Column | Description
   1   | Relay_Log_File           | The name of the current relay log file
   2   | Relay_Log_Pos            | The current position within the relay log file; events up to this position have been executed on the slave database
   3   | Relay_Master_Log_File    | The name of the master binary log file from which the events in the relay log file were read
   4   | Exec_Master_Log_Pos      | The equivalent position within the master's binary log file of events that have already been executed

11/11/2013

Slide number 46

How Replication Worked Before 5.6?

11/11/2013

Slide number 47

How Replication Worked Before 5.6?

11/11/2013

Slide number 48
