GTID-based Replication for MySQL High Availability
Jacob Nikom
Slide 1
Outline
- What is GTID?
- How to Configure GTID Replication?
- GTID Replication Basics
- Coordinate-Based Replication Failover
- GTID Replication Failover
- Summary
Slide number 2
Slide number 3
A continuously available system does not allow planned outages, essentially supporting no-downtime operations
A fault tolerant system has no service interruption in case of a component failure (higher solution cost)
A highly available system has a minimal service interruption (lower solution cost)
Slide number 4
System Failures
Causes:
o Hardware faults
o Software bugs or crashes
o Physical disasters
o Scheduled maintenance
o User errors
o Operating environment, performance, replication, data loss & corruption
[Chart: distribution of failure causes]
Effect:
- Service unavailability
- Bad response time
Impact:
- Revenue loss
- Poor customer relationships
- Reduced employee productivity
- Regulatory issues
Slide number 5
Availability    Downtime per year   Downtime per month   Downtime per week
90%             36.5 days           72 hours             16.8 hours
99%             3.65 days           7.20 hours           1.68 hours
99.9%           8.76 hours          43.8 minutes         10.1 minutes
99.99%          52.56 minutes       4.32 minutes         1.01 minutes
99.999%         5.26 minutes        25.9 seconds         6.05 seconds
99.9999%        31.5 seconds        2.59 seconds         0.605 seconds
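The downtime figures follow directly from the availability percentage: for three nines, (1 - 0.999) x 8,760 hours/year = 8.76 hours of downtime per year, and (1 - 0.999) x 168 hours/week = 0.168 hours, i.e. about 10.1 minutes per week.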
Slide number 6
There are two ways to combine components:
- The first way (components in series) is less efficient: the reliability of the system will be lower than the reliability of any individual component
- The second way (redundant components) is more efficient: the reliability of the system will be higher than the reliability of any individual component

Removing Single Points of Failure (SPOF)
Component       Technique
Storage         RAID
Servers         Clustering
Power Supply    UPS
Network         Redundant routers
Location        Another Data Center

Why highly available databases are hard:
1. High availability databases are essentially real-time systems (RTS), sometimes even distributed RTS. Such systems are traditionally very difficult to deal with.
2. Real-time data processing functionality (caches and dirty data logging) forces tight coupling between software and hardware components. Therefore software redundancy requires redundancy of the corresponding hardware as well.
3. Real-time consistency between data stored on redundant components requires continuous and instantaneous synchronization. This is difficult to implement without significant overhead.
Slide number 7
HA Feature comparison (MySQL Replication vs. DRBD vs. MySQL Cluster/NDB):
Platform Support:           DRBD - Linux
Supported Storage Engine:   MySQL Replication - InnoDB (transactionality required for GTIDs); DRBD - InnoDB; MySQL Cluster - NDB
Automatic Failover:         DRBD - Yes (in about 1 minute with InnoDB log files of about 100 MB)
Failover Time:              DRBD - 5 seconds + InnoDB recovery time; MySQL Cluster - 1 second or less
Replication Mode:           MySQL Replication - Asynchronous + Semi-synchronous; DRBD - Synchronous; MySQL Cluster - Asynchronous + Synchronous
Number of Nodes:            Active/Passive Master + Multiple Slaves
Availability Level:         MySQL Replication - 99.9%; DRBD - 99.99%; MySQL Cluster - 99.999%
Slide number 8
MySQL Replication (before 5.6)
Advantages:
- Simple and inexpensive
- Extends existing database architecture
- All the servers can be used, no idle standby
- Supports MyISAM
- Caches on the failover slave are not cold
- Online schema changes
- Low-impact backups
- 99.9% availability

DRBD
Advantages:
- No data loss
- Much higher write capacity
- No SPOF with DRBD
- Provides high availability and data integrity across two servers in the event of hardware or system failure
- Ensures data integrity by enforcing write consistency on the primary and secondary nodes
- 99.99% availability

MySQL NDB Cluster
Slide number 9
Available Features
o Semi-sync replication
o Replication Heartbeats
o RBR type conversion
o Crash-safe Slaves
o Global Transaction Ids
o Replication Event Checksums
o Binary Log Group Commit
o Multi-threaded Slaves
o RBR enhanced
MySQL Utilities 1.3, GA in August 2013

[Timeline: MySQL 3.23, 4.0, 4.1, 5.0, 5.1 and 5.6 releases spanning 2001-2013]
Slide number 10
Replication improvements in MySQL 5.6:
- Server UUIDs
- Data Integrity
- Performance: Multi-Threaded Slaves
- Database Operations: Replication Utilities, Time-Delayed Replication
Slide number 11
[Diagram: MySQL replication data flow. The client issues a commit on the master; the master executes it, writes the transaction to its binary logs (listed in mysqld-bin.index) and returns to the client. The master's Dump Thread streams binary log events to the slave's IO Thread, which writes them to the relay logs (listed in mysqld-relay-bin.index) and records its read position in master.info. The slave's SQL Thread executes events from the relay logs and records its progress in relay-log.info.]
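The moving parts in the diagram can be observed directly; a minimal sketch of the commands involved (output omitted):

master> SHOW MASTER STATUS;     (current binary log file and position on the master)
master> SHOW PROCESSLIST;       (the Dump Thread appears as the "Binlog Dump" command)
slave>  SHOW SLAVE STATUS\G     (IO Thread and SQL Thread state, read and executed positions)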
Slide number 12
[root@node1 data]# ls -l
-rw-r----- 1 mysql mysql 144703488 Oct 27 19:47 ibdata1
-rw-r----- 1 mysql mysql  67108864 Oct 27 19:47 ib_logfile0
-rw-r----- 1 mysql mysql  67108864 Oct 27 19:47 ib_logfile1
-rw-rw---- 1 mysql mysql        60 Oct 22 22:31 master.info
drwx------ 2 mysql mysql        81 May 20 23:21 mysql
-rw-rw---- 1 mysql mysql         6 Oct 22 22:31 mysqld.pids
-rw-rw---- 1 mysql mysql       205 Oct 22 22:31 mysqld-relay-bin.000001
-rw-rw---- 1 mysql mysql       526 Oct 22 22:33 mysqld-relay-bin.000002
-rw-rw---- 1 mysql mysql        52 Oct 22 22:31 mysqld-relay-bin.index
-rw-r----- 1 mysql root      11309 Oct 22 22:31 mysql-error.err
-rw-rw---- 1 mysql mysql        58 Oct 22 22:31 relay-log.info
drwx------ 2 mysql mysql        55 May 20 23:21 performance_schema
drwx------ 2 mysql mysql         2 Oct 22 22:33 test

File mysqld-bin.index:
/usr/local/mysql/data/mysqld-bin.000001
/usr/local/mysql/data/mysqld-bin.000002

File mysqld-relay-bin.index:
/usr/local/mysql/data/mysqld-relay-bin.000001
/usr/local/mysql/data/mysqld-relay-bin.000002
[Diagram: binary log content without GTIDs. Each transactional group is written as BEGIN, Ev1, Ev2, ..., COMMIT, and every event carries the server_id of the originating server.]
Slide number 13
relay-log.info
1 ./mysqld-relay-bin.000001
2 874
3 mysql-bin.000001
4 729
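In MySQL 5.6 the same coordinates can be kept crash-safe in a table instead of a file (relay_log_info_repository = TABLE, used later in this deck); a minimal sketch, assuming that setting is in effect:

slave> SELECT * FROM mysql.slave_relay_log_info\G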
Slide number 14
Slide number 15
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
          Slave_IO_State: Waiting for master to send event
             Master_Host: 127.0.0.1
             Master_User: master_user
             Master_Port: 26768
           Connect_Retry: 60
         Master_Log_File: mysql-bin.000001          (IO Thread reads this file)
     Read_Master_Log_Pos: 4723                      (position in the master binary log the IO Thread has read to)
          Relay_Log_File: mysqld-relay-bin.000001
           Relay_Log_Pos: 874                       (position in the relay log the SQL Thread has read and executed to)
   Relay_Master_Log_File: mysql-bin.000001
        Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
                   . . .
              Last_Errno: 0
     Exec_Master_Log_Pos: 729                       (position in the master binary log the SQL Thread has executed up to)
         Relay_Log_Space: 1042                      (total combined size of all existing relay log files)
         Until_Condition: None
                   . . .
Failover
[Diagram: Master replicating to Slave1 and Slave2; the Master crashes and one of the slaves must be promoted to become the new master.]
Slide 16
Advantages
Drawbacks
- Additional complexity
- Incompatibility with the existing coordinate-based replication solution
Slide number 17
What is GTID?
Where does the GTID come from?
# ls -l /usr/local/mysql/data
total 537180
-rw-r----- 1 mysql mysql        56 Oct 17 10:49 auto.cnf
drwx------ 2 mysql mysql      4096 Oct 17 10:49 bench/
-rw-r----- 1 mysql mysql 348127232 Oct 17 11:58 ibdata1
-rw-rw---- 1 mysql mysql 100663296 Oct 17 11:58 ib_logfile0
-rw-rw---- 1 mysql mysql 100663296 Oct 17 11:24 ib_logfile1
drwx------ 2 mysql mysql     32768 Oct 17 10:55 mhs/
drwx------ 2 mysql mysql      4096 Oct 17 10:49 mysql/
-rw-rw---- 1 mysql mysql         6 Oct 17 11:58 mysqld.pids
-rw-r----- 1 mysql root       9131 Oct 17 11:58 mysql-error.err
drwx------ 2 mysql mysql      4096 Oct 17 10:49 performance_schema/
drwxr-xr-x 2 mysql mysql      4096 Oct 17 10:49 test/
Example GTID: 965d996a-fea7-11e2-ba15-001e4fb6d589:1 (server UUID : transaction sequence number)
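The first half of the GTID is the server_uuid that mysqld generates at first start-up and stores in auto.cnf (the 56-byte file in the listing above); a minimal way to inspect it - the value shown here is inferred from the GTID above:

# cat /usr/local/mysql/data/auto.cnf
[auto]
server-uuid=965d996a-fea7-11e2-ba15-001e4fb6d589

mysql> SELECT @@GLOBAL.server_uuid;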
[Diagram: binary log content with GTIDs enabled. Each transactional group - BEGIN, Ev1, Ev2, ..., COMMIT - is now preceded by its GTID.]
Slide number 18
gtid_mode
- It can be ON or OFF (not 1 or 0)
- It enables GTIDs on the server

log_bin (already existed)
- Enables the binary log
- Mandatory for creating a replication setup

log-slave-updates
- Slave servers must log the changes they apply
- Needed for server promotion/demotion

enforce-gtid-consistency
- Forces the server to be safe by using only transactional tables
- Non-transactional statements are denied by the server
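A minimal my.cnf sketch combining these options (server-id and the log file base name are example values, not taken from this deck):

[mysqld]
server-id                = 1
log-bin                  = mysqld-bin
log-slave-updates        = TRUE
gtid-mode                = ON
enforce-gtid-consistency = TRUE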
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: node1
                  Master_User: root
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000002
          Read_Master_Log_Pos: 354
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 526
        Relay_Master_Log_File: mysqld-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                   Last_Errno: 0
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 354
              Relay_Log_Space: 731
              Until_Condition: None
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
               Last_SQL_Errno: 0
             Master_Server_Id: 28
                  Master_UUID: b9ff49a4-3b50-11e3-85a5-12313d2d286c
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread
           Master_Retry_Count: 86400
           Retrieved_Gtid_Set: b9ff49a4-3b50-11e3-85a5-12313d2d286c:2
            Executed_Gtid_Set: b9ff49a4-3b50-11e3-85a5-12313d2d286c:1-2
                Auto_Position: 1
Slide number 19
[Diagram: Server, Database and Binary log; the binary log can contain transactions identified by GTIDs from more than one server, e.g.:]
0EB3E4DB-4C31-42E6-9F55-EEBBD608511C:1
0EB3E4DB-4C31-42E6-9F55-EEBBD608511C:2
4D8B564F-03F4-4975-856A-0E65C3105328:1
0EB3E4DB-4C31-42E6-9F55-EEBBD608511C:3
4D8B564F-03F4-4975-856A-0E65C3105328:2
Slide number 20
master> SELECT @@GLOBAL.GTID_EXECUTED;
+-------------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                          |
+-------------------------------------------------+
| 4D8B564F-03F4-4975-856A-0E65C3105328:1-1000000  |
+-------------------------------------------------+

slave> SELECT @@GLOBAL.GTID_EXECUTED;
+------------------------------------------------+
| @@GLOBAL.GTID_EXECUTED                         |
+------------------------------------------------+
| 4D8B564F-03F4-4975-856A-0E65C3105328:1-999999  |
+------------------------------------------------+

It is easy to find which transactions the slave is missing:
- When the slave connects to the master, it sends the range of GTIDs that it has executed and committed
- In response the master sends all other transactions, i.e. those that the slave has not yet executed
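The difference between the two sets can also be computed explicitly with the GTID_SUBTRACT() function built into MySQL 5.6, using the values shown above:

master> SELECT GTID_SUBTRACT('4D8B564F-03F4-4975-856A-0E65C3105328:1-1000000',
                             '4D8B564F-03F4-4975-856A-0E65C3105328:1-999999');
        (returns 4D8B564F-03F4-4975-856A-0E65C3105328:1000000 - the one transaction the slave is missing)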
[Diagram: the master's binary log contains transactions Id1:Trx1, Id2:Trx2 and Id3:Trx3; the slave's binary log contains only Id1:Trx1 and Id2:Trx2, so the master sends the missing Id3:Trx3 to the slave.]
The SQL command that tells the slave to use the new protocol is: CHANGE MASTER TO MASTER_AUTO_POSITION = 1;
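A fuller sketch of pointing a slave at its master with auto-positioning (host, user and password are placeholder values):

slave> CHANGE MASTER TO
    ->   MASTER_HOST = 'node1',
    ->   MASTER_PORT = 3306,
    ->   MASTER_USER = 'repl_user',
    ->   MASTER_PASSWORD = 'repl_pass',
    ->   MASTER_AUTO_POSITION = 1;
slave> START SLAVE;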
Slide number 21
Coordinate-based failover:
1. Find the new master's binary log coordinates (file name and position) with the SHOW MASTER STATUS command
2. Switch the remaining slaves to the new master with CHANGE MASTER TO MASTER_HOST = ..., using the new master's binary log coordinates (a command sketch follows the diagram below)
SHOW MASTER STATUS output:
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 12345    |              |                  |
+------------------+----------+--------------+------------------+

[Diagram: Client, Master and three slaves, each at a different point in the replication stream -
Slave1: binary log file mysql-bin.000007, position 345
Slave2: binary log file mysql-bin.000006, position 23456
Slave3: relay log file mysql-relay-bin.000008, position 5678]
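Putting the two steps together on one of the remaining slaves, using the file and position reported by SHOW MASTER STATUS above (host and credentials are placeholder values):

slave2> STOP SLAVE;
slave2> CHANGE MASTER TO
     ->   MASTER_HOST = 'new_master_host',
     ->   MASTER_USER = 'repl_user',
     ->   MASTER_PASSWORD = 'repl_pass',
     ->   MASTER_LOG_FILE = 'mysql-bin.000002',
     ->   MASTER_LOG_POS = 12345;
slave2> START SLAVE;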
Slide number 22
[Diagram: Client, Master and three slaves; with GTIDs each slave's position is simply its executed GTID set -
Slave1: Executed_Gtid_Set: 5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-500
Slave2: Executed_Gtid_Set: 5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-400
Slave3: Executed_Gtid_Set: 5ffd0c1b-cd65-12c4-21b2-ab91a9429562:1-300]
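With GTIDs there is no need to look up file names and positions: after the most up-to-date slave (Slave1 here) is promoted, the remaining slaves are simply repointed and auto-positioning fetches whatever they are missing. A minimal sketch (host and credentials are placeholder values):

slave2> STOP SLAVE;
slave2> CHANGE MASTER TO
     ->   MASTER_HOST = 'slave1_host',
     ->   MASTER_USER = 'repl_user',
     ->   MASTER_PASSWORD = 'repl_pass',
     ->   MASTER_AUTO_POSITION = 1;
slave2> START SLAVE;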
Slide number 23
Country     Region         State/City   AZ
USA         US-West        Oregon       A,B
USA         US-West        California   A,B,C
USA         US-East        Virginia     A,B,C,D,E
Brazil      San Paulo      San Paulo    A,B
Ireland     EU             Dublin       A,B,C
Japan       Asia-Pacific   Tokyo        A,B
Singapore   Asia-Pacific   Singapore    A,B
(07-24-2012)

Connection points          Connectivity Type   Average Latencies [1], [2]
Region-to-Another-Region   WAN                 100 - 500 ms
AZ-to-Another-AZ           LAN                 10 - 50 ms
AZ-to-Same-AZ              LAN                 2 - 10 ms
Slide number 24
EC2 instance
[Diagram: EC2 instance with AMI and directly attached ephemeral storage.]
Definition: An EC2 instance is a server running the MHS application, launched from Amazon Machine Image (AMI) software.
Properties:
1. The server can fail due to its own hardware problems or due to an AZ outage.
2. Performance varies up to 60% between instances of the same type.

RDS (network-attached storage)
Definition: RDS is an instance of MySQL server running on an EC2 platform. Persistent storage (for backups, etc.) is allocated in EBS volumes. However, you can neither access the underlying EC2 instance nor use S3 to access your stored database snapshots. Since you do not get access to the native EC2 instance, you cannot install additional software on the MySQL host.
Multi-AZ deployment - RDS automatically provisions and manages a standby replica in a different AZ. Database updates are made synchronously on the primary and standby resources to prevent replication lag. In the event of planned database maintenance, DB Instance failure, or an AZ failure, RDS automatically fails over to the up-to-date standby so that database operations can resume quickly without administrative intervention. Prior to failover you cannot directly access the standby, and it cannot be used to serve read traffic.
Read Replicas - You can create replicas of a given source DB Instance that serve high-volume application read traffic from multiple copies of your data. RDS uses MySQL's asynchronous replication to propagate changes made to a source DB Instance to any associated Read Replicas.
Price: $0.4 - $0.8 per hour
Slide number 25
EBS
Definition: EBS provides block-level storage volumes for use with EC2 instances. The volumes are network-attached and persist independently from the life of the instance they are attached to.
Performance: HDD 0.1 ms; network latency 2 ms
Properties:
1. Good for short- and medium-term persistence
2. Performance varies unless Provisioned IOPS volumes and optimized instances are used (not available for all instance types)
3. Throughput is shared with other instances
4. Relies on the network for its access
5. After an accidental reboot, such as a power outage, the content of the storage remains intact; however, availability could be impacted by the network being overloaded with multi-tenant recoveries

EBS provisions a specific level of I/O performance when a Provisioned IOPS volume is chosen. EBS volumes live in one Availability Zone (AZ) and can only be attached to instances in that same AZ. Each storage volume is automatically replicated within the same AZ. EBS can create point-in-time snapshots of volumes, which are persisted to S3.
CloudWatch shows performance metrics for EBS volumes: bandwidth, throughput, latency, and queue depth.
Price: $0.10 per GB-month of provisioned storage; $0.10 per 1 million I/O requests

S3
Definition: S3 provides a simple web interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web (multiple-AZ storage). You can write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited. Each object is stored in a bucket and retrieved via a unique, developer-assigned key.
Price: $0.1 per GB-month
Slide number 26
Failure Type          Probability   Mitigation Plan
Application Failure   High
Power Outage          Low
Network Outage        Medium
S3 Failure            Low

Power Outage consequences:
1. Instances lost
2. Ephemeral storage unavailable; readily available after power restoration
3. EBS storage unavailable; not readily available after power restoration

Network Outage consequences:
1. Instances unavailable
2. Ephemeral storage unavailable
3. EBS storage unavailable; could be not readily available after network restoration

[1] http://www.slideshare.net/adrianco/high-availability-architecture-at-netflix
[2] http://readwrite.com/2011/04/25/almost-as-galling-as-the#awesm=~ommLY1YhK9eiOz
Slide number 27
[Diagram: proposed deployment. Node1 (Master) in Availability Zone 1 runs the master and the Snap1 slave on SSD with ZFS; Node2 (Slave) in Availability Zone 2 runs the failover slave and the Snap2 slave on SSD with ZFS; Node3 runs the application. GTID replication connects the master to the slaves.]

Slave settings:
sync_master_info = 1
sync_relay_log = 1
innodb_support_xa = 1
master_info_repository = TABLE
relay_log_info_repository = TABLE
log-slave-updates = TRUE

Failover cases
1) Service failures:
   - Node1 master process failure - service moves to node2
   - Node1 slave process failure - service restarts
2) Node failures:
   - node1 failure - services move to node2
   - node2 failure - restart node2
3) Network failures:
   - node1 network failure - services move to node2
   - node2 (slave) network failure - restart services
Failover test steps include:
- Crash master
- Crash application
- Promote main slave to new master (see the sketch after this list)
- Restart application and point it to the new master
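For the promotion step, MySQL Utilities 1.3 (GA in August 2013, mentioned earlier) ships a GTID-aware failover tool; a minimal sketch, with hosts and credentials as placeholder values rather than the actual test setup:

shell> mysqlfailover --master=repl_user:repl_pass@node1:3306 \
                     --discover-slaves-login=repl_user:repl_pass \
                     --failover-mode=auto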
Slide number 29
Summary
Slide number 30
Backup
Slide number 31
The first production release of MySQL 5.6 (5.6.10) had 40% more bugs than the first production release of MySQL 5.5 (5.5.9)
The number of bugs in subsequent MySQL 5.6 releases was significantly higher than in the first production release; for 5.5 the situation was different
The number of bugs in 5.6 is still significantly higher than it was for 5.5 at the same stage
Slide number 32
The number of improvements in subsequent MySQL 5.6 releases was very similar to that of subsequent MySQL 5.5 releases
Slide 33
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
          Slave_IO_State: Waiting for master to send event
             Master_Host: 127.0.0.1
             Master_User: msandbox
             Master_Port: 26768
           Connect_Retry: 60
         Master_Log_File: mysql-bin.000001          (IO Thread reads this file)
     Read_Master_Log_Pos: 4723                      (position in the master binary log the IO Thread has read to)
          Relay_Log_File: mysqld-relay-bin.000001
           Relay_Log_Pos: 874                       (position in the relay log the SQL Thread has read and executed to)
   Relay_Master_Log_File: mysql-bin.000001
        Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
                   . . .
              Last_Errno: 0
     Exec_Master_Log_Pos: 729                       (position in the master binary log the SQL Thread has executed up to)
         Relay_Log_Space: 1042                      (total combined size of all existing relay log files)
         Until_Condition: None
                   . . .
0
Failover
Master
Slave1
Slave2
11/11/2013
Master
Crashed!
Slave1
Slave2
Slide 34
[Diagram: MHS software context - ERP, WMS, OMS and other systems; Product Manager, Plant Manager, Inventory Management, Transaction Processing, Equipment Control, Station Agents; API.]

How to provide Database Server High Availability when Kiva software and hardware run in the Cloud?
Slide number 35
RedHat Cluster
service master [status|start|stop]
[Diagram: Node1 (Master) runs the master and the Snap1 slave on ZFS; Node2 (Slave) runs the failover slave and the Snap2 slave on ZFS.]
Slide number 36
RedHat Cluster with heartbeat between node1 and node2
[Diagram: node1 runs the main master (port 3306) and a master snap instance (port 3307); node2 runs the main slave (port 3306) and a slave snap instance (port 3307). GTID replication (port 3306) links the main master to the main slave. On SSD1, ZFS snapshots of the main master database are sent (ZFS send) to the snap slave database; SSD2 holds the same ZFS snapshot arrangement for node2.]
Slide number 37
Slide number 38
[Diagram: RedHat Cluster configuration across node1, node2 and node3 - failover domains, resources and services.]

Failover cases: 1) Service failures  2) Node failures  3) Network failures
Slide number 39
[Diagram: node1 runs the master and the Snap1 slave; node2 runs the failover slave and the Snap2 slave; the application connects to the master. GTID replication feeds all slaves, each node stores data on SSD, and every instance takes ZFS snapshots.]

Slave settings:
sync_master_info = 1
sync_relay_log = 1
innodb_support_xa = 1
master_info_repository = TABLE
relay_log_info_repository = TABLE
log-slave-updates = TRUE
Slide number 40
[Diagram: timeline of ZFS snapshots on node1 (master and Snap1 slave) and node2 (failover slave and Snap2 slave) leading up to a master crash; the snap slaves snapshot every minute.]

Master works with snap slave and failover slave:
1. Snap1 slave and failover slave replicate from the master
2. Snap1 slave takes ZFS snapshots every minute
3. Failover slave has ZFS snapshots every few hours
4. Master has ZFS snapshots every few days
5. Snap2 slave takes ZFS snapshots every minute (symmetrical to Snap1)

Settings on each instance:
sync_binlog = 1
sync_master_info = 1
sync_relay_log = 1
innodb_support_xa = 1
master_info_repository = TABLE
relay_log_info_repository = TABLE
log-slave-updates = TRUE
Slide number 41
Recovery after master (node1) failure: node1 continues to work, the failover slave works as the master

[Diagram: the master on node1 crashes; the Snap1 slave on node1 keeps replicating and taking ZFS snapshots every minute; on node2 the former failover slave now acts as the master, with the Snap2 slave still snapshotting every minute.]
Slide number 42
Recovery after master (node1) failure: node1 continues to work, the failover slave works as the master

[Diagram: node1 now hosts the new snap slave (the recovered old master) and the old Snap1 slave; node2 hosts the new master and the Snap2 slave; all slaves replicate from the new master and keep taking ZFS snapshots every minute.]
Recovery steps in case of the master crash:
1. The old master recovers and catches up with the new master using data from the old Snap1 slave (see the sketch after this list)
2. The old Snap1 slave becomes the new failover slave
3. The old master becomes the new Snap1 slave, making frequent ZFS snapshots
4. In case of a new master crash, the new failover slave becomes the new master
5. node2 is supposed to have the same architecture as node1, with its own node2 snap slave
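Step 1 is where GTIDs pay off: once node1 is restored from the Snap1 slave's data, the old master can be re-attached below the new master (on node2) without hunting for binary log coordinates. A minimal sketch, with user and password as placeholder values:

old_master> CHANGE MASTER TO
         ->   MASTER_HOST = 'node2',
         ->   MASTER_USER = 'repl_user',
         ->   MASTER_PASSWORD = 'repl_pass',
         ->   MASTER_AUTO_POSITION = 1;
old_master> START SLAVE;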
Slide number 43
Backup
Slide number 44
Backup
Slide number 45
master.info
Line #   Field                           Description
1                                        Number of lines in the file
         Master_Log_File                 The name of the master binary log currently being read from the master
         Read_Master_Log_Pos             The current position within the master binary log that has been read from the master
         Master_Host
         Master_User
         Master_Port
         Connect_Retry
         Master_SSL_Allowed
10       Master_SSL_CA_File
11       Master_SSL_CA_Path
12       Master_SSL_Cert
13       Master_SSL_Cipher
14       Master_SSL_Key
15       Master_SSL_Verify_Server_Cert
17       Replicate_Ignore_Server_Ids

relay-log.info
Line #   Field                   Description
1        Relay_Log_File
2        Relay_Log_Pos           The current position within the relay log file; events up to this position have been executed on the slave database
3        Relay_Master_Log_File   The name of the master binary log file from which the events in the relay log file were read
4        Exec_Master_Log_Pos     The equivalent position within the master's binary log file of events that have already been executed
Slide number 46