diff --git a/doc/multimaster.xml b/doc/multimaster.xml
index 30902b2331..4f0c5c35b7 100644
--- a/doc/multimaster.xml
+++ b/doc/multimaster.xml
@@ -58,7 +58,7 @@
typically need more than five cluster nodes. Three cluster nodes are
enough to ensure high availability in most cases.
There is also a special 2+1 (referee) mode in which 2 nodes hold data and
- an additional one called referee only participates in voting. Compared to traditional three
+ an additional one, called the referee, only participates in voting. Compared to a traditional
three-node setup, this is cheaper (the referee's resource demands are low) but availability
is decreased. For details, see .
@@ -200,7 +200,7 @@
multimaster uses
logical replication
and the two-phase commit protocol with the transaction outcome determined by
- Paxos consensus algorithm.
+ the Paxos consensus algorithm.
When PostgreSQL loads the multimaster shared
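Since every commit here runs through two-phase commit, a transaction whose outcome is still being decided shows up as a prepared transaction. A minimal way to spot such in-doubt transactions, using only the standard pg_prepared_xacts view (nothing multimaster-specific is assumed):

    -- list prepared (in-doubt) transactions awaiting resolution
    SELECT gid, prepared, owner, database FROM pg_prepared_xacts;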
@@ -318,9 +318,9 @@
integrity, the decision to exclude or add back node(s) must be taken
coherently. Generations, which represent a subset of the
nodes currently presumed to be alive, serve this
- purpose. Technically, generation is a pair <n, members>
- where n is unique number and
- members is subset of configured nodes. A node always
+ purpose. Technically, a generation is a pair <n, members>,
+ where n is a unique number and
+ members is a subset of the configured nodes. A node always
lives in some generation and switches to the one with a higher number as soon
as it learns about its existence; generation numbers act as logical
clocks/terms/epochs here. Each transaction is stamped during commit with
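For example (the numbers are purely illustrative): in a three-node cluster where node 3 has become unreachable, nodes 1 and 2 can switch to generation <8, {1, 2}>; transactions stamped with it only need acknowledgement from nodes 1 and 2, and node 3 will have to catch up in recovery before it can join a later generation.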
@@ -331,15 +331,15 @@
resides in a generation in one of three states (shown by mtm.status(); see the example after this list):
- ONLINE: node is member of the generation and
- making transactions normally;
+ ONLINE: the node is a member of the generation and
+ processes transactions normally;
- RECOVERY: node is member of the generation, but it
- must apply in recovery mode transactions from previous generations to become ONLINE;
+ RECOVERY: the node is a member of the generation, but it
+ must apply transactions from previous generations in recovery mode before it can become ONLINE;
- DEAD: node will never be ONLINE in this generation;
+ DEAD: the node will never be ONLINE in this generation;
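The current generation and this node's state in it can be checked at any time with mtm.status(); a minimal probe (the exact set of output columns varies across versions):

    -- show this node's current generation and its status in it
    SELECT * FROM mtm.status();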
@@ -374,7 +374,7 @@
The reconnected node selects a cluster node which is
- ONLINE in the highest generation and starts
+ ONLINE in the highest generation and starts
catching up with the current state of the cluster based on the
Write-Ahead Log (WAL).
@@ -480,7 +480,7 @@
Performs Paxos to resolve unfinished transactions.
This worker is active only during recovery or when the connection to other nodes has been lost.
- There is a single worker per PostgreSQL instance.
+ There is a single worker per PostgreSQL instance.
@@ -489,7 +489,7 @@
Ballots for new generations to exclude some node(s) or to add this node back.
- There is a single worker per PostgreSQL instance.
+ There is a single worker per PostgreSQL instance.
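Both of these workers run as regular background workers, so they can be observed through the standard pg_stat_activity view. A sketch of such a check; the backend_type naming is an assumption, not something the extension guarantees:

    -- background workers whose type mentions the extension, if so named
    SELECT pid, backend_type FROM pg_stat_activity
    WHERE backend_type ILIKE '%mtm%' OR backend_type ILIKE '%multimaster%';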
@@ -745,9 +745,9 @@ SELECT * FROM mtm.nodes();
algorithm to determine whether the cluster nodes have a quorum: a cluster
can only continue working if the majority of its nodes are alive and can
access each other. A majority-based approach is pointless for a two-node
- cluster: if one of them fails, another one becomes unaccessible. There is
- a special 2+1 or referee mode which trades less harware resources by
- decreasing availabilty: two nodes hold full copy of data, and separate
+ cluster: if one of them fails, the other one loses the majority and becomes unavailable. There is
+ a special 2+1, or referee, mode that trades some availability for lower hardware
+ requirements: two nodes hold a full copy of the data, and a separate
referee node participates only in voting, acting as a tie-breaker.
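For illustration, wiring up a referee could look roughly like this; the referee extension name and the multimaster.referee_connstring parameter are assumptions to be verified against the reference for your version:

    -- on the referee node (a small, separate instance):
    CREATE EXTENSION referee;

    -- on each of the two data nodes, a line in postgresql.conf:
    -- multimaster.referee_connstring = 'dbname=postgres host=referee-host'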
@@ -758,7 +758,7 @@ SELECT * FROM mtm.nodes();
grant - this allows the node to get it in its turn later. While the grant is
issued, it can't be given to another node until a full generation is elected
and the excluded node recovers. This prevents data loss at the
- price of availabilty: in this setup two nodes (one normal and one referee)
+ price of availability: in this setup two nodes (one normal and one referee)
can be alive but the cluster might still be unavailable if the referee winner
is down, which cannot happen with the classic three-node configuration.
@@ -902,8 +902,7 @@ SELECT * FROM mtm.nodes();
Adding New Nodes to the Cluster
With the multimaster extension, you can add or
drop cluster nodes. Before adding a node, stop the load and ensure (with
- mtm.status() that all nodes (except the ones to be
- dropped) are online.
+ mtm.status()) that all nodes are online.
When adding a new node, you need to load all the data to this node using
pg_basebackup from any cluster node, and then start this node.
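Put together, the beginning of the procedure might look as follows; mtm.add_node() is assumed here to take the new node's connection string, so verify the exact signature against your version's reference:

    -- on any online node: register the new node in the cluster metadata
    SELECT mtm.add_node('dbname=mydb user=mtmuser host=node4');
    -- then populate the new node with pg_basebackup (shown below), start it,
    -- and complete the procedure with mtm.join_node(), also shown below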
@@ -955,7 +954,7 @@ pg_basebackup -D datadir -h node1 -U mtmuser -c fast
Configure the new node to boot with recovery_target=immediate to prevent redo
- past the point where replication will begin. Add to postgresql.conf
+ past the point where replication will begin. Add to postgresql.conf:
restore_command = 'false'
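The resulting fragment of the new node's postgresql.conf is short; both parameters below are standard PostgreSQL settings:

    restore_command = 'false'        # always fails, so no archive is replayed
    recovery_target = 'immediate'    # stop redo once a consistent state is reached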
@@ -990,7 +989,7 @@ SELECT mtm.join_node(4, '0/12D357F0');
Removing Nodes from the Cluster
Before removing a node, stop the load and ensure (with
- mtm.status() that all nodes (except the ones to be
+ mtm.status()) that all nodes (except the ones to be
dropped) are online. Shut down the nodes you are going to remove.
To remove the node from the cluster:
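The removal itself is a single call on a surviving node; mtm.drop_node() taking the node id from mtm.nodes() is assumed here, so verify the name and signature for your version:

    -- drop the (already shut down) node 2 from the cluster configuration
    SELECT mtm.drop_node(2);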