
9.7. MySQL Cluster Replication

Prior to MySQL 5.1.6, asynchronous replication, more usually referred to simply as "replication," was not available when using MySQL Cluster. MySQL 5.1.6 introduces master-slave replication of this type for MySQL Cluster databases. This section explains how to set up and manage a configuration wherein one group of computers operating as a MySQL Cluster replicates to a second computer or group of computers. We assume some familiarity on the part of the reader with standard MySQL replication as discussed elsewhere in this manual. (See Chapter 5, "Replication.")

Normal (non-clustered) replication involves a "master" server and a "slave" server, the master being the source of the operations and data to be replicated and the slave being the recipient of these. In MySQL Cluster, replication is conceptually very similar but can be more complex in practice, because it may be extended to cover a number of different configurations including replicating between two complete clusters. Although a MySQL Cluster itself depends on the NDB Cluster storage engine for clustering functionality, it is not necessary to use the Cluster storage engine on the slave. However, for maximum availability, it is possible to replicate from one MySQL Cluster to another, and it is this type of configuration that we discuss, as shown in Figure 9.5.

Figure 9.5. MySQL Cluster-to-Cluster replication layout.

In this scenario, the replication process is one in which successive states of a master cluster are logged and saved to a slave cluster. This process is accomplished by a special thread known as the NDB binlog injector thread, which runs on each MySQL server and produces a binary log (binlog). This thread ensures that all changes in the cluster producing the binary log (and not just those changes that are effected via the MySQL Server) are inserted into the binary log with the correct serialization order. We refer to the MySQL replication master and replication slave servers as "replication servers" or "replication nodes," and the data flow or line of communication between them as a "replication channel."

9.7.1. Abbreviations and Symbols

Throughout this section, we use the following abbreviations or symbols for referring to the master and slave clusters, and to processes and commands run on the clusters or cluster nodes:

Symbol or Abbreviation    Description (Refers to...)

M            The cluster serving as the (primary) replication master
S            The cluster acting as the (primary) replication slave
shellM>      Shell command to be issued on the master cluster
mysqlM>      MySQL client command issued on a single MySQL server running as an SQL node on the master cluster
mysqlM*>     MySQL client command to be issued on all SQL nodes participating in the replication master cluster
shellS>      Shell command to be issued on the slave cluster
mysqlS>      MySQL client command issued on a single MySQL server running as an SQL node on the slave cluster
mysqlS*>     MySQL client command to be issued on all SQL nodes participating in the replication slave cluster
C            Primary replication channel
C'           Secondary replication channel
M'           Secondary replication master
S'           Secondary replication slave

9.7.2. Assumptions and General Requirements

A replication channel requires two MySQL servers acting as replication servers (one each for the master and slave). For example, this means that in the case of a replication setup with two replication channels (to provide an extra channel for redundancy), there will be a total of four replication nodes, two per cluster.

Each MySQL server used for replication in either cluster must be uniquely identified among all the MySQL replication servers participating in either cluster (you cannot have replication servers on both the master and slave clusters sharing the same ID). This can be done by starting each SQL node using the --server-id=id option, where id is a unique integer. Although it is not strictly necessary, we will assume for purposes of this discussion that all MySQL installations are the same version.
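As a sketch of this requirement, minimal my.cnf fragments for two SQL nodes might look like the following; the specific ID values are illustrative only, and each ID must differ from that of every other replication server in either cluster:

```ini
# Hypothetical my.cnf fragment for an SQL node on the master cluster:
[mysqld]
server-id=1

# Hypothetical my.cnf fragment for an SQL node on the slave cluster;
# note that its ID differs from every ID used on the master cluster:
[mysqld]
server-id=3
```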

In any event, servers involved in replication must be compatible with one another with respect to both the version of the replication protocol used and the SQL feature sets they support; the simplest and easiest way to assure that this is the case is to use the same MySQL version for all servers involved. Note that in many cases it is not possible to replicate to a slave running a version of MySQL with a lower version number than that of the master; see Section 5.6, "Replication Compatibility Between MySQL Versions," for details.
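The version-compatibility rule can be sketched as a simple check; the function names here are illustrative and not part of MySQL:

```python
# Illustrative sketch (not MySQL code): checking that a slave's version
# is not lower than the master's before setting up replication.
# Version strings such as "5.1.6" are compared as integer tuples.

def version_tuple(version):
    """Convert a dotted version string such as '5.1.6' to (5, 1, 6)."""
    return tuple(int(part) for part in version.split("."))

def replication_compatible(master_version, slave_version):
    """A slave should run the same or a newer MySQL version than its master."""
    return version_tuple(slave_version) >= version_tuple(master_version)

print(replication_compatible("5.1.6", "5.1.11"))  # newer slave: OK
print(replication_compatible("5.1.11", "5.1.6"))  # older slave: not supported
```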

We assume that the slave server or cluster is dedicated to replication of the master, and that no other data is being stored on it.

9.7.3. Known Issues

The following are known problems or issues when using replication with MySQL Cluster in MySQL 5.1:

  • Data definition statements, such as CREATE TABLE, DROP TABLE, and ALTER TABLE, are recorded in the binary log only for the MySQL server on which they are issued.

  • A MySQL server involved in replication should be started or restarted after using ndb_restore, so that it discovers and sets up replication of NDB Cluster tables. Alternatively, you can issue a SHOW TABLES statement on all databases in the cluster.

    Similarly, when using CREATE SCHEMA, the new database is not automatically discoverable by the MySQL server. Thus, this statement must be issued on each MySQL server participating in the cluster when creating a new database.

  • Restarting the cluster with the --initial option will cause the sequence of GCI and epoch numbers to start over from 0. (This is generally true of MySQL Cluster and not limited to replication scenarios involving Cluster.) The MySQL servers involved in replication should in this case be restarted. After this, you should use the RESET MASTER and RESET SLAVE statements to clear the invalid binlog_index and apply_status tables, respectively.

See Section, "Initiating Discovery of Schema Changes," for more information about the first two items listed above, as well as some examples illustrating how to handle applicable situations.

9.7.4. Replication Schema and Tables

Replication in MySQL Cluster makes use of a number of dedicated tables in a separate cluster_replication database on each MySQL Server instance acting as an SQL node in both the cluster being replicated and the replication slave (whether the slave is a single server or a cluster). This database, which is created during the MySQL installation process by the mysql_install_db script, contains a table for storing the binary log's indexing data. Because the binlog_index table is local to each MySQL server and does not participate in clustering, it uses the MyISAM storage engine, and so must be created separately on each mysqld participating in the master cluster. This table is defined as follows:

CREATE TABLE `binlog_index` (
          `Position`  BIGINT(20) UNSIGNED NOT NULL,
          `File`      VARCHAR(255) NOT NULL,
          `epoch`     BIGINT(20) UNSIGNED NOT NULL,
          `inserts`   BIGINT(20) UNSIGNED NOT NULL,
          `updates`   BIGINT(20) UNSIGNED NOT NULL,
          `deletes`   BIGINT(20) UNSIGNED NOT NULL,
          `schemaops` BIGINT(20) UNSIGNED NOT NULL,
          PRIMARY KEY (`epoch`)
) ENGINE=MYISAM;

Figure 9.6 shows the relationship of the MySQL Cluster replication master server, its binlog injector thread, and the cluster_replication.binlog_index table.

Figure 9.6. The replication master cluster, the binlog-injector thread, and the binlog_index table.

An additional table, named apply_status, is used to keep a record of the operations that have been replicated from the master to the slave. Unlike the case with binlog_index, the data in this table is not specific to any one SQL node in the (slave) cluster, and so apply_status can use the NDB Cluster storage engine, as shown here:

CREATE TABLE `apply_status` (
     `server_id` INT(10) UNSIGNED NOT NULL,
     `epoch`     BIGINT(20) UNSIGNED NOT NULL,
     PRIMARY KEY  USING HASH (`server_id`)
) ENGINE=NDB;

The binlog_index and apply_status tables are created in a separate database because they should not be replicated. No user intervention is normally required to create or maintain either of them. Both the binlog_index and the apply_status tables are maintained by the NDB binlog injector thread, which keeps the master mysqld process updated with changes performed by the NDB storage engine. The NDB binlog injector thread receives events directly from the NDB storage engine. The NDB injector is responsible for capturing all the data events within the cluster, and ensures that all events changing, inserting, or deleting data are recorded in the binlog_index table. The slave I/O thread transfers the events from the master's binary log to the slave's relay log.
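As an illustrative model (not actual server code), the per-epoch rows of binlog_index can be thought of as aggregated counts of the data events the injector thread sees; the event representation below is invented for the sketch:

```python
# Illustrative sketch (not MySQL code): the inserts/updates/deletes
# columns of binlog_index summarize, per epoch, the data events the
# NDB binlog injector recorded. Event tuples (epoch, kind) are a
# hypothetical representation for this example.

from collections import defaultdict

def summarize_epochs(events):
    """Aggregate (epoch, kind) events into per-epoch counters,
    mirroring the inserts/updates/deletes columns of binlog_index."""
    rows = defaultdict(lambda: {"inserts": 0, "updates": 0, "deletes": 0})
    for epoch, kind in events:
        rows[epoch][kind + "s"] += 1  # "insert" -> "inserts", etc.
    return dict(rows)

events = [(100, "insert"), (100, "insert"), (100, "update"),
          (101, "delete"), (101, "insert")]
print(summarize_epochs(events))
```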

However, it is advisable to check for the existence and integrity of these tables as an initial step in preparing a MySQL Cluster for replication. It is possible to view event data recorded in the binary log by querying the cluster_replication.binlog_index table directly on the master. This can also be accomplished using the SHOW BINLOG EVENTS statement on either the replication master or slave MySQL servers.

9.7.5. Preparing the Cluster for Replication

Preparing the MySQL Cluster for replication consists of the following steps:

Check all MySQL servers for version compatibility (see Section 9.7.2, "Assumptions and General Requirements").

Create a slave account on the master Cluster with the appropriate privileges:

mysqlM> GRANT REPLICATION SLAVE
     ->    ON *.* TO 'slave_user'@'slave_host'
     ->    IDENTIFIED BY 'slave_password';

where slave_user is the slave account username, slave_host is the hostname or IP address of the replication slave, and slave_password is the password to assign to this account.

For example, to create a slave user account with the name "myslave," logging in from the host named "rep-slave," and using the password "53cr37," use the following GRANT statement:

mysqlM> GRANT REPLICATION SLAVE
     -> ON *.* TO 'myslave'@'rep-slave'
     -> IDENTIFIED BY '53cr37';

For security reasons, it is preferable to use a unique user account (not employed for any other purpose) for the replication slave account.

Configure the slave to use the master. Using the MySQL Monitor, this can be accomplished with the CHANGE MASTER TO statement:

mysqlS> CHANGE MASTER TO
     -> MASTER_HOST='master_host',
     -> MASTER_PORT=master_port,
     -> MASTER_USER='slave_user',
     -> MASTER_PASSWORD='slave_password';

where master_host is the hostname or IP address of the replication master, master_port is the port for the slave to use for connecting to the master, slave_user is the username set up for the slave on the master, and slave_password is the password set for that user account in the previous step.

For example, to tell the slave to replicate from the MySQL server whose hostname is "rep-master," using the replication slave account created in the previous step, use the following statement:

mysqlS> CHANGE MASTER TO
     -> MASTER_HOST='rep-master',
     -> MASTER_PORT=3306,
     -> MASTER_USER='myslave',
     -> MASTER_PASSWORD='53cr37';

You can also configure the slave to use the master by setting the corresponding startup options in the slave server's my.cnf file. To configure the slave in the same way as the preceding example CHANGE MASTER TO statement, the following information would need to be included in the slave's my.cnf file:

[mysqld]
master-host=rep-master
master-port=3306
master-user=myslave
master-password=53cr37
See Section 5.9, "Replication Startup Options," for additional options that can be set in my.cnf for replication slaves.

Note: To provide replication backup capability, you will also need to add an ndb-connectstring option to the slave's my.cnf file prior to starting the replication process. See Section 9.7.9, "MySQL Cluster Backups with Replication," for details.

If the master cluster is already in use, you can create a backup of the master and load this onto the slave to cut down on the amount of time required for the slave to synchronize itself with the master. If the slave is also running MySQL Cluster, this can be accomplished using the backup and restore procedure described in Section 9.7.9, "MySQL Cluster Backups with Replication."


In the event that you are not using MySQL Cluster on the replication slave, you can create a backup with this command on the replication master:

shell M> mysqldump --master-data=1

Then copy the resulting dump file over to the slave. After this, you can use the mysql client to import the data from the dump file into the slave database as shown here, where dump_file is the name of the file that was generated using mysqldump on the master, and db_name is the name of the database to be replicated:

shell S> mysql -u root -p db_name<dump_file

For a complete list of options to use with mysqldump, see Section 7.10, "mysqldump: A Database Backup Program."

Note that if you copy the data to the slave in this fashion, you should make sure that the slave is started with the --skip-slave-start option on the command line, or else include skip-slave-start in the slave's my.cnf file to keep it from trying to connect to the master to begin replicating before all the data has been loaded. Once the loading of data has completed, follow the additional steps outlined in the next two sections.

Ensure that each MySQL server acting as a replication master is configured with a unique server ID, and with binary logging enabled, using the row format. (See Section 5.3, "Row-Based Replication.") These options can be set either in the master server's my.cnf file, or on the command line when starting the master mysqld process. See Section 9.7.6, "Starting Replication (Single Replication Channel)," for information regarding the latter option.

9.7.6. Starting Replication (Single Replication Channel)

This section outlines the procedure for starting MySQL Cluster replication using a single replication channel.

Start the MySQL replication master server by issuing this command:

shell M> mysqld --ndbcluster --server-id=id \
        --log-bin --binlog-format=row &

where id is this server's unique ID (see Section 9.7.2, "Assumptions and General Requirements"). This starts the server's mysqld process with binary logging enabled using the proper logging format.

Start the MySQL replication slave server as shown here:

shell S> mysqld --ndbcluster --server-id=id &

where id is the slave server's unique ID. It is not necessary to enable logging on the replication slave.

Note that you should use the --skip-slave-start option with this command or else you should include skip-slave-start in the slave server's my.cnf file, unless you want replication to begin immediately. With the use of this option, the start of replication is delayed until the appropriate START SLAVE statement has been issued, as explained in Step 4 below.

It is necessary to synchronize the slave server with the master server's replication binlog. If binary logging has not previously been running on the master, run the following statement on the slave:

mysqlS> CHANGE MASTER TO
     -> MASTER_LOG_FILE='',
     -> MASTER_LOG_POS=4;

This instructs the slave to begin reading the master's binary log from the log's starting point. Otherwise (that is, if you are loading data from the master using a backup), see Section 9.7.8, "Implementing Failover with MySQL Cluster," for information on how to obtain the correct values to use for MASTER_LOG_FILE and MASTER_LOG_POS in such cases.

Finally, you must instruct the slave to begin applying replication by issuing this command from the mysql client on the replication slave:

mysqlS> START SLAVE;
This also initiates the transmission of replication data from the master to the slave.

It is also possible to use two replication channels, in a manner similar to the procedure described in the next section; the differences between this and using a single replication channel are covered in Section 9.7.7, "Using Two Replication Channels."

9.7.7. Using Two Replication Channels

In a more complete example scenario, we envision two replication channels to provide redundancy and thereby guard against possible failure of a single replication channel. This requires a total of four replication servers, two masters for the master cluster and two slave servers for the slave cluster. For purposes of the discussion that follows, we assume that unique identifiers are assigned as shown here:

Server ID    Description

1            Master, primary replication channel (M)
2            Master, secondary replication channel (M')
3            Slave, primary replication channel (S)
4            Slave, secondary replication channel (S')

Setting up replication with two channels is not radically different from setting up a single replication channel. First, the mysqld processes for the primary and secondary replication masters must be started, followed by those for the primary and secondary slaves. Then the replication processes may be initiated by issuing the START SLAVE statement on each of the slaves. The commands and the order in which they need to be issued are shown here:

Start the primary replication master:

shellM> mysqld --ndbcluster --server-id=1 \
               --log-bin --binlog-format=row &

Start the secondary replication master:

shellM'> mysqld --ndbcluster --server-id=2 \
                --log-bin --binlog-format=row &

Start the primary replication slave server:

shellS> mysqld --ndbcluster --server-id=3 \
               --skip-slave-start &

Start the secondary replication slave:

shellS'> mysqld --ndbcluster --server-id=4 \
                --skip-slave-start &

Finally, commence replication on the primary channel by executing the START SLAVE statement on the primary slave as shown here:

mysqlS> START SLAVE;
As mentioned previously, it is not necessary to enable binary logging on replication slaves.

9.7.8. Implementing Failover with MySQL Cluster

In the event that the primary Cluster replication process fails, it is possible to switch over to the secondary replication channel. The following procedure describes the steps required to accomplish this:

Obtain the time of the most recent global checkpoint (GCP). That is, you need to determine the most recent epoch from the apply_status table on the slave cluster, which can be found using the following query:

mysqlS'> SELECT @latest:=MAX(epoch)
      ->        FROM cluster_replication.apply_status;

Using the information obtained from the query shown in Step 1, obtain the corresponding records from the binlog_index table on the master cluster as shown here:

mysqlM'> SELECT
      ->     @file:=SUBSTRING_INDEX(File, '/', -1),
      ->     @pos:=Position
      -> FROM cluster_replication.binlog_index
      -> WHERE epoch > @latest
      -> ORDER BY epoch ASC LIMIT 1;

These are the records saved on the master since the failure of the primary replication channel. We have employed the user variable @latest here to represent the value obtained in Step 1. Of course, it is not possible for one mysqld instance to access user variables set on another server instance directly. These values must be "plugged in" to the second query manually or in application code.
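The logic of Steps 1 and 2 (take the slave's latest applied epoch, then find the first master binlog position with a higher epoch) can be sketched in ordinary code; the row layout and function name here are hypothetical:

```python
# Illustrative sketch (not MySQL code) of the failover lookup: given
# the latest epoch applied on the slave and the master's binlog_index
# rows, find the first binary log position with a higher epoch.

def find_resume_position(latest_epoch, binlog_index_rows):
    """binlog_index_rows: list of (epoch, file, position) tuples.
    Returns (file, position) of the first row whose epoch is greater
    than latest_epoch, like the ORDER BY epoch ASC LIMIT 1 query."""
    candidates = [row for row in binlog_index_rows if row[0] > latest_epoch]
    if not candidates:
        return None  # the slave is already up to date
    epoch, file, position = min(candidates, key=lambda row: row[0])
    # Keep only the file name, like SUBSTRING_INDEX(File, '/', -1):
    return (file.rsplit("/", 1)[-1], position)

rows = [(10, "/var/log/mysql/master-bin.000001", 4),
        (20, "/var/log/mysql/master-bin.000001", 512),
        (30, "/var/log/mysql/master-bin.000002", 4)]
print(find_resume_position(15, rows))  # ('master-bin.000001', 512)
```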

Now it is possible to synchronize the secondary channel by running the following query on the secondary slave server:

mysqlS'> CHANGE MASTER TO
      ->     MASTER_LOG_FILE='@file',
      ->     MASTER_LOG_POS=@pos;

Again we have employed user variables (in this case @file and @pos) to represent the values obtained in Step 2 and applied in Step 3; in practice these values must be inserted manually or using application code that can access both of the servers involved.

Note that @file is a string value such as '/var/log/mysql/replication-master-bin.00001' and so must be quoted when used in SQL or application code. However, the value represented by @pos must not be quoted. Although MySQL normally attempts to convert strings to numbers, this case is an exception.
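This quoting rule can be illustrated with a small sketch that builds the statement text; the helper function is hypothetical:

```python
# Illustrative sketch: constructing the CHANGE MASTER TO statement
# from the values obtained in Step 2. As the text notes, the log file
# name must be quoted as a string, while the position must not be.

def change_master_statement(log_file, log_pos):
    """Build the CHANGE MASTER TO text with correct quoting."""
    return ("CHANGE MASTER TO "
            "MASTER_LOG_FILE='%s', "          # string value: quoted
            "MASTER_LOG_POS=%d;" % (log_file, log_pos))  # number: unquoted

print(change_master_statement("replication-master-bin.00001", 512))
```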

You can now initiate replication on the secondary channel by issuing the appropriate command on the secondary slave mysqld:

mysql S'> START SLAVE;

Once the secondary replication channel is active, you can investigate the failure of the primary and effect repairs. The precise actions required to do this will depend upon the reasons for which the primary channel failed.

If the failure is limited to a single server, it should (in theory) be possible to replicate from M to S', or from M' to S; however, this has not yet been tested.

9.7.9. MySQL Cluster Backups with Replication

This section discusses making backups and restoring from them using MySQL Cluster replication. We assume that the replication servers have already been configured as covered previously (see Section 9.7.5, "Preparing the Cluster for Replication," and the sections immediately following). This having been done, the procedure for making a backup and then restoring from it is as follows:

There are two different methods by which the backup may be started.

  • Method A:

    This method requires that the cluster backup process was previously enabled on the master server, prior to starting the replication process. This can be done by including the line

    ndb-connectstring=management_host[:port]

    in a [MYSQL_CLUSTER] section in the my.cnf file, where management_host is the IP address or hostname of the NDB management server for the master cluster, and port is the management server's port number. Note that the port number needs to be specified only if the default port (1186) is not being used. (See Section 9.3.3, "Multi-Computer Configuration," for more information about ports and port allocation in MySQL Cluster.)

    In this case, the backup can be started by executing this statement on the replication master:

    shell M> ndb_mgm -e "START BACKUP"
  • Method B:

    If the my.cnf file does not specify where to find the management host, you can start the backup process by passing this information to the NDB management client as part of the START BACKUP command, like this:

    shellM> ndb_mgm management_host:port -e "START BACKUP"

    where management_host and port are the hostname and port number of the management server. In our scenario as outlined earlier (see Section 9.7.5, "Preparing the Cluster for Replication,"), this would be executed as follows:

    shellM> ndb_mgm rep-master:1186 -e "START BACKUP"
In either case, it is highly advisable to allow any pending transactions to be completed before beginning the backup, and then not to permit any new transactions to begin during the backup process.

Copy the cluster backup files to the slave that is being brought on line. Each system running an ndbd process for the master cluster will have cluster backup files located on it, and all of these files must be copied to the slave to ensure a successful restore. The backup files can be copied into any directory on the computer where the slave management host resides, so long as the MySQL and NDB binaries have read permissions in that directory. In this case, we will assume that these files have been copied into the directory /var/BACKUPS/BACKUP-1.

It is not necessary that the slave cluster have the same number of ndbd processes (data nodes) as the master; however, it is highly recommended that this number be the same. It is necessary that the slave be started with the --skip-slave-start option, to prevent premature startup of the replication process.

Create any databases on the slave cluster that are present on the master cluster and that are to be replicated. Important: A CREATE SCHEMA statement corresponding to each database to be replicated must be executed on each SQL node in the slave cluster.

Reset the slave cluster using this statement in the MySQL Monitor:

mysqlS> RESET SLAVE;
It is important to make sure that the slave's apply_status table does not contain any records prior to running the restore process. You can accomplish this by running this SQL statement on the slave:

mysqlS> DELETE FROM cluster_replication.apply_status;

You can now start the cluster restoration process on the replication slave using the ndb_restore command for each backup file in turn. For the first of these, it is necessary to include the -m option to restore the cluster metadata:

shellS> ndb_restore -c slave_host:port -n node-id \
        -b backup-id -m -r dir

dir is the path to the directory where the backup files have been placed on the replication slave. For the ndb_restore commands corresponding to the remaining backup files, the -m option should not be used.

For restoring from a master cluster with four data nodes (as shown in Figure 9.5) where the backup files have been copied to the directory /var/BACKUPS/BACKUP-1, the proper sequence of commands to be executed on the slave might look like this:

shellS> ndb_restore -c rep-slave:1186 -n 2 -b 1 -m \
        -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 3 -b 1 \
        -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 4 -b 1 \
        -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 5 -b 1 \
        -r /var/BACKUPS/BACKUP-1

This sequence of commands causes the most recent epoch records to be written to the slave's apply_status table.
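The pattern of this command sequence (metadata restored only once, data restored once per node) can be sketched as a small generator; the function name is hypothetical and the values are taken from the example above:

```python
# Illustrative sketch: generating the ndb_restore command sequence
# shown above. Only the first command carries -m (restore the cluster
# metadata); every command restores that node's backup data with -r.

def restore_commands(mgm_host, node_ids, backup_id, backup_dir):
    """One ndb_restore invocation per data node; -m only on the first."""
    commands = []
    for i, node_id in enumerate(node_ids):
        meta = "-m " if i == 0 else ""
        commands.append("ndb_restore -c %s -n %d -b %d %s-r %s"
                        % (mgm_host, node_id, backup_id, meta, backup_dir))
    return commands

for cmd in restore_commands("rep-slave:1186", [2, 3, 4, 5], 1,
                            "/var/BACKUPS/BACKUP-1"):
    print(cmd)
```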

Next, it is necessary to make all nodes in the slave cluster aware of the new tables. (This is due to the fact that the NDB Cluster storage engine does not currently support autodiscovery of schema changes. See Section, "Initiating Discovery of Schema Changes.") You can accomplish this using these commands:

mysqlS*> USE db_name;
mysqlS*> SHOW TABLES;

db_name is the name of the database that was backed up and restored. Where multiple databases have been backed up and then restored, it is necessary to issue the USE and SHOW statements for each database in turn. Note also that these commands must be issued on each SQL node in the slave cluster.

Now you need to obtain the most recent epoch from the apply_status table on the slave (as discussed in Section 9.7.8, "Implementing Failover with MySQL Cluster"):

mysqlS> SELECT @latest:=MAX(epoch)
     -> FROM cluster_replication.apply_status;

Using @latest as the epoch value obtained in the previous step, you can obtain the correct starting position @pos in the correct binary log file @file from the master's cluster_replication.binlog_index table using the query shown here:

mysqlM> SELECT
     -> @file:=SUBSTRING_INDEX(File, '/', -1),
     -> @pos:=Position
     -> FROM cluster_replication.binlog_index
     -> WHERE epoch > @latest
     -> ORDER BY epoch ASC LIMIT 1;

Using the values obtained in the previous step, you can now issue the appropriate CHANGE MASTER TO statement in the slave's mysql client:

mysqlS> CHANGE MASTER TO
     ->     MASTER_LOG_FILE='@file',
     ->     MASTER_LOG_POS=@pos;

Now that the slave "knows" from what point in which binlog file to start reading data from the master, you can cause the slave to begin replicating with this standard MySQL statement:

mysqlS> START SLAVE;
To perform a backup and restore on a second replication channel, it is necessary only to repeat these steps, substituting the hostnames and IDs of the secondary master and slave for those of the primary master and slave replication servers where appropriate, and running the preceding statements on them.

For additional information on performing Cluster backups and restoring Cluster from backups, see Section 9.6.5, "Online Backup of MySQL Cluster."

Automating Synchronization of the Slave to the Master Binlog

It is possible to automate much of the process described in the previous section (see Section 9.7.9, "MySQL Cluster Backups with Replication"). The following Perl script serves as an example of how you can do this.

#!/usr/bin/perl -w

#  file:

#  Copyright ©2005 MySQL AB

#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
#  GNU General Public License for more details.

#  You should have received a copy of the GNU General Public License
#  along with this program; if not, write to:
#  Free Software Foundation, Inc.
#  59 Temple Place, Suite 330
#  Boston, MA 02111-1307 USA
#  Version 1.1

######################## Includes ###############################

use DBI;

######################## Globals ################################

my  $m_host='';
my  $m_port='';
my  $m_user='';
my  $m_pass='';
my  $s_host='';
my  $s_port='';
my  $s_user='';
my  $s_pass='';
my  $dbhM='';
my  $dbhS='';
####################### Sub Prototypes ##########################

sub CollectCommandPromptInfo;
sub ConnectToDatabases;
sub DisconnectFromDatabases;
sub GetSlaveEpoch;
sub GetMasterInfo;
sub UpdateSlave;

######################## Program Main ###########################

CollectCommandPromptInfo;
ConnectToDatabases;
GetSlaveEpoch;
GetMasterInfo;
UpdateSlave;
DisconnectFromDatabases;
################## Collect Command Prompt Info ##################

sub CollectCommandPromptInfo
{
  ### Check that user has supplied correct number of command line args
  die "Usage:\n
       reset-slave >master MySQL host< >master MySQL port< \n
                   >master user< >master pass< >slave MySQL host< \n
                   >slave MySQL port< >slave user< >slave pass< \n
       All 8 arguments must be passed. Use BLANK for NULL passwords\n"
       unless @ARGV == 8;

  $m_host  =  $ARGV[0];
  $m_port  =  $ARGV[1];
  $m_user  =  $ARGV[2];
  $m_pass  =  $ARGV[3];
  $s_host  =  $ARGV[4];
  $s_port  =  $ARGV[5];
  $s_user  =  $ARGV[6];
  $s_pass  =  $ARGV[7];

  if ($m_pass eq "BLANK") { $m_pass = '';}
  if ($s_pass eq "BLANK") { $s_pass = '';}
}

###############  Make connections to both databases #############

sub ConnectToDatabases
{
  ### Connect to both master and slave cluster databases

  ### Connect to master
  $dbhM
    = DBI->connect(
    "dbi:mysql:database=mysql;host=$m_host;port=$m_port",
    "$m_user", "$m_pass")
      or die "Can't connect to Master Cluster MySQL process!
              Error: $DBI::errstr\n";

  ### Connect to slave
  $dbhS
    = DBI->connect(
          "dbi:mysql:database=mysql;host=$s_host;port=$s_port",
          "$s_user", "$s_pass")
    or die "Can't connect to Slave Cluster MySQL process!
            Error: $DBI::errstr\n";
}

################  Disconnect from both databases ################

sub DisconnectFromDatabases
{
  ### Disconnect from master
  $dbhM->disconnect
  or warn  "Disconnection failed: $DBI::errstr\n";

  ### Disconnect from slave
  $dbhS->disconnect
  or warn  "Disconnection failed: $DBI::errstr\n";
}

###################### Find the last good GCI ###################

sub GetSlaveEpoch
{
  $sth = $dbhS->prepare("SELECT MAX(epoch)
                            FROM cluster_replication.apply_status;")
      or die "Error while preparing to select epoch from slave: ",
             $dbhS->errstr;

  $sth->execute
      or die "Selecting epoch from slave error: ", $sth->errstr;

  $sth->bind_col (1, \$epoch);
  $sth->fetch;
  print "\tSlave Epoch =  $epoch\n";
  $sth->finish;
}

#######  Find the position of the last GCI in the binlog ########

sub GetMasterInfo
{
  $sth = $dbhM->prepare("SELECT
                           SUBSTRING_INDEX(File, '/', -1), Position
                           FROM cluster_replication.binlog_index
                           WHERE epoch > $epoch
                           ORDER BY epoch ASC LIMIT 1;")
      or die "Prepare to select from master error: ", $dbhM->errstr;

  $sth->execute
      or die  "Selecting from master error: ", $sth->errstr;

  $sth->bind_col (1, \$binlog);
  $sth->bind_col (2, \$binpos);
  $sth->fetch;
  print "\tMaster bin log =  $binlog\n";
  print "\tMaster Bin Log position =  $binpos\n";
  $sth->finish;
}

##########  Set the slave to process from that location #########

sub UpdateSlave
{
  $sth = $dbhS->prepare("CHANGE MASTER TO
                         MASTER_LOG_FILE='$binlog',
                         MASTER_LOG_POS=$binpos;")
      or die "Prepare to CHANGE MASTER error: ", $dbhS->errstr;

  $sth->execute
      or die  "CHANGE MASTER on slave error: ", $sth->errstr;
  print "\tSlave has been updated. You may now start the slave.\n";
}

# end

Initiating Discovery of Schema Changes

The NDB Cluster storage engine does not at present automatically detect structural changes in databases or tables. When a database or table is created or dropped, or when a table is altered using ALTER TABLE, the cluster must be made aware of the change. When a database is created or dropped, the appropriate CREATE SCHEMA or DROP SCHEMA statement should be issued on each SQL node in the cluster to induce discovery of the change, that is:

mysqlS*> CREATE SCHEMA db_name;
mysqlS*> DROP SCHEMA db_name;

Dropping Tables

When dropping a table that uses the NDB Cluster storage engine, it is necessary to allow any unfinished transactions to be completed and then not to begin any new transactions before performing the DROP operation:

Stop performing transactions on the slave.

Drop the table:

mysqlS> DROP TABLE [db_name.]table_name;

Make all slave mysqld processes aware of the drop:

mysqlS*> SHOW TABLES [FROM db_name];

All of the MySQL slave servers can now "see" that the table has been dropped from the database.

Creating Tables

When creating a new table, you should perform the following steps:

Create the table:

mysqlS> CREATE TABLE [db_name.]table_name (
     -> #  column and index definitions...
     -> ) ENGINE=NDB;

Make all SQL nodes in the slave cluster aware of the new table:

mysqlS*> SHOW TABLES [FROM db_name];

You can now start using the table as normal. When creating a new table, note that (unlike the case when dropping tables) it is not necessary to stop performing any transactions beforehand.

Altering Tables

When altering tables, you should perform the following steps in the order shown:

Ensure that all pending transactions have been completed, and do not initiate any new transactions at this time.

Issue any desired ALTER TABLE statements that add or remove columns to or from an existing table. For example:

mysqlS> ALTER TABLE table_name  /* column definition, ... */;

Force all slave SQL nodes to become aware of the changed table definition. The recommended way to do this is by issuing a "throwaway" SHOW TABLES statement on each slave mysqld:

mysqlS*> SHOW TABLES;
You may now resume normal operations. These include transactions involving records in the changed table.

Note that when you create a new NDB Cluster table on the master cluster, if you do so using the mysqld that acts as the replication master, you must execute a SHOW TABLES, also on the master mysqld, to initiate discovery properly. Otherwise, the new table and any data it contains cannot be seen by the replication master mysqld, nor by the slave (that is, neither the new table nor its data is replicated). If the table is created on a mysqld that is not acting as the replication master, it does not matter which mysqld issues the SHOW TABLES.

It is also possible to force discovery by issuing a "dummy" SELECT statement using the new or altered table in the statement's FROM clause. Although the statement fails, it causes the change to be recognized by the cluster. However, issuing a SHOW TABLES is the preferred method.

We are working to implement automatic discovery of schema changes in a future MySQL Cluster release. For more information about this and other Cluster issues, see Section 9.9, "Known Limitations of MySQL Cluster."
