Archive for MySQL Cluster

Free Guide released: Scalable Authentication Services with FreeRADIUS and MySQL Cluster

MySQL/Sun has released a new white paper…


Do you want to ensure that your Authentication, Authorization and Accounting (AAA) infrastructure will scale to support your business growth?

As network use grows and services become more dynamic, limitations can occur which add administrative overhead, inhibit flexible scaling and impact the timely synchronization of data across the AAA environment.

To address these challenges, Sun has collaborated with the FreeRADIUS server team, the most widely deployed RADIUS server in the world, to integrate the carrier-grade, real-time MySQL Cluster database with the FreeRADIUS Server.

Download our free whitepaper “Delivering Scalable and Highly Available Authentication, Authorization and Accounting Services” now to better understand:

  • The concepts of current data storage solutions for AAA environments and their potential limitations as network use grows
  • How you can implement an infrastructure for high growth and high availability with low complexity by deploying the FreeRADIUS server and MySQL Cluster
  • How the solution performs in real world AAA environments via a user case study.




MySQL Cluster 6.3.26 binaries released

The binaries for MySQL Cluster 6.3.26 have now been released and can be downloaded from http://dev.mysql.com/downloads/cluster/6.3.html

A description of all of the changes (fixes) that have gone into MySQL Cluster 6.3.26 (compared to 6.3.25) can be found in the MySQL Cluster 6.3.26 Change Log.





MySQL Cluster 7.0.7 binaries released

The binaries for MySQL Cluster 7.0.7 have now been released and can be downloaded from http://dev.mysql.com/downloads/cluster/7.0.html

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.0.7 (compared to 7.0.6) can be found in the MySQL Cluster 7.0.7 Change Log.





MySQL Cluster 7.0.7 source released

Update: As explained in “MySQL Cluster 7.0.7 binaries released”, you can now download the compiled binaries for your particular platform. I am going to leave this entry in place as it will hopefully be useful for future releases, but for 7.0.7 you should refer to that other post.

You’ll need to wait for the pre-built binaries, but you can download the source code now and build it yourself to get started. This article explains where to get the code and how to build, install and test it.

In this example, I’ve used Ubuntu.

The first step is to download the compressed tar ball containing the source code by pointing your browser to ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/mysql-5.1.35-ndb-7.0.7/mysql-cluster-gpl-7.0.7.tar.gz

Next, extract the code and then build and install the software:

billy@billy-laptop:~/mysql$ tar -xzf mysql-cluster-gpl-7.0.7.tar.gz
billy@billy-laptop:~/mysql$ mv mysql-cluster-gpl-7.0.7 7_0_7
billy@billy-laptop:~/mysql$ cd 7_0_7
billy@billy-laptop:~/mysql/7_0_7$ autoreconf --force --install
billy@billy-laptop:~/mysql/7_0_7$ ./configure --with-plugins=max --prefix=/usr/local/mysql
billy@billy-laptop:~/mysql/7_0_7$ make
billy@billy-laptop:~/mysql/7_0_7$ sudo make install
billy@billy-laptop:~/mysql/7_0_7$ sudo cp storage/ndb/src/kernel/ndbd /usr/local/mysql/bin
billy@billy-laptop:~/mysql/7_0_7$ sudo cp storage/ndb/src/kernel/ndbmtd /usr/local/mysql/bin
billy@billy-laptop:~/mysql/7_0_7$ sudo cp storage/ndb/src/mgmsrv/ndb_mgmd /usr/local/mysql/bin
billy@billy-laptop:~/mysql/7_0_7$ cd /usr/local/mysql/mysql-test
billy@billy-laptop:/usr/local/mysql/mysql-test$ sudo chmod 777 .
billy@billy-laptop:/usr/local/mysql/mysql-test$ sudo chmod 777 *

You can then test that the installation has been successful:

billy@billy-laptop:/usr/local/mysql/mysql-test$ ./mtr ndb_basic

Logging: ./mtr  ndb_basic
090902 18:17:43 [Warning] Forcing shutdown of 2 plugins
MySQL Version 5.1.35
Checking supported features...
- using ndbcluster when necessary, mysqld supports it
- skipping SSL, mysqld not compiled with SSL
- multi threaded ndbd, will be used "round robin"
Collecting tests...
vardir: /usr/local/mysql/mysql-test/var
Removing old var directory...
Creating var directory '/usr/local/mysql/mysql-test/var'...
Installing system database...
Using server port 43993

==============================================================================

TEST                                      RESULT   TIME (ms)
------------------------------------------------------------

worker[1] Using MTR_BUILD_THREAD 250, with reserved ports 12500..12509
ndb.ndb_basic                            [ pass ]  73661
------------------------------------------------------------
The servers were restarted 0 times
Spent 73.661 of 143 seconds executing testcases

All 1 tests were successful.

Following this you can configure and use the software as normal (refer to Creating a simple Cluster on a single LINUX host).
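If you don’t already have a configuration to hand, a minimal single-host setup is enough to try things out. The following sketch makes some assumptions (the file locations and localhost-only topology are mine – adapt them to your environment):

config.ini:

[ndbd default]
NoOfReplicas=2
DataDir=/home/billy/mysql/my_cluster/data

[ndb_mgmd]
HostName=localhost

[ndbd]
HostName=localhost

[ndbd]
HostName=localhost

[mysqld]

my.cnf:

[mysqld]
ndbcluster
datadir=/home/billy/mysql/my_cluster/data
basedir=/usr/local/mysql

Start ndb_mgmd first, then the two data nodes (ndbd or ndbmtd) and finally mysqld, as described in the referenced article.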

The annotated header information has already been updated in the NDB API Docs section.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.0.7 (compared to 7.0.6) can be found in the MySQL Cluster 7.0.7 Change Log.





MySQL Cluster: Geographic Replication Deep-Dive

Following requests received during earlier MySQL Cluster webinars, a new (and as always, free) webinar has been scheduled which focuses on MySQL Cluster Replication. The webinar is scheduled for Thursday 10 September and you can register at http://www.mysql.com/news-and-events/web-seminars/display-415.html

I’ll be on-line during the webinar, answering questions.

Details….

MySQL Cluster: Geographic Replication Deep-Dive

Thursday, September 10, 2009

MySQL Cluster has been deployed into some of the most demanding web, telecoms and enterprise / government workloads, supporting 99.999% availability with real time performance and linear write scalability.

Tune into this webinar to hear the Director of MySQL Server Engineering provide a detailed “deep dive” into one of MySQL Cluster’s key capabilities – Geographic Replication.

In this session, you will learn how Geographic Replication enables your applications to:

  • achieve higher levels of availability within a data center or across a WAN
  • locate data closer to users, providing lower latency access
  • replicate to other MySQL storage engines for complex data analysis and reporting of real time data

You will also learn how to get started with Geographic Replication.

WHO:

  • Tomas Ulin, Director, MySQL Server Technologies
  • Matthew Keep, MySQL Cluster Product Management

WHAT:

MySQL Cluster: Geographic Replication Deep-Dive web presentation.

WHEN:

Thursday, September 10, 2009: 09:30 Pacific time (America)

Thu, Sep 10: 06:30 Hawaii time
Thu, Sep 10: 10:30 Mountain time (America)
Thu, Sep 10: 11:30 Central time (America)
Thu, Sep 10: 12:30 Eastern time (America)
Thu, Sep 10: 16:30 UTC
Thu, Sep 10: 17:30 Western European time
Thu, Sep 10: 18:30 Central European time
Thu, Sep 10: 19:30 Eastern European time

The presentation will be approximately 45 minutes long followed by Q&A.

WHERE:

Simply access the web seminar from the comfort of your own office.

WHY:

To learn more about how you can use Geographic Replication in MySQL Cluster 7.0 to build real time, high performance applications delivering continuously available database services.





Using NDB API Events to mask/hide column data when replicating

If you have asynchronous replication where the slave database is using MySQL Cluster then you can use the NDB API events functionality to mask/overwrite data. You might do this, for example, if the replica is to be used for generating reports where some of the data is sensitive and not relevant to those reports. Unlike stored procedures, NDB API events are triggered on the slave.

The first step is to set up replication (master->slave rather than multi-master) as described in Setting up MySQL Asynchronous Replication for High Availability.

In this example, the following table definition is used:

mysql> use clusterdb;
mysql> create table ASSETS (CODE int not null primary key, VALUE int) engine=ndb;

The following code should be compiled and then executed on a node within the slave Cluster:

#include <NdbApi.hpp>
#include <stdio.h>
#include <iostream>
#include <unistd.h>
#include <cstdlib>
#include <string.h>

#define APIERROR(error) \
  { std::cout << "Error in " << __FILE__ << ", line:" << __LINE__ << ", code:" \
  << error.code << ", msg: " << error.message << "." << std::endl; \
  exit(-1); }

int myCreateEvent(Ndb* myNdb,
                  const char *eventName,
                  const char *eventTableName,
                  const char **eventColumnName,
                  const int noEventColumnName);

static void do_blank(Ndb*, int);

int main(int argc, char** argv)
{
  if (argc < 2)
  {
    std::cout << "Arguments are <connect_string cluster>.\n";
    exit(-1);
  }
  const char *connectstring = argv[1];

  ndb_init();

  Ndb_cluster_connection *cluster_connection=
    new Ndb_cluster_connection(connectstring); // Object representing the cluster

  int r= cluster_connection->connect(5 /* retries               */,
                                     3 /* delay between retries */,
                                     1 /* verbose               */);
  if (r > 0)
  {
    std::cout << "Cluster connect failed, possibly resolved with more retries.\n";
    exit(-1);
  }
  else if (r < 0)
  {
    std::cout << "Cluster connect failed.\n";
    exit(-1);
  }

  if (cluster_connection->wait_until_ready(30,30))
  {
    std::cout << "Cluster was not ready within 30 secs." << std::endl;
    exit(-1);
  }

  Ndb* myNdb= new Ndb(cluster_connection,
                      "clusterdb");  // Object representing the database

  if (myNdb->init() == -1) APIERROR(myNdb->getNdbError());

  const char *eventName= "CHNG_IN_ASSETS";
  const char *eventTableName= "ASSETS";
  const int noEventColumnName= 2;
  const char *eventColumnName[noEventColumnName]=
  {"CODE",
   "VALUE"};

  // Create events
  myCreateEvent(myNdb,
                eventName,
                eventTableName,
                eventColumnName,
                noEventColumnName);

  // Normal values and blobs are unfortunately handled differently..
  typedef union { NdbRecAttr* ra; NdbBlob* bh; } RA_BH;

  int i;

  // Start "transaction" for handling events
  NdbEventOperation* op;
  printf("create EventOperationn");
  if ((op = myNdb->createEventOperation(eventName)) == NULL)
    APIERROR(myNdb->getNdbError());

  printf("get valuesn");
  RA_BH recAttr[noEventColumnName];
  RA_BH recAttrPre[noEventColumnName];

  for (i = 0; i < noEventColumnName; i++) {
    recAttr[i].ra    = op->getValue(eventColumnName[i]);
    recAttrPre[i].ra = op->getPreValue(eventColumnName[i]);
  }

  // set up the callbacks
  // This starts changes to "start flowing"
  if (op->execute())
    APIERROR(op->getNdbError());

  while (true) {
    int r = myNdb->pollEvents(1000); // wait for event or 1000 ms
    if (r > 0) {
      while ((op= myNdb->nextEvent())) {
        NdbRecAttr* ra = recAttr[0].ra;
        if (ra->isNULL() >= 0) {   // we have a value
          if (ra->isNULL() == 0) { // we have a non-null value
            printf("CODE: %d ", ra->u_32_value());
            do_blank(myNdb, ra->u_32_value());
          } else {
            printf("%-5s", "NULL");
          }
        } else {
          printf("%-5s", "-");     // no value
        }
        ra = recAttr[1].ra; // (VALUE attribute - not used further here)
        printf("\n");
      }
    }
  }
}

int myCreateEvent(Ndb* myNdb,
                  const char *eventName,
                  const char *eventTableName,
                  const char **eventColumnNames,
                  const int noEventColumnNames)
{
  NdbDictionary::Dictionary *myDict= myNdb->getDictionary();
  if (!myDict) APIERROR(myNdb->getNdbError());

  const NdbDictionary::Table *table= myDict->getTable(eventTableName);
  if (!table) APIERROR(myDict->getNdbError());

  NdbDictionary::Event myEvent(eventName, *table);
  myEvent.addTableEvent(NdbDictionary::Event::TE_INSERT);

  myEvent.addEventColumns(noEventColumnNames, eventColumnNames);

  // Add event to database
  if (myDict->createEvent(myEvent) == 0)
    myEvent.print();
  else if (myDict->getNdbError().classification ==
           NdbError::SchemaObjectExists) {
    printf("Event creation failed, event exists\n");
    printf("dropping Event...\n");
    if (myDict->dropEvent(eventName)) APIERROR(myDict->getNdbError());
    // try again
    // Add event to database
    if (myDict->createEvent(myEvent)) APIERROR(myDict->getNdbError());
  } else
    APIERROR(myDict->getNdbError());

  return 0;
}

static void do_blank(Ndb* myNdb, int code)
{
  const NdbDictionary::Dictionary* myDict= myNdb->getDictionary();
  const NdbDictionary::Table *myTable= myDict->getTable("ASSETS");

  if (myTable == NULL)
    APIERROR(myDict->getNdbError());

  NdbTransaction *myTransaction= myNdb->startTransaction();
  if (myTransaction == NULL) APIERROR(myNdb->getNdbError());

  printf("Replacing VALUE with 0 for CODE: %d ", code);

  NdbOperation *myOperation= myTransaction->getNdbOperation(myTable);
  if (myOperation == NULL) APIERROR(myTransaction->getNdbError());

  myOperation->updateTuple();
  myOperation->equal("CODE", code);
  myOperation->setValue("VALUE", 0);

  if (myTransaction->execute( NdbTransaction::Commit ) == -1)
    APIERROR(myTransaction->getNdbError());

  myNdb->closeTransaction(myTransaction);
}
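The program needs to be compiled and linked against the NDB API client library before it can be run. The exact include and library paths vary by installation; assuming the /usr/local/mysql layout used elsewhere on this site and a source file named slave_filter.cpp (both of these are assumptions), the command would look something like:

shell> g++ slave_filter.cpp -o slave_filter \
         -I/usr/local/mysql/include/mysql/storage/ndb \
         -I/usr/local/mysql/include/mysql/storage/ndb/ndbapi \
         -L/usr/local/mysql/lib/mysql -lndbclient -lpthread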

The filter process can then be run on a node within the slave Cluster, passing in the connect-string of that Cluster’s management node:

shell> slave_filter 127.0.0.1:1186

From the master Cluster, insert some values (note that the example can easily be extended to cover updates too – see the sketch at the end of this post):

mysql> insert into ASSETS values (101, 50),(102, 40), (103, 99);

and then check that on the slave the value has been set to 0 for each of the entries:

mysql> select * from ASSETS;
+------+-------+
| CODE | VALUE |
+------+-------+
|  100 |     0 |
|  103 |     0 |
|  101 |     0 |
|  102 |     0 |
+------+-------+

How this works… The table data is replicated as normal and the real values are stored in the slave. The “slave_filter” process has registered against insert operations on this table and, when triggered, it sets the VALUE field to 0. The event is processed asynchronously from the replication, so there is a very narrow window during which the true values are stored in the slave.
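As mentioned above, extending the example to cover updates is mostly a matter of subscribing to update events as well as inserts when the event is defined in myCreateEvent() – a one-line sketch:

  myEvent.addTableEvent(NdbDictionary::Event::TE_UPDATE); // also fire when rows are updated

Note that do_blank() itself updates the row, so with update events enabled you would also want to skip events where VALUE is already 0 so that the filter does not keep re-triggering on its own writes.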





Setting up MySQL Asynchronous Replication for High Availability

Asynchronous Replication for High Availability

MySQL replication is often positioned as a solution to provide extra throughput for your database (especially when dealing with high numbers of read operations). What tends to be overlooked is how it can be used to provide high availability (HA) – no matter how much redundancy you have at a local level, your system remains at risk from a single catastrophic failure – loss of power, earthquake, terrorist attack etc. By using MySQL asynchronous replication to maintain a geographically remote copy of that data, service can be maintained through any single site failure.

As this replication is asynchronous, there are no low latency demands on the network and so the two sites can be thousands of miles apart while utilising low-cost connections.

This article provides a simple example of how to set up asynchronous replication between 2 MySQL databases in order to provide a highly available solution. First it covers 2 databases whose tables use the MyISAM storage engine, and then 2 MySQL Cluster databases, which I also configure for multi-master replication so that changes can be made at either site.

Subsequent articles will build on this to show:

  • Collision detection and resolution when using MySQL Cluster multi-master asynchronous replication
  • Introducing asynchronous replication to a running MySQL Cluster database

Throughout this article, two machines are used: ws1 (192.168.0.3) and ws2 (192.168.0.4). ws1 will be set up as the master and ws2 as the slave (in the multi-master configuration, both act as both master and slave).

Setting up replication for non-Cluster databases

Replication is performed from one MySQL Server to another; the master makes the changes available and then one or more other Servers pick up those changes and apply them to their own databases. In this example, both databases will store the table data using the same storage engine (MyISAM) but it’s possible to mix and match (for example, take a look at MySQL Cluster – flexibility of replication). As this is intended as a simple introduction to replication, I’m keeping life simple by assuming that this is all being set up before the database goes into production – if that isn’t the case for you and you need to cope with existing data then check out the MySQL documentation or subsequent articles on this site.

The my.cnf files can be set up as normal but the one for the MySQL Server that will act as the master needs to have binary logging enabled. Also, each of the servers needs a unique server-id. Here are the my.cnf files used for this example:

my.cnf (Master)

[mysqld]
datadir=/home/billy/mysql/myisam/data
basedir=/usr/local/mysql
port=3306
server-id=1
log-bin

my.cnf (Slave)

[mysqld]
datadir=/home/billy/mysql/myisam/data
basedir=/usr/local/mysql
port=3306
server-id=2

Fire up the Master MySQL Server:

[billy@ws1 myisam]$ mysqld --defaults-file=my.cnf&

The slave needs a userid/password in order to access the master server – best practice is to create a dedicated user with just the required privileges:

[billy@ws1 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant replication slave on *.* to 'repl'@'192.168.0.4'
 -> identified by 'repl';
Query OK, 0 rows affected (0.00 sec)
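As this master has only just been created, the slave can safely start from the very beginning of the binary log – that is what the empty MASTER_LOG_FILE and MASTER_LOG_POS=4 in the CHANGE MASTER command below mean. If your master were already in service, you would instead capture its current coordinates and use the reported File and Position values:

mysql> SHOW MASTER STATUS;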

The slave can now be told to start processing the replication data that will be staged by the master server:

[billy@ws2 myisam]$ mysqld --defaults-file=my.cnf&
[billy@ws2 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.34-ndb-7.0.6-cluster-gpl MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CHANGE MASTER TO
-> MASTER_HOST='192.168.0.3',
-> MASTER_PORT=3306,
-> MASTER_USER='repl',
-> MASTER_PASSWORD='repl',
-> MASTER_LOG_FILE='',
-> MASTER_LOG_POS=4;
090803 15:48:09 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=ws2-relay-bin' to avoid this problem.
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
090803 15:51:24 [Note] Slave SQL thread initialized, starting replication in log 'FIRST' at position 0, relay log './ws2-relay-bin.000001' position: 4
Query OK, 0 rows affected (0.00 sec)
090803 15:51:24 [Note] Slave I/O thread: connected to master 'repl@192.168.0.3:3306',replication started in log 'FIRST' at position 4
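Before creating any tables, it’s worth confirming that the channel is healthy; in the output of the following command, both Slave_IO_Running and Slave_SQL_Running should report Yes:

mysql> SHOW SLAVE STATUS\G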

Now to test that replication is up and running, create a table on the master, add some data and then check that the table and data can be read from the slave:

[billy@ws1 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> create table numbers (num1 int, num2 int);
Query OK, 0 rows affected (0.00 sec)

mysql> insert into numbers values (1,10),(2,20),(3,30);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0
[billy@ws2 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.1.34-ndb-7.0.6-cluster-gpl MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from numbers;
+------+------+
| num1 | num2 |
+------+------+
|    1 |   10 |
|    2 |   20 |
|    3 |   30 |
+------+------+
3 rows in set (0.00 sec)

Multi-Master Replication with MySQL Cluster

Multi-Master Replication for HA with MySQL Cluster

There are a few asynchronous replication capabilities that are unique to MySQL Cluster – one is that changes are replicated even if they are made directly to the data nodes using the NDB API, another is that replication can be performed in both directions, i.e. multi-master. One advantage of this is that you can share both read and write operations across both sites so that no capacity is wasted – it also gives you the confidence that either site is sane and ready to take over for the other at any point. You should aim to minimise how often the same rows are modified at the same time on both sites – conflict detection/resolution has been implemented, but it can only roll back the row changes that conflict rather than the full transaction.

It’s important to note that in this article, I’m not talking about the synchronous replication that takes place between data nodes within a single MySQL Cluster site (that happens in parallel and is orthogonal to the asynchronous replication to a remote site).

When performing multi-master asynchronous replication between 2 Clusters, 1 (or more) MySQL Servers in each Cluster is nominated as a master and 1 or more as slaves (it can be the same server that takes on both roles and you can have multiple channels set up in case a channel or MySQL Server is lost). Changes made to the data at either site through any of their MySQL Servers (or directly to the data nodes using the NDB API) will be replicated to the other site.

I will focus on setting up the replication, you can refer to Deploying MySQL Cluster over multiple hosts for the steps to configure and run each MySQL Cluster site.

Most of the steps are very similar to those in the first example – the main differences would come when introducing asynchronous replication to  a MySQL Cluster instance that already contains data and is up and running (processing updates) which will be covered in a subsequent article.

Binary logging needs to be enabled on the MySQL Server(s) at each site that will act as a replication master:

my.cnf (ws1)

[mysqld]
ndb-nodeid=4
ndbcluster
datadir=/home/billy/mysql/replication/7_0_6/data
basedir=/home/billy/mysql/replication/7_0_6
port=3306
server-id=1
log-bin

my.cnf (ws2)

[mysqld]
ndb-nodeid=4
ndbcluster
datadir=/home/billy/mysql/replication/7_0_6/data
basedir=/home/billy/mysql/replication/7_0_6
port=3306
server-id=2
log-bin

The MySQL Cluster nodes (including the MySQL Servers (mysqld)) from both sites should be started up as normal.

In this case, the replication users should be set up for both Clusters:


[billy@ws1 7_0_6]$ mysqld --defaults-file=my.cnf&
[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 2

Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant replication slave on *.* to 'repl'@'192.168.0.4'
-> identified by 'repl';

Query OK, 0 rows affected (0.01 sec)

[billy@ws2 7_0_6]$ mysqld --defaults-file=my.cnf&
[billy@ws2 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 3

Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant replication slave on *.* to 'repl'@'192.168.0.3'
-> identified by 'repl';

Query OK, 0 rows affected (0.00 sec)

Replication can then be setup and started on each of the MySQL Servers (those acting as slaves for each Cluster):

[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CHANGE MASTER TO
 -> MASTER_HOST='192.168.0.4',
 -> MASTER_PORT=3306,
 -> MASTER_USER='repl',
 -> MASTER_PASSWORD='repl',
 -> MASTER_LOG_FILE='',
 -> MASTER_LOG_POS=4;
090803 17:25:00 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=ws1-relay-bin' to avoid this problem.
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
090803 17:27:20 [Note] Slave SQL thread initialized, starting replication in log 'FIRST' at position 0, relay log './ws1-relay-bin.000001' position: 4
Query OK, 0 rows affected (0.02 sec)
090803 17:27:20 [Note] Slave I/O thread: connected to master 'repl@192.168.0.4:3306',replication started in log 'FIRST' at position 4

[billy@ws2 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CHANGE MASTER TO
 -> MASTER_HOST='192.168.0.3',
 -> MASTER_PORT=3306,
 -> MASTER_USER='repl',
 -> MASTER_PASSWORD='repl',
 -> MASTER_LOG_FILE='',
 -> MASTER_LOG_POS=4;
090803 17:25:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=ws2-relay-bin' to avoid this problem.
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
090803 17:27:25 [Note] Slave SQL thread initialized, starting replication in log 'FIRST' at position 0, relay log './ws2-relay-bin.000001' position: 4
Query OK, 0 rows affected (0.00 sec)
090803 17:27:25 [Note] Slave I/O thread: connected to master 'repl@192.168.0.3:3306',replication started in log 'FIRST' at position 4

This time, to make sure that replication is working in both directions, I make changes to both Clusters and then check that they appear at the other site:

[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test;
Database changed
mysql> create table numbers (num1 int, num2 int) engine=ndb;
090803 17:32:10 [Note] NDB Binlog: CREATE TABLE Event: REPL$test/numbers
090803 17:32:10 [Note] NDB Binlog: logging ./test/numbers (UPDATED,USE_WRITE)
Query OK, 0 rows affected (0.62 sec)

mysql> insert into numbers values (1,10),(2,20),(3,30);
Query OK, 3 rows affected (0.04 sec)
Records: 3  Duplicates: 0  Warnings: 0
[billy@ws2 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from numbers;
+------+------+
| num1 | num2 |
+------+------+
|    2 |   20 |
|    3 |   30 |
|    1 |   10 |
+------+------+
3 rows in set (0.01 sec)

mysql> insert into numbers values (4,40),(5,50),(6,60);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0
[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from numbers;
+------+------+
| num1 | num2 |
+------+------+
|    1 |   10 |
|    6 |   60 |
|    3 |   30 |
|    5 |   50 |
|    2 |   20 |
|    4 |   40 |
+------+------+
6 rows in set (0.02 sec)




Webinar: MySQL Cluster 7.0: What’s New? (AS/ANZ/IN)

Mat Keep and I will be presenting a Webinar on Wednesday, August 12, 2009 describing the new capabilities of MySQL Cluster 7.0. The time is designed to be Asia/Australasia-friendly but of course it’s open to everyone.

Register for the MySQL Cluster 7 Webinar here.

By attending this webinar, you will learn more about the next generation of MySQL Cluster, and how it enables you to deliver 99.999% database availability, with real time performance and linear scalability, while slashing TCO.

Boasting a range of breakthrough capabilities, MySQL Cluster 7.0 will enable you to stay ahead of your most demanding, mission-critical application requirements. Enhanced scalability delivers higher database performance with fewer nodes to simplify deployment and administration. Expanded platform support and interoperability delivers more choice in both the development and deployment of MySQL Cluster. Simplified cluster monitoring tools reduce DBA administration overhead and operational costs.

Wednesday, August 12, 2009: 14:00 Singapore/Malaysia/Philippines time

Wed, Aug 12: 11:30 India, Sri Lanka
Wed, Aug 12: 13:00 Thailand/Vietnam/Western Indonesia time
Wed, Aug 12: 14:00 Western Australia
Wed, Aug 12: 15:30 Northern Territory (Australia)
Wed, Aug 12: 15:30 South Australia
Wed, Aug 12: 16:00 Queensland (Australia)
Wed, Aug 12: 16:00 NSW, ACT, Victoria, Tasmania (Australia)
Wed, Aug 12: 18:00 New Zealand time




MySQL Cluster Data Node restart times

Restart times have been reduced in MySQL Cluster 6.3.28a & 7.0.9a – refer to that article for the new timings: http://www.clusterdb.com/mysql-cluster/mysql-cluster-restarts-get-faster/

Restarts are required for certain, infrequent maintenance activities. Note that there is no loss of service while a single node restarts.

When a data node restarts, it first attempts to load the data into memory from its local log files and then catches up with any subsequent changes by retrieving them from the surviving node(s) in its node group.

Based on this, you would expect the time taken to restart a data node to be influenced by:

  1. The amount of data that was stored on the data node before the restart
  2. The rate of updates being made to the data during the restart
  3. Network performance (assuming the data is being updated during recovery)

The times will also be influenced by certain configuration parameters, the performance of the host machine and whether the multi-threaded data node (ndbmtd) is being used.

To provide some insight into how these factors impact restart times, tests have been performed where the following factors are varied:

  • Database size (Each Gbyte is made up of 1,000,000 tuples in each of 5 tables)
  • Whether traffic is running or not (a single thread using the NDB API to send in up to 2K tps (10K updates/second))
  • Whether the 2 data nodes in the node group are on the same host or separated by a Gbit Ethernet network

The following factors are kept constant:

  • Physical hosts: Intel Core 2 Quad Q8200@2.33 GHz; 7.7 GBytes RAM
  • NoOfFragmentLogFiles: 300
  • MaxNoOfExecutionThreads=4
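For reference, the two fixed parameters are set in the cluster’s config.ini; a fragment might look like this (NoOfReplicas=2 reflects the 2 data nodes per node group used in these tests, and MaxNoOfExecutionThreads only takes effect for the multi-threaded ndbmtd):

[ndbd default]
NoOfReplicas=2
NoOfFragmentLogFiles=300
MaxNoOfExecutionThreads=4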

Here are the observed results:

Data Node restart times

There are a couple of things to note from these results:

  • Using the multi-threaded data node (ndbmtd) greatly improves the restart time (in this case, 4 threads were available, improvements could be even greater on an 8 core/thread system)
  • Results become less predictable when heavy update traffic is being processed (in this case, up to 10,000 updated rows/second on a single node group). In the tests, no attempt was made to regulate this traffic and the test application was run on the same host as one of the data nodes. Changes to the rate of updates will vary how long it takes for the restarting node to catch up as it’s chasing a moving target.

There is another recovery/restart scenario. The measurements shown above assumed that the file system on the data node’s host was intact and could be used to recover the in-memory copy – if that were not the case (or the data nodes were restarted with the “initial” option) then all of the data would have to be recovered from the surviving data node(s) in the same node group. As a comparison, restarting a 6 Gbyte data node with the “initial” option took 20 minutes compared to 8 minutes without it (ndbmtd, over Gbit n/w).
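For reference, a node restart of this kind is normally driven from the management client – a sketch, where the node ID (3) is illustrative:

shell> ndb_mgm -e "3 RESTART"

For an initial restart, stop the node and then start the data node binary with the --initial option so that it wipes its local file system and recovers everything over the network from its node group peer(s):

shell> ndb_mgm -e "3 STOP"
shell> ndbmtd --initial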





Doxygen output for MySQL Cluster NDB API & MGM API

NDB API Documentation


A new page has been added to this site: NDB API Docs, which presents the information from the header files for both the NDB API and the NDB Management API.

The material has been generated using doxygen and will be refreshed shortly after any new major, minor or maintenance release is made generally available (starting from MySQL Cluster 7.0.6).