Tag Archive for MySQL Replication

MySQL 5.6 Replication Enhancements – webinar replay

Global Transaction IDs - simplifying replication management

The replay of the MySQL 5.6 replication enhancements webinar has now been released; you can get the latest information on all of the great new content included in the MySQL 5.6 Development Releases, as well as some features that are still being developed. You can view the replay here.

Some of the topics discussed are:

  • Enhanced data integrity: Global Transaction Identifiers, Crash-Safe Slaves and Replication Event Checksums;
  • High performance: Multi-Threaded Slaves, Binlog Group Commit and Optimized Row-Based Replication;
  • Improved flexibility: Time Delayed Replication, Multi-Source Replication, Binlog API and Informational Log Events




What’s new in MySQL 5.6 Replication – free webinar

Global Transaction IDs - simplifying replication management

There will be a webinar this Wednesday where you can get the latest information on all of the great new content that has been included in the MySQL 5.6 Development Releases as well as some features that are still being developed. As always, the webinar is free but you need to register here in advance. Even if you can’t attend the live event it’s worth registering so that you get sent the replay.

Some of the topics we’ll be discussing are:

  • Enhanced data integrity: Global Transaction Identifiers, Crash-Safe Slaves and Replication Event Checksums;
  • High performance: Multi-Threaded Slaves, Binlog Group Commit and Optimized Row-Based Replication;
  • Improved flexibility: Time Delayed Replication, Multi-Source Replication, Binlog API and Informational Log Events

The event starts on Wednesday, December 14, 2011: 09:00 Pacific time; 17:00 UK; 18:00 CET.




MySQL HA Solutions – webinar replay

If you were unable to attend the live webinar (or you want to go back and listen to it again) then it’s now available to view on-line here.

Databases are the center of today’s web and enterprise applications, storing and protecting an organization’s most valuable assets and supporting business-critical applications. Just minutes of downtime can result in significant lost revenue and dissatisfied customers. Ensuring database high availability is therefore a top priority for any organization. Tune into this webcast to learn more.

The session discusses:

  1. Causes, effects and impact of downtime
  2. Methodologies for mapping applications to the right HA solution
  3. Overview of MySQL HA solutions
  4. Operational best practices to ensure business continuity




Enhanced conflict resolution with MySQL Cluster active-active replication

Detecting conflicts

The latest MySQL Cluster Development Milestone Release (MySQL Cluster 7.2.1 – select the “Development Release” tab at http://dev.mysql.com/downloads/cluster/#downloads) includes a couple of enhancements to the conflict detection and resolution mechanism for active-active (multi-master) replication. While MySQL Cluster has had conflict detection for years, it has now been made much more complete and a lot easier to use:

  • No changes needed to the application schema
  • Entire conflicting transaction is rolled back together with any dependent transactions

The focus of this post is to step through how to use this feature. It also attempts to explain how the feature works at a high level, but you should refer to the following posts for the design details and philosophy: Eventual consistency with MySQL & Eventual Consistency – detecting conflicts.

What is a conflict?

MySQL Cluster allows bi-directional replication between two (or more) clusters. Replication within each cluster is synchronous but between clusters it is asynchronous which means the following scenario is possible:

Conflict with asynchronous replication
Site A                    Replication              Site B
x == 10                                            x == 10
x = 11                                             x = 20
                          --- x=11 --->            x == 11
x == 20                   <--- x=20 ---

In this example a value (a column of a row in a table) is set to 11 on site A and the change is queued for replication to site B. In the meantime, an application sets the value to 20 on site B and that change is queued for replication to site A. Once both sites have received and applied the replicated change from the other cluster, site A contains the value 20 while site B contains 11 – in other words, the databases are now inconsistent.

How MySQL Cluster 7.2 implements eventual consistency

There are two phases to establishing consistency between both clusters after an inconsistency has been introduced:

  1. Detect that a conflict has happened
  2. Resolve the inconsistency

Detecting the conflict

The following animation illustrates how MySQL Cluster 7.2 detects that an inconsistency has been introduced by the asynchronous, active-active replication:

Detecting conflicts

While we typically consider the 2 clusters in an active-active replication configuration to be peers, in this case we designate one to be the primary and the other the secondary. Reads and writes can still be sent to either cluster but it is the responsibility of the primary to identify that a conflict has arisen and then remove the inconsistency.

A logical clock is used to identify (in relative terms) when a change is made on the primary – for those who know something of the MySQL Cluster internals, we use the index of the Global Checkpoint that the update is contained in. For all tables that have this feature turned on, an extra, hidden column is automatically added on the primary – this represents the value of the logical clock when the change was made.

Once the change has been applied on the primary, there is a “window of conflict” for the affected row(s), during which an inconsistency will arise if a different change is made to the same row(s) on the secondary. Once the slave on the secondary has applied the change from the primary, it sends a replication event back to the slave on the primary, containing the primary’s clock value associated with the changes that have just been applied on the secondary. (Remember that the clock is actually the Global Checkpoint Index and so this feature is sometimes referred to as Reflected GCI.) Once the slave on the primary has received this event, it knows that all changes tagged with a clock value no later than the reflected GCI are now safe – the window of conflict has closed.
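
As an aside, you can watch this mechanism from the outside: the reflected values travel via the standard mysql.ndb_apply_status table, which is updated on the primary as the secondary’s binlog is applied. A purely illustrative query (the table and its columns are the standard MySQL Cluster replication ones) – the row for the primary’s own server-id shows the most recent epoch that the secondary has applied and reflected back:

black-mysql> SELECT server_id, epoch FROM mysql.ndb_apply_status;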

If an application modifies this same row on the secondary before the replication event from the primary has been applied, then the secondary will send an associated replication event to the slave on the primary before it reflects the new GCI. The slave on the primary will process this replication event and compare the clock value recorded with the affected rows against the latest reflected GCI; as the clock value for the conflicting row is higher, the primary recognises that a conflict has occurred and launches the algorithm to resolve the inconsistency.

Resolving the inconsistency

In earlier releases of MySQL Cluster (or if choosing to use the original algorithms in MySQL Cluster 7.2) you had a choice of simply flagging the primary key of the conflicting rows or backing out one of the changes to the conflicting rows. Using the new NDB$EPOCH_TRANS function, the primary will overwrite the data in the secondary for the affected row(s) and any other rows that were updated in the same transaction (even if they are in tables for which conflict detection has not been enabled).

In fact the algorithm goes a step further and if there were subsequent transactions on the secondary that wrote to the conflicting rows then all of the changes from those dependent transactions on the secondary will be backed-out as well.

Worked example

In this section, we step through how to set up the active-active replication with the new conflict detection/resolution feature enabled, and then test it out by manually introducing some conflicting transactions.

Set up MySQL Clusters and basic active-active replication

Hosts used for active-active replication tests

To keep things simple, just two hosts are used; “black” will contain all nodes for the primary cluster and “blue” will contain all nodes for the secondary. As an extra simplification a single MySQL Server in each cluster will act as both the master and the slave.

This post will quickly show the configuration files and steps to get the 2 clusters up and running but for a better understanding of these steps you can refer to Deploying MySQL Cluster over multiple hosts.

config.ini (black):

[ndb_mgmd]
hostname=localhost
datadir=/home/billy/my_cluster/data
nodeid=1

[ndbd default]
noofreplicas=2
datadir=/home/billy/my_cluster/data

[ndbd]
hostname=localhost
nodeid=3

[ndbd]
hostname=localhost
nodeid=4

[mysqld]
nodeid=50

config.ini (blue):

[ndb_mgmd]
hostname=localhost
datadir=/home/billy/my_cluster/data
nodeid=1

[ndbd default]
noofreplicas=2
datadir=/home/billy/my_cluster/data

[ndbd]
hostname=localhost
nodeid=3

[ndbd]
hostname=localhost
nodeid=4

[mysqld]
nodeid=50

my.cnf for primary cluster (black):

[mysqld]
ndbcluster
datadir=/home/billy/my_cluster/data
server-id=8
log-bin=black-bin.log 
ndb-log-transaction-id=1
binlog-format=ROW
ndb-log-update-as-write=0

my.cnf for secondary cluster (blue):

[mysqld]
ndbcluster
datadir=/home/billy/my_cluster/data
server-id=9
log-bin=blue-bin.log
ndb-log-transaction-id=1
binlog-format=ROW
ndb-log-update-as-write=0
ndb-log-apply-status=1

Note that the options set in the my.cnf files are very important – if any of these are missing then things will not work as expected. In particular, ndb-log-update-as-write=0 ensures that updates are logged as updates rather than as writes (needed for conflict detection), ndb-log-transaction-id allows the conflicting transaction and its dependents to be identified, and ndb-log-apply-status=1 on the secondary is what reflects the primary’s epochs back to it.

Start up primary cluster (black):

billy@black:~/my_cluster$ ndb_mgmd --initial
   -f conf/config.ini --configdir=/home/billy/my_cluster/conf/
billy@black:~/my_cluster$ ndbd --initial
billy@black:~/my_cluster$ ndbd --initial
billy@black:~/my_cluster$ ndb_mgm -e show # wait for ndbds to finish starting
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @127.0.0.1  (mysql-5.5.15 ndb-7.2.1, Nodegroup: 0, Master)
id=4    @127.0.0.1  (mysql-5.5.15 ndb-7.2.1, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (mysql-5.5.15 ndb-7.2.1)

[mysqld(API)]   3 node(s)
id=50 (not connected, accepting connect from any host)

billy@black:~/my_cluster$ mysqld --defaults-file=conf/my.cnf &

Start up secondary cluster (blue):

billy@blue:~/my_cluster$ ndb_mgmd --initial
   -f conf/config.ini --configdir=/home/billy/my_cluster/conf/
billy@blue:~/my_cluster$ ndbd --initial
billy@blue:~/my_cluster$ ndbd --initial
billy@blue:~/my_cluster$ ndb_mgm -e show # wait for ndbds to finish starting
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @127.0.0.1  (mysql-5.5.15 ndb-7.2.1, Nodegroup: 0, Master)
id=4    @127.0.0.1  (mysql-5.5.15 ndb-7.2.1, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (mysql-5.5.15 ndb-7.2.1)

[mysqld(API)]   3 node(s)
id=50 (not connected, accepting connect from any host)

billy@blue:~/my_cluster$ mysqld --defaults-file=conf/my.cnf &

Both clusters are now running and replication can be activated for both sites:

billy@black:~/my_cluster$ mysql -u root --prompt="black-mysql> "
black-mysql> CREATE USER repl_user@192.168.1.16;
black-mysql> GRANT REPLICATION SLAVE ON *.* TO repl_user@192.168.1.16
                 IDENTIFIED BY 'billy';
billy@blue:~/my_cluster$ mysql -u root --prompt="blue-mysql> "
blue-mysql> CREATE USER repl_user@192.168.1.20;
blue-mysql> GRANT REPLICATION SLAVE ON *.* TO repl_user@192.168.1.20
                 IDENTIFIED BY 'billy';
blue-mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.20',
    -> MASTER_USER='repl_user',
    -> MASTER_PASSWORD='billy',
    -> MASTER_LOG_FILE='',
    -> MASTER_LOG_POS=4;
blue-mysql> START SLAVE;
black-mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.16',
    -> MASTER_USER='repl_user',
    -> MASTER_PASSWORD='billy',
    -> MASTER_LOG_FILE='',
    -> MASTER_LOG_POS=4;
black-mysql> START SLAVE;

Set up enhanced conflict detection & resolution

The first step is to identify the tables that need conflict detection enabled. Each of those tables then has to have an entry in the mysql.ndb_replication table, where they’re tagged as using the new NDB$EPOCH_TRANS() function – you could also choose to use NDB$EPOCH(), in which case only the changes to conflicting rows will be backed out rather than the full transactions. A few things to note:

  • This must be done before creating the application tables themselves
  • Should only be done on the primary
  • By default the table doesn’t exist and so the very first step is to create it:

black-mysql> CREATE TABLE mysql.ndb_replication  (
    ->     db VARBINARY(63),
    ->     table_name VARBINARY(63),
    ->     server_id INT UNSIGNED,
    ->     binlog_type INT UNSIGNED,
    ->     conflict_fn VARBINARY(128),
    ->     PRIMARY KEY USING HASH (db, table_name, server_id)
    -> )   ENGINE=NDB
    -> PARTITION BY KEY(db,table_name);
black-mysql> REPLACE INTO mysql.ndb_replication VALUES ('clusterdb', 'simple1', 8, 0,
'NDB$EPOCH_TRANS()');
black-mysql> REPLACE INTO mysql.ndb_replication VALUES ('clusterdb', 'simple2', 8, 0,
'NDB$EPOCH_TRANS()');
black-mysql> REPLACE INTO mysql.ndb_replication VALUES ('clusterdb', 'simple3', 8, 0,
'NDB$EPOCH_TRANS()');

For each of these tables you should also create an exceptions table, which will record any conflicts that resulted in changes being rolled back. The format of these tables is rigidly defined, so take care to copy the column types exactly; again, this only needs doing on the primary:

black-mysql> CREATE DATABASE clusterdb;USE clusterdb;
black-mysql> CREATE TABLE simple1$EX (server_id INT UNSIGNED,
               master_server_id INT UNSIGNED, master_epoch BIGINT UNSIGNED,
               count INT UNSIGNED, id INT NOT NULL, PRIMARY KEY(server_id,
               master_server_id, master_epoch, count)) ENGINE=NDB;
black-mysql> CREATE TABLE simple2$EX (server_id INT UNSIGNED,
               master_server_id INT UNSIGNED, master_epoch BIGINT UNSIGNED,
               count INT UNSIGNED, id INT NOT NULL, PRIMARY KEY(server_id,
               master_server_id, master_epoch, count)) ENGINE=NDB;
black-mysql> CREATE TABLE simple3$EX (server_id INT UNSIGNED,
               master_server_id INT UNSIGNED, master_epoch BIGINT UNSIGNED,
               count INT UNSIGNED, id INT NOT NULL, PRIMARY KEY(server_id,
               master_server_id, master_epoch, count)) ENGINE=NDB;

Finally, the application tables themselves can be created (this only needs doing on the primary as they’ll be replicated to the secondary):

black-mysql> CREATE TABLE simple1 (id INT NOT NULL PRIMARY KEY, value INT) ENGINE=ndb;
black-mysql> CREATE TABLE simple2 (id INT NOT NULL PRIMARY KEY, value INT) ENGINE=ndb;
black-mysql> CREATE TABLE simple3 (id INT NOT NULL PRIMARY KEY, value INT) ENGINE=ndb;

Everything is now set up and the new configuration can be tested to ensure that conflicts are detected and the correct updates are rolled back.

Testing enhanced active-active replication and conflict detection

The first step is to add some data to our new tables (note that at this point replication is running and so the inserts only need to be made on the primary) and then update a row to make sure that it is replicated to the secondary:

black-mysql> REPLACE INTO simple1 VALUES (1,10);
black-mysql> REPLACE INTO simple2 VALUES (1,10);
black-mysql> REPLACE INTO simple3 VALUES (1,10);
black-mysql> UPDATE simple1 SET value=12 WHERE id=1;
blue-mysql> USE clusterdb;
blue-mysql> SELECT * FROM simple1;
+----+-------+
| id | value |
+----+-------+
|  1 |    12 |
+----+-------+

It is important to verify that the NDB$EPOCH_TRANS() function rolls back any transactions on the secondary that involve a conflict (as well as subsequent, dependent transactions that modify the same rows). To trigger this manually, the simplest approach is to stop the slave IO thread on the secondary in order to widen the window of conflict (which is otherwise very short). Once the slave IO thread has been stopped, a change is made to table simple1 on the primary and then the secondary makes a (conflicting) change to the same row, as well as a change to table simple2 within the same transaction. A second transaction on the secondary changes a row in simple3 – as it doesn’t touch any rows that have been involved in a conflict, that change should stand.

blue-mysql> STOP SLAVE IO_THREAD;
black-mysql> UPDATE simple1 SET value=13 WHERE id=1;
blue-mysql> BEGIN; # conflicting transaction
blue-mysql> UPDATE simple1 SET value=20 WHERE id=1;
blue-mysql> UPDATE simple2 SET value=20 WHERE id=1;
blue-mysql> COMMIT;
blue-mysql> UPDATE simple3 SET value=20 WHERE id=1; # non conflicting
blue-mysql> SELECT * FROM simple1;
+----+-------+
| id | value |
+----+-------+
|  1 |    20 |
+----+-------+
blue-mysql> SELECT * FROM simple2;
+----+-------+
| id | value |
+----+-------+
|  1 |    20 |
+----+-------+
blue-mysql> SELECT * FROM simple3;
+----+-------+
| id | value |
+----+-------+
|  1 |    20 |
+----+-------+

If you now check the exception tables then you can see that the primary (black) has received the changes from the secondary (blue) and, because the first transaction updated the same row in simple1 during its window of conflict, it has recorded that the change needs to be rolled back – this will happen as soon as the slave IO thread is restarted on the secondary:

black-mysql> SELECT * FROM simple1$EX;
+-----------+------------------+---------------+-------+----+
| server_id | master_server_id | master_epoch  | count | id |
+-----------+------------------+---------------+-------+----+
|         8 |                9 | 1494648619009 |     3 |  1 |
+-----------+------------------+---------------+-------+----+

black-mysql> SELECT * FROM simple2$EX;
+-----------+------------------+---------------+-------+----+
| server_id | master_server_id | master_epoch  | count | id |
+-----------+------------------+---------------+-------+----+
|         8 |                9 | 1494648619009 |     1 |  1 |
+-----------+------------------+---------------+-------+----+

black-mysql> SELECT * FROM simple3$EX;
Empty set (0.05 sec)
blue-mysql> START SLAVE IO_THREAD;
blue-mysql> SELECT * FROM simple1;
+----+-------+
| id | value |
+----+-------+
|  1 |    13 |
+----+-------+

blue-mysql> SELECT * FROM simple2;
+----+-------+
| id | value |
+----+-------+
|  1 |    10 |
+----+-------+

blue-mysql> SELECT * FROM simple3;
+----+-------+
| id | value |
+----+-------+
|  1 |    20 |
+----+-------+

These are the results we expect – simple1 has the value set by the primary, with the subsequent change on the secondary rolled back; simple2 was not updated by the primary but the change on the secondary was rolled back as it was made in the same transaction as the conflicting update to simple1. The change on the secondary to simple3 has survived as it was made outside of any conflicting transaction and did not depend on any conflicting changes. Finally, confirm that the data is identical on the primary:

black-mysql> SELECT * FROM simple1;
+----+-------+
| id | value |
+----+-------+
|  1 |    13 |
+----+-------+

black-mysql> SELECT * FROM simple2;
+----+-------+
| id | value |
+----+-------+
|  1 |    10 |
+----+-------+

black-mysql> SELECT * FROM simple3;
+----+-------+
| id | value |
+----+-------+
|  1 |    20 |
+----+-------+

Statistics on the primary record that 1 conflict has been detected, affecting 1 transaction, and resulting in 2 row changes being rolled back:

black-mysql> SHOW STATUS LIKE 'ndb_conflict%';
+------------------------------------------+-------+
| Variable_name                            | Value |
+------------------------------------------+-------+
| Ndb_conflict_fn_max                      | 0     |
| Ndb_conflict_fn_old                      | 0     |
| Ndb_conflict_fn_max_del_win              | 0     |
| Ndb_conflict_fn_epoch                    | 0     |
| Ndb_conflict_fn_epoch_trans              | 1     |
| Ndb_conflict_trans_row_conflict_count    | 1     |
| Ndb_conflict_trans_row_reject_count      | 2     |
| Ndb_conflict_trans_reject_count          | 1     |
| Ndb_conflict_trans_detect_iter_count     | 1     |
| Ndb_conflict_trans_conflict_commit_count | 1     |
+------------------------------------------+-------+

We’re keen to get feedback on this feature, so please go ahead and download MySQL Cluster 7.2.1 and let us know how you get on through the comments on this post.





London 18th October: Oracle Technology Network MySQL Developer Day

I will be presenting on MySQL Cluster and MySQL Replication at the Oracle Technology Network MySQL Developer Day in London on Tuesday, 18 October 2011 (8:30 AM – 4:00 PM). It’s free but you need to register here while there are still places (attendance has been extremely high at other locations).

The MySQL Developer Day is a one-stop shop for you to learn all the essential MySQL skills. In this free, one-day seminar, we will cover everything you need to know to successfully design, develop, and manage your MySQL databases. You’ll also learn the guidelines and best practices in performance tuning and scalability.

Attend this event and gain the knowledge to:

  • Develop your new applications cost-effectively using MySQL
  • Improve performance of your existing MySQL databases
  • Manage your MySQL environment more efficiently

Agenda:

8:30 a.m. – 9:30 a.m. Registration
9:30 a.m. – 10:30 a.m. Keynote: MySQL Essentials
10:30 a.m. – 11:30 a.m. Session: MySQL Replication and Scalability
11:30 a.m. – 11:45 a.m. Break
11:45 a.m. – 12:45 p.m. Session: MySQL Cluster
12:45 p.m. – 1:30 p.m. Lunch
1:30 p.m. – 2:30 p.m. Session: MySQL Administration and Management Tools
2:30 p.m. – 3:30 p.m. Session: MySQL Performance Tuning
3:30 p.m. – 4:00 p.m. Close

If you’re going to be in the area, please register and come along – would be good to meet as many of you as possible.





Delayed Replication in MySQL 5.6 Development Release

The new Development Release for MySQL 5.6 contains a great feature that our users have been requesting for a while (WorkLog #344, first raised in 2010!) – delayed replication.

Stopping mistakes from being propagated

The concept (and as you’ll see the execution) is extremely simple. If a user makes a mistake on the master – such as dropping some critical tables – then we want to give them the opportunity to recover the situation by using the data held on one of the slaves. The problem is that the slave is busily trying to keep up with the master and in all likelihood will have dropped these tables before the user has time to pull the plug on the replication stream. What this feature does is give the DBA the option to introduce a configurable delay into the replication process so that they have time to cut it off before the mistake is propagated.

This post explains how the feature works, how to set it up, and how to bring the slave up to date (to the point in time just before the mistake was made on the master).

MySQL Replication Implementation

To understand how this is implemented, it helps to have a little bit of background on how MySQL replication is implemented. When a change is made on the master, it is applied to the master’s local disk copy and then written to the binary log. The change is then asynchronously (but normally immediately) copied from the master’s binary log to the relay log on the slave; from there an SQL thread on the slave will read the change from the relay log and apply it to the slave’s copy of the data.

This feature works by allowing the user to configure a delay between when the change is applied on the master and when that change is taken from the relay log and applied to the slave. Note that if the master fails during this delay period then the change is not lost, as it has already been safely recorded in the slave’s relay log.
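
As an illustration, both halves of this pipeline can be watched on the slave using SHOW SLAVE STATUS. The field names in this sketch are the real ones; the values are invented for illustration:

slave> SHOW SLAVE STATUS\G
...
        Master_Log_File: ws2-bin.000001  # IO thread: master binlog file being read
    Read_Master_Log_Pos: 957             # IO thread: how much has reached the relay log
  Relay_Master_Log_File: ws2-bin.000001  # SQL thread: master binlog file being applied
    Exec_Master_Log_Pos: 842             # SQL thread: how much has actually been applied
...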

Immediate & Delayed Replication

As the delay is implemented on the slave, you are free to use ‘real-time’ replication to one slave (to allow the fastest possible failover if the master fails) and delayed replication to a second slave to guard against user error. This is the setup that this post steps through.

For simplicity, all three MySQL Servers will be run on a single host but each uses a different port number as shown in the diagram. “slave” will apply changes as quickly as it can while “slave2” will introduce a delay when applying changes from its relay log.

Setting up the first slave is very standard:

master> CREATE USER repl_user@localhost;
master> GRANT REPLICATION SLAVE ON *.* TO 
  repl_user@localhost IDENTIFIED BY 'pw';
slave> CHANGE MASTER TO
 -> MASTER_HOST = 'localhost',
 -> MASTER_PORT = 3306,
 -> MASTER_USER = 'repl_user',
 -> MASTER_PASSWORD = 'pw';
slave> START SLAVE;

When setting up the delayed slave, one extra parameter is included in the CHANGE MASTER command:

slave2> CHANGE MASTER TO
 -> MASTER_HOST = 'localhost',
 -> MASTER_PORT = 3306,
 -> MASTER_USER = 'repl_user',
 -> MASTER_PASSWORD = 'pw',
 -> MASTER_DELAY = 20;
slave2> START SLAVE;

The MASTER_DELAY parameter indicates the delay in seconds (here I’ve used 20 seconds – in a production environment you’d probably want to give yourself much longer).
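
You can check that the delay is in effect (and how long the next queued event still has to wait) using SHOW SLAVE STATUS; the SQL_Delay and SQL_Remaining_Delay fields are part of MySQL 5.6 – the values in this sketch are illustrative:

slave2> SHOW SLAVE STATUS\G
...
                SQL_Delay: 20   # the configured MASTER_DELAY in seconds
      SQL_Remaining_Delay: 13   # seconds left before the waiting event is applied (NULL if none)
...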

Prior to this, a simple table had already been created:

master> CREATE DATABASE clusterdb;USE clusterdb;
master> CREATE TABLE towns (Town VARCHAR(20));

Next we check that changes are immediately applied on slave while delayed on slave2:

master> REPLACE INTO towns VALUES ("Maidenhead"),("Bray");
slave> SELECT * FROM towns;
 +------------+
 | Town       |
 +------------+
 | Maidenhead |
 | Bray       |
 +------------+
slave2> SELECT * FROM towns;
 Empty set (0.00 sec)

slave2> SELECT * FROM towns; # after the 20 second delay
 +------------+
 | Town       |
 +------------+
 | Maidenhead |
 | Bray       |
 +------------+

The next step is to simulate a user error: I add some more data to the table and then drop the table. Following on from that, I stop replication on slave2 before the mistake is applied there:

master> REPLACE INTO towns VALUES ("Cookham"),("Marlow");
master> DROP TABLE towns;
slave> SELECT * FROM towns;
     ERROR 1146 (42S02): Table 'clusterdb.towns' doesn't exist
slave2> STOP SLAVE;
slave2> SELECT * FROM towns;
 +------------+
 | Town       |
 +------------+
 | Maidenhead |
 | Bray       |
 +------------+

Bring the Slave up to the Desired Point in Time

This is a good start: while slave has dropped the table, it still exists on slave2. Unfortunately, slave2 is missing the additions to the table that were made just before the mistake. The next step is to bring slave2 almost up to date – stopping just before the table was dropped. To do this we need to find the position within the master’s binary log just before the table was dropped – this can be done using the SHOW BINLOG EVENTS command on the master. Once we have that position (file-name + position) we can tell slave2 to catch up just to that point using START SLAVE UNTIL. Once that has been done, I check that the extra 2 inserts have been applied to slave2:

master> SHOW BINLOG EVENTS\G
....
*************************** 10. row ***************************
 Log_name: ws2-bin.000001
 Pos: 842
 Event_type: Query
 Server_id: 1
 End_log_pos: 957
 Info: use `clusterdb`; REPLACE INTO towns VALUES ("Cookham"),("Marlow")
 *************************** 11. row ***************************
 Log_name: ws2-bin.000001
 Pos: 957
 Event_type: Xid
 Server_id: 1
 End_log_pos: 984
 Info: COMMIT /* xid=32 */
 *************************** 12. row ***************************
 Log_name: ws2-bin.000001
 Pos: 984
 Event_type: Query
 Server_id: 1
 End_log_pos: 1096
 Info: use `clusterdb`; DROP TABLE `towns` /* generated by server */
slave2> START SLAVE UNTIL
 -> MASTER_LOG_FILE='ws2-bin.000001',
 -> MASTER_LOG_POS=984;
slave2> SELECT * FROM towns;
 +------------+
 | Town       |
 +------------+
 | Maidenhead |
 | Bray       |
 | Cookham    |
 | Marlow     |
 +------------+

Success! Now slave2 contains exactly the data we need. After this it’s up to you what to do next; typically this could involve promoting slave2 to be the new master.
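
If promotion is the route you take, a minimal sketch of the final step (assuming slave2 should forget its old master entirely; RESET SLAVE ALL needs a recent 5.6 build – use plain RESET SLAVE on earlier ones):

slave2> STOP SLAVE;
slave2> RESET SLAVE ALL; # discard the old replication configuration
# ...then repoint applications (and any remaining slaves) at slave2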

If you want to try this out for yourselves then you can download the MySQL 5.6 Development Milestone Release from dev.mysql.com (select the Development Milestone Release sub-tab to get MySQL 5.6).





Feature Preview – Multi-Threaded Replication Slaves

This week, MySQL released a feature preview – the ability to have multiple applier threads on the slave so that it is better able to keep pace with the updates being applied on the master. To avoid potential sequencing issues, all updates for a particular database are handled by the same thread – in other words, you need to make sure that your application uses multiple databases in order to see the benefits.
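
To give a flavour of how it’s switched on, here’s a sketch of the my.cnf entry – using the option name from the MySQL 5.6 development releases (the preview build may differ):

[mysqld]
# 0 (the default) keeps the single-threaded behaviour; a higher value
# creates that many worker threads, with each database pinned to one worker
slave-parallel-workers=4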
Luís Soares (from the development team) has written a great blog post going into the details.





Almost here – MySQL Cluster at Collaborate 11


A quick reminder that MySQL is well represented at the Oracle Collaborate conference which starts in Orlando on Sunday.

For those not familiar with Collaborate, it’s the big community conference for Oracle users – this year it’s in Orlando from April 10th through 14th (I’ve just re-checked the weather forecast, 31 Celsius vs. -18 at the last conference I presented at – OOW Beijing in December – what a difference 4 months and 8,000 miles make!).

I’ll be presenting on MySQL Cluster in a session called “Building Highly Available Scalable Real-Time Services with MySQL Cluster” where I’ll focus on:

  • Basics of MySQL Cluster – what it does, who uses it and why
  • Accessing your data – SQL and NoSQL access methods
  • Latest features
  • What’s coming in the future.

My session starts at 8:00 am on Tuesday 12th April (sorry for the early start) and is in room 306A.

For people interested in MySQL Cluster, another session you should try to attend is “MySQL Cluster for the Enterprise” presented by Craig Russell at 2:15 pm on Wednesday 13th April.

Other MySQL HA topics from the Oracle team:

To get an overall picture of what is happening to MySQL in Oracle, you should attend Tomas Ulin’s (VP of MySQL Engineering) “The State of MySQL” session at 9:15 am on Monday 11th April.

You can see a full list of sessions in the MySQL track here.

And last but not least, come and visit us at the MySQL booths in the Oracle Demo Grounds (Booth #657) to chat with us and/or get a demo. Here are the opening times:

  • Monday 6:00pm – 8:00pm (Welcome Reception)
  • Tuesday 10:15am – 4:00pm & 5:30pm-7:00pm (Reception)
  • Wednesday 10:15am – 4:00pm

I’ll be at the demo booth as much as possible but definitely for the 6:00pm – 8:00pm slot on Monday and from 10:15am – 1pm on Wednesday – hope to see some of you there.

Register for the event at http://collaborate11.ioug.org/Home/Registration/tabid/82/Default.aspx





Want to attend a MySQL 5.5 seminar?

Oracle University has added a new course to its training catalog “Introduction to MySQL 5.5” which is a day-long seminar. The seminar goes into some detail on many aspects of using MySQL and of course pays particular attention to the new features in MySQL 5.5. I’ve reviewed the material and can assure you that there is plenty of it!

Of most relevance to this blog is the overview of MySQL Cluster (which isn’t a big focus of the seminar as Cluster currently uses MySQL 5.1) and MySQL replication – the highest profile 5.5 replication feature being semi-synchronous replication, which can make sure that updates are not lost even if the master fails catastrophically.
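
For those who haven’t seen it, semi-synchronous replication in MySQL 5.5 is delivered as a pair of plugins; a minimal sketch of enabling it (one command pair on the master, one on the slave):

master> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
master> SET GLOBAL rpl_semi_sync_master_enabled = 1;
slave> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
slave> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
slave> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD; # reconnect so the setting takes effect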

At the time of writing, neither the online nor the instructor-led sessions have been scheduled, so you should register your interest here. When OU has sufficient registrants they’ll schedule the sessions (note that, unlike webinars, white papers etc., there is a charge for this training).

Here’s the official description:

Introduction to MySQL 5.5

Duration: 1 Day

What you will learn

This one-day seminar covers all the new features and other key enhancements in MySQL 5.5 and the MySQL Enterprise Edition, including performance, scalability, availability and backups. Instructor lecture is supported by live demos as necessary. By attending this course, you learn how to plan your use of the MySQL 5.5 product release more effectively.

Students who can benefit from this course:

  • New users of MySQL who have little or no previous experience with a relational database management system
  • Existing MySQL users who are interested in learning about the new functionality in the MySQL 5.5 release

Learn to:

  • Plan your use of the MySQL 5.5 product release more effectively

Audience

  • Database Administrators
  • Database Designers

Prerequisites

  • Basic computer literacy
  • Knowledge of database concepts
  • Previous experience with any command-line program

Course Objectives

  • Understand the features and benefits of MySQL
  • Recognize new MySQL 5.5 features
  • Understand how MySQL Enterprise Monitor and MySQL Query Analyzer alert DBAs to potential problems, queries and tuning opportunities before they impact key systems or applications
  • Understand how MySQL Enterprise Monitor and MySQL Query Analyzer work with MySQL databases
  • Distinguish how MySQL Workbench provides GUI-based data modeling, SQL development, deployment, and comprehensive administrative tools
  • Understand Replication features and functionality
  • Recognize how MySQL Enterprise Backup supports full, incremental and partial backups with compression, as well as point-in-time recovery

Course Topics

  • Introduction: Features and Benefits of MySQL; MySQL Products and Services; MySQL Community Edition vs. MySQL Enterprise Edition; MySQL Certification Program; MySQL Website; MySQL Architecture; How to upgrade to MySQL 5.5
  • What’s New in MySQL 5.5: Introducing InnoDB as MySQL’s Default Storage Engine; Performance, Scalability and Benchmarks; Improved Availability; Improved Manageability and Efficiency; Improved Usability; Improved Instrumentation and Diagnostics; MySQL Production-Ready Software and Support
  • MySQL Administration: Enterprise Monitor and Query Analyser; MySQL Workbench (server configuration, user administration, object management)
  • MySQL 5.5 Replication Enhancements: Overview of MySQL Replication; MySQL 5.5 Replication Features; User Wants and Needs; Replication Enhancements in MySQL 5.5; What’s Cooking in the Replication Labs; Getting Started with MySQL 5.5 Replication
  • MySQL Enterprise Backup: Database Backup Overview; MySQL Enterprise Backup Features and Benefits; Database Backup Types: Comparison; MySQL Enterprise Backup: How it Works





MySQL (including Cluster) at the 2010 UK Oracle User Group Conference

The UK Oracle User Group Conference runs from 29th November through 1st December 2010 – the MySQL content is on Wednesday 1st at the ICC in Birmingham. This is a great chance for MySQL users to find out more about the latest developments and question some of the experts (both from inside and outside Oracle), and for Oracle users who don’t have experience with MySQL to discover what it’s all about.

The day starts with a keynote from Rich Mason (VP responsible for MySQL sales) at 8:45; I’ll be part of a MySQL panel at 9:50 and then Mat Keep and I will be presenting on MySQL Cluster at 14:15. In the Cluster session we’ll introduce MySQL Cluster and cover the most important features of MySQL Cluster 7.1: ndbinfo, the MySQL Cluster Connector for Java and other features that push the limits of MySQL Cluster into new workloads and communities.

ndbinfo presents real-time usage statistics from the MySQL Cluster data nodes as a series of SQL tables, enabling developers and administrators to monitor database performance and optimize their applications.
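
As a taster of what this looks like in practice, ndbinfo is queried like any other schema – for example, per-node memory usage (memoryusage is one of the standard ndbinfo tables; output omitted here):

mysql> SELECT node_id, memory_type, used, total
    ->   FROM ndbinfo.memoryusage;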

Designed for Java developers, the MySQL Cluster Connector for Java implements an easy-to-use and high performance native Java interface and OpenJPA plug-in that maps Java classes to tables stored in the MySQL Cluster database.

Finally, we’ll present some real-world case studies and explain some of the new capabilities that are currently in development.

Another session that’s likely to be suited to people interested in MySQL Cluster is a more general presentation on delivering High Availability with MySQL – presented by Mario Beck.

As you’d expect there are many more MySQL sessions to choose from – check the full schedule.

To register for just the MySQL sessions use the MySQL registration page; the full conference registration page is here.

Hope to see as many of you there as possible!