Archive for MySQL Cluster

MySQL Cluster Manager 1.1 available!

As the title of this post suggests, MySQL Cluster Manager 1.1 is now available – but this actually has a double meaning:

  1. MySQL Cluster Manager 1.1 is GA (I’ll explain below the major improvements over 1.0)
  2. Everyone is now able to download and try it (without first having to purchase a license)!

This software is only available through commercial licenses (i.e. not GPL like the rest of Cluster) and until recently there was no way for anyone to try it out unless they had already bought MySQL Cluster CGE. That changed on Monday, when the MySQL software became available through Oracle’s E-Delivery site. Now you can download the software and try it out for yourself (just select “MySQL Database” as the product pack, select your platform, click “Go” and then scroll down to find the software).

So What is MySQL Cluster Manager?

MySQL Cluster Manager provides the ability to control the entire cluster as a single entity, while also supporting very granular control down to individual processes within the cluster itself.  Administrators are able to create and delete entire clusters, and to start, stop and restart the cluster with a single command.  As a result, administrators no longer need to manually restart each data node in turn, in the correct sequence, or to create custom scripts to automate the process.

MySQL Cluster Manager automates on-line management operations, including the upgrade, downgrade and reconfiguration of running clusters as well as adding nodes on-line for dynamic, on-demand scalability, without interrupting applications or clients accessing the database.  Administrators no longer need to manually edit configuration files and distribute them to other cluster nodes, or to determine if rolling restarts are required. MySQL Cluster Manager handles all of these tasks, thereby enforcing best practices and making on-line operations significantly simpler, faster and less error-prone.

MySQL Cluster Manager is able to monitor cluster health at both an Operating System and per-process level by automatically polling each node in the cluster.  It can detect if a process or server host is alive, dead or has hung, allowing for faster problem detection, resolution and recovery.
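
For example, once a cluster is under management, its health can be checked with a single command from any of the agents. A quick sketch based on the show status command (“mycluster” is the cluster name used in the examples below):

mcm> show status --process mycluster;

This reports the state of every node (process) in the cluster as seen by the agents.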

To deliver 99.999% availability, MySQL Cluster has the capability to self-heal from failures by automatically restarting failed Data Nodes, without manual intervention.  MySQL Cluster Manager extends this functionality by also monitoring and automatically recovering SQL and Management Nodes.  

How is it Implemented?

MySQL Cluster Manager Architecture

MySQL Cluster Manager is implemented as a series of agent processes that co-operate with each other to manage the MySQL Cluster deployment; one agent running on each host machine that will be running a MySQL Cluster node (process). The administrator uses the regular mysql command to connect to any one of the agents using the port number of the agent (defaults to 1862 compared to the MySQL Server default of 3306).

How is it Used?

When using MySQL Cluster Manager to manage your MySQL Cluster deployment, the administrator no longer edits the configuration files (for example config.ini and my.cnf); instead, these files are created and maintained by the agents. In fact, if those files are manually edited, the changes will be overwritten by the configuration information which is held within the agents. Each agent stores all of the cluster configuration data, but it only creates the configuration files that are required for the nodes that are configured to run on that host.
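
For example, rather than editing config.ini by hand, a configuration attribute is changed through the agents using the set command. A sketch (the attribute and value here are purely illustrative):

mcm> set DataMemory:ndbd=2G mycluster;

The agents update their stored configuration, rewrite the files they own and work out which nodes (if any) need to be restarted for the change to take effect.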

Similarly, when using MySQL Cluster Manager, management actions must not be performed by the administrator using the ndb_mgm command (which connects directly to the management node, meaning that the agents would have no visibility of any operations performed with it).

When using MySQL Cluster Manager, the ‘angel’ processes are no longer needed (or created) for the data nodes, as it becomes the responsibility of the agents to detect the failure of the data nodes and recreate them as required. Additionally, the agents extend this functionality to include the management nodes and MySQL Server nodes.

Example 1: Create a Cluster from Scratch

The first step is to connect to one of the agents and then define the set of hosts that will be used for the Cluster:

$ mysql -h 192.168.0.10 -P 1862 -u admin -psuper --prompt='mcm> '
mcm> create site --hosts=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13 mysite;

Next step is to tell the agents where they can find the Cluster binaries that are going to be used, define what the Cluster will look like (which nodes/processes will run on which hosts) and then start the Cluster:

mcm> add package --basedir=/usr/local/mysql_6_3_27a 6.3.27a;
mcm> create cluster --package=6.3.27a
--processhosts=ndb_mgmd@192.168.0.10,ndb_mgmd@192.168.0.11,
ndbd@192.168.0.12,ndbd@192.168.0.13,ndbd@192.168.0.12,
ndbd@192.168.0.13,mysqld@192.168.0.10,mysqld@192.168.0.11 mycluster;
mcm> start cluster mycluster; 

Example 2: On-Line upgrade of a Cluster

A great example of how MySQL Cluster Manager can simplify management operations is upgrading the Cluster software. If performing the upgrade by hand, there are dozens of steps to run through, which is time-consuming, tedious and subject to human error (for example, restarting nodes in the wrong order could result in an outage). With MySQL Cluster Manager, the process is reduced to two commands – define where to find the new version of the software and then perform the rolling, in-service upgrade:

mcm> add package --basedir=/usr/local/mysql_7_1_8 7.1.8;
mcm> upgrade cluster --package=7.1.8 mycluster;

Behind the scenes, each node will be halted and then restarted with the new version – ensuring that there is no loss of service.

What’s New in MySQL Cluster Manager 1.1

If you’ve previously tried out version 1.0 then the main improvements you’ll see in 1.1 are:

  • More robust; 1.0 was the first release and a lot of bug fixes have gone in since then
  • Optimized restarts – more selective about which nodes need to be restarted when making a configuration change
  • Automated On-line Add-node

MySQL Cluster Manager 1.1 – Automated On-Line Add-Node


Since MySQL Cluster 7.0 it has been possible to add new nodes to a Cluster while it is still in service. There are a number of steps involved and, as with on-line upgrades, a mistake by the administrator could lead to an outage.

We’ll now look at how this is automated when using MySQL Cluster Manager; the first step is to add any new hosts (servers) to the site and indicate where those hosts can find the Cluster software:

mcm> add hosts --hosts=192.168.0.14,192.168.0.15 mysite;
mcm> add package --basedir=/usr/local/mysql_7_1_8 --hosts=192.168.0.14,192.168.0.15 7.1.8;

The new nodes can then be added to the Cluster and then started up:

mcm> add process --processhosts=mysqld@192.168.0.10,mysqld@192.168.0.11,ndbd@192.168.0.14,
ndbd@192.168.0.15,ndbd@192.168.0.14,ndbd@192.168.0.15 mycluster;
mcm> start process --added mycluster; 

The Cluster has now been extended but you need to perform a final step from any of the MySQL Servers to repartition the existing Cluster tables to use the new data nodes:

mysql> ALTER ONLINE TABLE <table-name> REORGANIZE PARTITION;
mysql> OPTIMIZE TABLE <table-name>;

Where can I find out more?

There is a lot of extra information to help you understand what can be achieved with MySQL Cluster Manager and how to use it:





MySQL Cluster Demo from Oracle OpenWorld

If you weren’t able to attend Oracle OpenWorld or didn’t get a chance to visit the MySQL demo booths then you can watch a recording of the demo here. The video gives a brief description of MySQL Cluster and then dives into a series of demos showing MySQL Cluster’s ability to cope with a number of events without losing service – including node failure, on-line upgrades and on-line horizontal scaling.

To view a higher quality version click here.





Glimpse of the future – massively improved JOIN performance for MySQL Cluster

MySQL Cluster Push-Down Joins

A great chance to find out about the “SPJ” project that’s under way to improve the performance of complex table JOINs. A free webinar has been scheduled for Thursday, November 04 at 09:00 Pacific time (16:00 UK; 17:00 Central European Time) – just register for the webinar at mysql.com. Even if you can’t attend, by registering you’ll get an email telling you where to watch the replay shortly after the webinar.

MySQL Cluster performance has always been extremely high and scalable when the workload is primarily primary key reads and writes, but complex JOINs (many tables in the JOIN and/or a large number of results from the first part of the query) have traditionally been much slower than with other storage engines. Work is under way to address this, and SPJ is the name we’ve been using for the project.

Traditionally, JOINs have been performed as Nested Loop JOINs in the MySQL Server which is fine when all of the data is held there (e.g. MyISAM) but when the data is held externally (in Cluster’s case, in the data nodes) it can result in a massive amount of messaging. SPJ works by pushing the processing of JOINs down into the data nodes where they can be performed much more efficiently as the data is local.
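
As an illustration, consider a query of the following shape (the customers and orders tables are hypothetical, invented purely for this example). With nested-loop execution in the MySQL Server, each step of the join triggers round trips to the data nodes; with SPJ, the linked reads are evaluated inside the data nodes themselves:

mysql> SELECT c.name, o.order_date
    -> FROM customers c
    -> JOIN orders o ON o.customer_id = c.id
    -> WHERE o.order_date > '2010-10-01';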

As well as finding out about the implementation, you’ll also learn:

  • What queries benefit – and how you might tweak your application to get the biggest gains
  • The kind of performance improvements you might expect to see
  • How to try the (pre-GA!) software for yourself.

Note that (as always) Oracle reserves the right to alter the timing and/or existence of new product releases.





MySQL Cluster 7.1.8 binaries released

The binary version for MySQL Cluster 7.1.8 has now been made available at http://www.mysql.com/downloads/cluster/

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.1.8 (compared to 7.1.5) can be found in the official MySQL Cluster documentation.





Free webinar: Delivering Scalability and High Availability with MySQL 5.5 Replication Enhancements

Simple Master -> Slave Replication


MySQL 5.5 (currently at Release Candidate status) introduces a lot of enhancements to replication, making it even more robust and easy to manage. At 9:00 am PST (5:00 pm UK, 6:00 pm CET) you have the chance to hear the details from the replication development manager (Dr Lars Thalmann) as well as get some of your replication questions answered.

As always, attendance is free but you must click here to register in advance.





Get MySQL Replication up and running in 5 minutes

MySQL Replication is incredibly simple to get up and running and this (short) post steps you through it.

Simple Master -> Slave Replication

MySQL allows you to build up complex replication hierarchies, such as multi-master, chains of read slaves, backup databases at a remote site or any combination of these. This post focuses on a simple single master to single slave topology – the more complex solutions are built from this basic building block.

This post also makes the assumption that the 2 MySQL Servers have been installed but that there is no existing data in the master that needs to be copied to the slave – it’s not complex to add that extra requirement and it will be covered in a future post.

Server “black” (192.168.0.31) is to be our master and “blue” (192.168.0.34) the slave.

Step 1: Edit the configuration files & start the MySQL Servers

The first step in setting up replication involves editing the “my.cnf” file on the servers that will serve as the master and slave. A default configuration is provided with the MySQL installation, but in case there is already a production MySQL database running on these servers, we provide local configuration files “master.cnf” and “slave.cnf” that will be used when starting up the MySQL servers.

At a minimum we’ll want to add two options to the [mysqld] section of the master.cnf file:

  • log-bin: the name of the binary log file; in this example we choose black-bin.log. The server cannot act as a replication master unless binary logging is enabled.
  • server-id: in this example we choose 1. The server-id must be a positive integer between 1 and 2^32-1.

master.cnf:

[mysqld]
server-id=1
log-bin=black-bin.log
datadir=/home/billy/mysql/master/data
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Note: for the greatest possible durability and consistency in a replication setup using InnoDB with transactions, you should also set innodb_flush_log_at_trx_commit=1 and sync_binlog=1 (as shown above).

Next, you’ll need to add the server-id option to the [mysqld] section of the slave’s slave.cnf file. The slave’s server-id, like the master’s, must be a positive integer between 1 and 2^32-1, and it must be different from the ID of the master. If you are setting up multiple slaves, each one must have a unique server-id value that differs from that of the master and from each of the other slaves.

slave.cnf:

[mysqld]
server-id=2
relay-log-index=slave-relay-bin.index
relay-log=slave-relay-bin
datadir=/home/billy/mysql/slave/data

Now, start the MySQL servers using the service manager or directly from the command line if not being run as a service:

[billy@black ~]$ mysqld --defaults-file=/home/billy/mysql/master/master.cnf &
[billy@blue ~]$ mysqld --defaults-file=/home/billy/mysql/slave/slave.cnf &

Step 2: Create Replication User

Create an account on the master server that the slave server can use to connect. This account must be given the REPLICATION SLAVE privilege:

[billy@black ~]$ mysql -u root --prompt='master> '
master> CREATE USER repl_user@192.168.0.34;
master> GRANT REPLICATION SLAVE ON *.* TO repl_user@192.168.0.34 IDENTIFIED BY 'billy';

Step 3: Initialize Replication

We are now ready to initialize replication on the slave; issue a CHANGE MASTER command:

slave> CHANGE MASTER TO MASTER_HOST='192.168.0.31',
 -> MASTER_USER='repl_user',
 -> MASTER_PASSWORD='billy',
 -> MASTER_LOG_FILE='',
 -> MASTER_LOG_POS=4;

Where:

  • MASTER_HOST: the IP or hostname of the master server, in this example black or 192.168.0.31
  • MASTER_USER: this is the user we granted the REPLICATION SLAVE privilege to in Step 2, in this example, “repl_user”
  • MASTER_PASSWORD: this is the password we assigned to “repl_user” in Step 2
  • MASTER_LOG_FILE: is an empty string (wouldn’t be empty if there were existing writes to be picked up from the master)
  • MASTER_LOG_POS: is 4 (would likely be different if there were existing writes to be picked up from the master)

Finally, start replication on the slave:

slave> start slave;
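
Before moving on, it’s worth checking that both of the slave’s replication threads are running:

slave> show slave status\G

In the output, Slave_IO_Running and Slave_SQL_Running should both show “Yes”.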

Step 4: Basic Checks

Now we are ready to perform a basic check to ensure that replication is indeed working. In this example we insert some rows into the “simples” table on the master server and then verify that those new rows materialize on the slave server:

master> create database clusterdb;
master> create table clusterdb.simples (id int not null primary key) engine=innodb;
master> insert into clusterdb.simples values (999),(1),(2),(3);
slave> select * from clusterdb.simples;
 +-----+
 | id  |
 +-----+
 |   1 |
 |   2 |
 |   3 |
 | 999 |
 +-----+




MySQL Cluster session from Oracle OpenWorld – replay is available.

As part of “MySQL Sunday” at this year’s Oracle Open World, Mat Keep and I presented on the MySQL Cluster architecture and the latest features. If you weren’t able to attend then you can watch/listen to the replay here.





5 Steps to get started then scale with MySQL Cluster – webinar replay available

Last week’s webinar on getting started with MySQL Cluster and then scaling it out is now available to download from http://www.mysql.com/news-and-events/on-demand-webinars/display-od-566.html

MySQL Cluster - getting started webinar

It was great that so many people were able to attend live but if you missed it then the replay is a good way to catch up.

This session demonstrates how to start an evaluation of the MySQL Cluster database in 5 easy steps, and then how to expand your deployment for web & telecoms-scale services.
The getting started section covers:
  1. Getting the software
  2. Installing it
  3. Configuring it
  4. Running it
  5. Testing it
Scaling for HA and the web describes how to:
  1. Review the requirements for a HA configuration
  2. Install the software on more servers
  3. Update & extend the configuration from a single host to 4
  4. Roll out the changes
  5. On-line scaling to add further nodes

After a quick introduction, the webinar is very much a hands-on demonstration of these steps; if you want to try it for yourself then I’d suggest also downloading the MySQL Cluster quick start guide for your platform, where you’ll find a Flash video showing the creation of a MySQL Cluster in real time.





MySQL Cluster sessions at Oracle OpenWorld next week

As part of “MySQL Sunday” at this year’s Oracle Open World, Mat Keep and I will be presenting on the latest MySQL Cluster features. We’ll be presenting at 15:30 (Pacific Time) on 19th September (the event starts with a keynote at 12:30).

If you’re attending Oracle Open World then please indicate that you’d like to attend the MySQL Sunday when you register. If you aren’t planning to go to Oracle Open World but will be in the San Francisco area then buying a Discover pass ($75 in advance or $125 on the day) will get you into the MySQL Sunday sessions. Register here.

For details on the presentations and speakers, check here.

In addition, if you’re at the conference on Monday, Tuesday or Wednesday, please pop round and visit us at our stand at Moscone West W-071, where we have a demonstration of MySQL Cluster High Availability features, including on-line upgrades and scaling, together with automated failover and recovery.





MySQL Cluster documentation on iPhone/iPad

MySQL Cluster in iBooks

MySQL documentation has now been made available in ePub format (fetch the MySQL Cluster versions from http://dev.mysql.com/doc/index-cluster.html). This format is intended for e-book readers.

I’ve tried it in 2 iPhone applications: Apple’s iBooks and the iPhone version of Stanza. The documents render well in both applications but, as with most reference books, you get the best results if you reduce the font size so that you get more on the screen at once.

With both applications, you can add the books through iTunes, but Stanza also has the advantage that if you browse to the document in Opera on the iPhone then it gives you the option of opening it in Stanza – cutting out the need to use iTunes. I also prefer the rendering in Stanza. The only problem I’ve come across with Stanza is that you have to add the cover images yourself (if you care about them) – not a huge problem.

Page in iBooks

MySQL Cluster docs in Stanza library

Page in Stanza

Browse MySQL Cluster ePub docs on iPhone