Tag Archive for MySQL Cluster

MySQL Cluster 7.2.9 Released

The binary version for MySQL Cluster 7.2.9 has now been made available at http://www.mysql.com/downloads/cluster/ (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.2.9 (compared to 7.2.8) is available from the 7.2.9 Change log.





MySQL Cluster Auto-Installer – labs release

Deploying a well-configured cluster has just got a lot easier! Oracle have released a new auto-installer/configurator for MySQL Cluster that makes the process extremely simple while making sure that the cluster is well configured for your application. The installer is part of MySQL Cluster 7.3 (and so is not yet GA) but it can also be used with MySQL Cluster 7.2. A single command launches the web-based wizard, which then steps you through configuring the cluster; to keep things even simpler, it will automatically detect the resources on your target machines and use those results, together with the type of workload you specify, to determine values for the key configuration parameters.

Tutorial Video

Before going through the detailed steps, here’s a demonstration of the auto-installer in action…

Downloading and running the wizard

The software can be downloaded from MySQL Labs; just select the MySQL-Cluster-Auto-Installer build, unzip the file and then run it. To run on Windows, just double-click setup.bat – note that if you installed from the MSI and didn’t change the install directory then this will be located somewhere like C:\Program Files (x86)\MySQL\MySQL Cluster 7.2. On Linux, just run ndb_setup.
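
If it helps, the Linux steps look roughly like this (the archive and directory names below are assumptions; use whatever the MySQL Labs download actually gives you):

# unpack the labs build (archive name is an assumption) and launch the wizard
unzip MySQL-Cluster-Auto-Installer.zip
cd MySQL-Cluster-Auto-Installer    # directory name is an assumption
./ndb_setup                        # may sit in a bin/ sub-directory depending on the package layout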

Creating your cluster

MySQL Cluster auto-installer landing page

When you run the auto-installer it starts a small web server and then (if possible) automatically connects your web browser to it – presenting you with the first page of the wizard. If this isn’t possible (for example the server isn’t running a desktop environment), then you can connect to it remotely using the URL http://your-server-name-goes-here:8081/index.html. It may take a number of seconds to load and so please be patient. Note that the machine where you run this doesn’t need to be a host that will be included in the cluster.

From the landing page, just click on the “Create new MySQL Cluster” icon to get started.

On the next page you need to specify the list of servers that will form part of the cluster. The machine where the installer is being run needs to have ssh access to all of the cluster hosts (further, access to those machines must already have been approved from this one – if you’re uncertain, just manually connect to each one using an ssh client first).

By default, the wizard assumes that ssh keys have been set up (so that a password isn’t needed) – if that isn’t the case, just un-check the checkbox and provide your username and password.
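
If you do want to set up keys, a minimal sketch looks like this (the user and host names are placeholders):

# generate a key pair (accept the defaults) and copy the public key to each cluster host
ssh-keygen -t rsa
ssh-copy-id billy@host1
ssh-copy-id billy@host2
# confirm that password-less login now works (this also approves the host, as described above)
ssh billy@host1 exit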

On this page, you also get to specify what “type” of cluster you want; if you’re experimenting for the first time then it’s probably safest to stick with “Simple testing”, but for a production system you’d want to specify the kind of application and whether it will be a write-intensive application.

Auto-discovery of target host resources

On the next page, you will see the wizard attempt to auto-detect the resources on your target machines. If this fails then you can enter the data manually.

You can also overwrite the resource values (for example, if you don’t want the cluster to use up a big share of the memory on the target systems then just overwrite the amount of memory).





Overwriting the default directories on the target systems

It’s also on this page that you can specify where the MySQL Cluster software is stored on each of the hosts (if the defaults aren’t correct) – this should be the path to where you unzipped the MySQL Cluster tar-ball/zip file – as well as where the data (and configuration files) should be stored. You can just overwrite the values or select multiple rows and hit the “edit” button.







Defining processes

The following page presents you with a default set of nodes (processes) and how they’ll be distributed across all of the target hosts – if you’re happy with the proposal then just advance to the next page. What you can change:

  • Add extra nodes
  • Move nodes from one host to another (just drag and drop)
  • Delete nodes
  • Change a node from one type to another


Add process

The diagram to the right shows an example of adding an extra MySQL Server.












Optionally override recommended configuration parameters

On the next screen you’re presented with some of the key configuration parameters that have been set (behind the scenes, the wizard sets many more) that you might want to override; if you’re happy then just progress to the next screen. If you do want to make any changes then make them here before continuing. If you’d previously selected anything other than “simple” for the kind of cluster to create then you can check the “Show advanced configuration options” box in order to view/modify more parameters.




Deployment in progress

On the final screen you can review the details of the final recommended configuration and then just hit “Deploy and start cluster” and it will do just that. Depending on the complexity of the cluster, it can take a while to deploy and start everything but you’re shown a progress bar together with an explanation of what stage the process is at.

If for some reason you prefer or need to start the processes manually, this page also shows you the commands that you’d need to run (as well as the configuration files if you need to create them manually).
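
As a rough sketch (the exact paths, host names and options will be whatever the wizard generated for your cluster), a manual start follows the usual MySQL Cluster pattern:

# 1. start the management node(s) using the generated config.ini
ndb_mgmd -f /path/to/config.ini --configdir=/path/to/configdir --initial
# 2. start each data node, pointing it at the management node
ndbd -c mgmt-host:1186
# 3. start each MySQL Server using the generated my.cnf
mysqld --defaults-file=/path/to/my.cnf &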

Once the wizard declares the process complete, you can check for yourself before going ahead and starting your testing:

billy@black:~ $ ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @192.168.1.106  (mysql-5.5.25 ndb-7.2.8, Nodegroup: 0, Master)
id=2    @192.168.1.107  (mysql-5.5.25 ndb-7.2.8, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @192.168.1.104  (mysql-5.5.25 ndb-7.2.8)
id=52   @192.168.1.105  (mysql-5.5.25 ndb-7.2.8)

[mysqld(API)]   9 node(s)
id=50 (not connected, accepting connect from 192.168.1.104)
id=51 (not connected, accepting connect from 192.168.1.104)
id=53 (not connected, accepting connect from 192.168.1.105)
id=54 (not connected, accepting connect from 192.168.1.105)
id=55   @192.168.1.104  (mysql-5.5.25 ndb-7.2.8)
id=56   @192.168.1.104  (mysql-5.5.25 ndb-7.2.8)
id=57   @192.168.1.105  (mysql-5.5.25 ndb-7.2.8)
id=58   @192.168.1.105  (mysql-5.5.25 ndb-7.2.8)
id=59   @192.168.1.106  (mysql-5.5.25 ndb-7.2.8)

As always, it would be great to hear some feedback – especially if you have ideas for improving it or if you hit any problems.





MySQL Cluster 7.2.8 Released

The binary version for MySQL Cluster 7.2.8 has now been made available at http://www.mysql.com/downloads/cluster/ (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.2.8 (compared to 7.2.7) is available from the 7.2.8 Change log.





MySQL Cluster 7.1.23 has been released

The binary & source versions for MySQL Cluster 7.1.23 have now been made available at https://www.mysql.com/downloads/cluster/7.1.html#downloads (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.1.23 (compared to 7.1.22) is available from the 7.1.23 Change log.





MySQL Cluster 7.2.7 released

The binary version for MySQL Cluster 7.2.7 has now been made available at http://www.mysql.com/downloads/cluster/ (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.2.7 (compared to 7.2.6) is available from the 7.2.7 Change log.





Optimizing Performance of the MySQL Cluster Database – White Paper update

Engineering threads within a Data Node

A new version of the white paper “Guide to Optimizing Performance of the MySQL Cluster Database” has been released; download it here.

This paper steps you through:

  • Identifying if your application is a good fit for MySQL Cluster
  • Measuring performance and identifying problem performance areas to address
  • Optimizing performance:
    • Access patterns
    • Using Adaptive Query Localization for complex Joins
    • Distribution aware applications
    • Batching operations
    • Schema optimizations
    • Query optimization
    • Parameter tuning
    • Connection pools
    • Multi-Threaded Data Nodes
    • Alternative APIs
    • Hardware enhancements
    • Miscellaneous
  • Scaling MySQL Cluster by Adding Nodes

As well as the regular updates that are needed from time to time, this version covers the extra optimization opportunities that are available with MySQL Cluster 7.2, such as faster joins and engineering the threads within a multi-threaded data node.
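
As a flavour of two of those optimizations (multi-threaded data nodes and connection pooling), the sketch below shows the general shape rather than a recommended setup; the host name, file path and pool size are assumptions, and the thread-related parameters such as MaxNoOfExecutionThreads or ThreadConfig live in the [ndbd default] section of config.ini:

# run the multi-threaded data node binary in place of ndbd; how its threads are
# engineered is controlled by MaxNoOfExecutionThreads/ThreadConfig in config.ini
ndbmtd -c mgmt-host:1186
# give each MySQL Server several connections into the cluster instead of one
# (each connection needs its own [mysqld]/[api] slot in config.ini)
mysqld --defaults-file=/path/to/my.cnf --ndb-cluster-connection-pool=4 &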

As a reminder, I’ll be covering much of this material in an upcoming webinar.

MySQL Connect





MySQL Connect & Oracle OpenWorld 2012

Oracle OpenWorld 2012

I’m lucky enough to be involved in a number of sessions across Oracle OpenWorld as well as the (new for this year) MySQL Connect event that precedes it. MySQL Connect runs on Saturday 29th and Sunday 30th September, and Oracle OpenWorld then runs on through Thursday October 4th.

The sessions I’ll be involved with are:

  • MySQL Cluster – From Zero to One Billion in Five Easy Steps: Of course it takes more than five steps to scale to more than one billion queries per minute, but the new configuration features of MySQL Cluster make it much simpler to provision and deploy MySQL Cluster on-premises or in the cloud, automatically optimized for your target use case. This BoF session is designed to give you a demo of the new features, showing how you can use them to quickly build your own proof of concept and then take that into production. The MySQL Cluster Engineering team will be on hand to answer your questions and also listen to the requirements you have for current or future MySQL Cluster projects. This is a Birds-of-a-Feather session and is part of the MySQL Connect conference.
  • Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster: Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster from Oracle and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js. This is a conference session and is part of the MySQL Connect conference.
  • Introduction to MySQL High-Availability Solutions: Databases are the center of today’s Web and enterprise applications, storing and protecting an organization’s most valuable assets and supporting business-critical applications. Just minutes of downtime can result in dissatisfied customers and significant loss of revenue. Ensuring database high availability is therefore a top priority for any organization. Attend this session to learn more about delivering high availability for MySQL-based services. It covers

    • The cause, effect, and impact of downtime
    • A methodology for mapping applications to the right high-availability solution
    • An overview of MySQL high availability, from replication to virtualization, clustering, and multisite redundancy
    • Operational best practices to ensure business continuity

    This is a conference session and is part of Oracle OpenWorld.

Early-bird pricing is available for MySQL Connect until 13th July – the same date applies for Oracle OpenWorld.





MySQL Cluster running on Raspberry Pi


I start a long weekend tonight and it’s the kids’ last day of school before their school holidays, so last night felt like the right time to play a bit. This week I received my Raspberry Pi – if you haven’t heard of it then you should take a look at the Raspberry Pi FAQ – basically it’s a ridiculously cheap ($25, or $35 if you want the top-of-the-range model) ARM-based PC that’s the size of a credit card.

I knew I had to have one to play with, but what to do with it? Why not start by porting MySQL Cluster onto it? We always claim that Cluster runs on commodity hardware – surely this would be the ultimate test of that claim.

I chose the customised version of Debian – you have to copy it onto the SD memory card that acts as the storage for the Pi. Once up and running on the Pi, the first step was to increase the size of the main storage partition – it starts at about 2 Gbytes – using gparted. I then had to compile MySQL Cluster – ARM isn’t a supported platform and so there are no pre-built binaries. I needed to install a couple of packages before I could get very far:

sudo apt-get update
sudo apt-get install cmake
sudo apt-get install libncurses5-dev

Compilation initially got about 80% through before failing and so if you try this yourself then save yourself some time by applying the patch from this bug report before starting. The build scripts wouldn’t work but I was able to just run make…

make
sudo make install

As I knew that memory was tight, I tried to come up with a config.ini file that cut down on how much memory would be needed (note that 192.168.1.122 is the Raspberry Pi while 192.168.1.118 is an 8 Gbyte Linux x86-64 PC – it doesn’t seem a very fair match!):

[ndb_mgmd]
hostname=192.168.1.122
NodeId=1

[ndbd default]
noofreplicas=2
DataMemory=2M
IndexMemory=1M
DiskPageBufferMemory=4M
StringMemory=5
MaxNoOfConcurrentOperations=1K
MaxNoOfConcurrentTransactions=500
SharedGlobalMemory=500K
LongMessageBuffer=512K
MaxParallelScansPerFragment=16
MaxNoOfAttributes=100
MaxNoOfTables=20
MaxNoOfOrderedIndexes=20

[ndbd]
hostname=192.168.1.122
datadir=/home/pi/mysql/ndb_data
NodeId=3

[ndbd]
hostname=192.168.1.118
datadir=/home/billy/my_cluster/ndbd_data
NodeId=4

[mysqld]
NodeId=50

[mysqld]
NodeId=51

[mysqld]
NodeId=52

[mysqld]
NodeId=53

[mysqld]
NodeId=54

Running the management node worked pretty easily but then I had problems starting the data nodes – checking how much memory I had available gave me a hint as to why!

pi@raspberrypi:~$ free -m
             total       used       free     shared    buffers     cached
Mem:           186         29        157          0          1         11
-/+ buffers/cache:         16        169
Swap:            0          0          0

OK – so 157 Mbytes of memory available and no swap space; not ideal, and so the next step was to use gparted again to create swap partitions on the SD card as well as a massive 1 Gbyte on my MySQL-branded USB stick (need to persuade marketing to be a bit more generous with those). A quick edit of /etc/fstab and a restart and things were looking in better shape:

pi@raspberrypi:~$ free -m
             total       used       free     shared    buffers     cached
Mem:           186         29        157          0          1         11
-/+ buffers/cache:         16        169
Swap:         1981          0       1981
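
For reference, the swap partitions were enabled with something along these lines (the device names are assumptions; substitute whichever partitions you created with gparted):

# format and enable the new swap partitions (device names are assumptions)
sudo mkswap /dev/mmcblk0p3
sudo mkswap /dev/sda1
sudo swapon /dev/mmcblk0p3 /dev/sda1
# and make them permanent with entries like these in /etc/fstab
# /dev/mmcblk0p3  none  swap  sw  0  0
# /dev/sda1       none  swap  sw  0  0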

Next, it was time to start up the management node and one data node on the Pi, as well as a second data node on the Linux server “ws2” (I want High Availability after all – OK, so running the management node on the same host as a data node is a single point of failure)…

pi@raspberrypi:~/mysql$ ndb_mgmd -f conf/config.ini --configdir=/home/pi/mysql/conf/ --initial
pi@raspberrypi:~/mysql$ ndbd
billy@ws2:~$ ndbd -c 192.168.1.122:1186

I could then confirm that everything was up and running:

pi@raspberrypi:~$ ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @192.168.1.122  (mysql-5.5.22 ndb-7.2.6, Nodegroup: 0, Master)
id=4    @192.168.1.118  (mysql-5.5.22 ndb-7.2.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.122  (mysql-5.5.22 ndb-7.2.6)

[mysqld(API)]   5 node(s)
id=50 (not connected, accepting connect from any host)
id=51 (not connected, accepting connect from any host)
id=52 (not connected, accepting connect from any host)
id=53 (not connected, accepting connect from any host)
id=54 (not connected, accepting connect from any host)

Cool!

The next step is to run a MySQL Server so that I can actually test the Cluster – running it on the Pi caused problems (157 Mbytes of RAM doesn’t stretch as far as it used to), so it goes on ws2:

billy@ws2:~/my_cluster$ cat conf/my.cnf
[mysqld]
ndbcluster
datadir=/home/billy/my_cluster/mysqld_data
ndb-connectstring=192.168.1.122:1186

billy@ws2:~/my_cluster$ mysqld --defaults-file=conf/my.cnf&

Check that it really has connected to the Cluster:

pi@raspberrypi:~/mysql$ ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @192.168.1.122  (mysql-5.5.22 ndb-7.2.6, Nodegroup: 0, Master)
id=4    @192.168.1.118  (mysql-5.5.22 ndb-7.2.6, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.122  (mysql-5.5.22 ndb-7.2.6)

[mysqld(API)]   5 node(s)
id=50   @192.168.1.118  (mysql-5.5.22 ndb-7.2.6)
id=51 (not connected, accepting connect from any host)
id=52 (not connected, accepting connect from any host)
id=53 (not connected, accepting connect from any host)
id=54 (not connected, accepting connect from any host)

Finally, I just need to check that I can read and write data…

billy@ws2:~/my_cluster$ mysql -h 127.0.0.1 -P3306 -u root
mysql> CREATE DATABASE clusterdb;USE clusterdb;
Query OK, 1 row affected (0.24 sec)

Database changed
mysql> CREATE TABLE simples (id INT NOT NULL PRIMARY KEY) engine=ndb;
120601 13:30:20 [Note] NDB Binlog: CREATE TABLE Event: REPL$clusterdb/simples
Query OK, 0 rows affected (10.13 sec)

mysql> REPLACE INTO simples VALUES (1),(2),(3),(4);
Query OK, 4 rows affected (0.04 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM simples;
+----+
| id |
+----+
|  1 |
|  2 |
|  4 |
|  3 |
+----+
4 rows in set (0.09 sec)

OK – so is there any real application for this? Probably not, other than providing a cheap development environment – imagine scaling out to 48 data nodes: that would cost $1,680 (48 × $35, plus the cost of some SD cards)! A more practical use might be for management nodes – we know that they need very few resources. As a reminder – this is not a supported platform!





MySQL Cluster Manager 1.1.6 released

MySQL Cluster Manager 1.1.6 is now available to download from My Oracle Support.

Details on the changes will be added to the MySQL Cluster Manager documentation. Please give it a try and let me know what you think.

Note that if you’re not a commercial user then you can still download MySQL Cluster Manager 1.1.5 from the Oracle Software Delivery Cloud and try it out for free. Documentation is available here.





MySQL Cluster 7.1.22 is available for download

The binary version for MySQL Cluster 7.1.22 has now been made available at https://www.mysql.com/downloads/cluster/7.1.html#downloads (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.1.22 (compared to 7.1.21) is available from the 7.1.22 Change log.