Prashant Tekriwal MySQL Cluster 7.2.23 has been released
Jan 19, 2016; 23:42
MySQL Cluster 7.2.23 has been released
Dear MySQL Users,
MySQL Cluster is the distributed, shared-nothing variant of MySQL. This storage engine provides:
- In-memory storage with real-time performance (and optional checkpointing to disk)
- Transparent auto-sharding
- Read and write scalability
- Active-active/multi-master geographic replication
- 99.999% high availability with no single point of failure and on-line maintenance
- NoSQL and SQL APIs (including C++, Java, HTTP, and Memcached)
MySQL Cluster 7.2.23 has been released and can be downloaded from http://dev.mysql.com/downloads/cluster/,
where you will also find Quick Start guides to help you get your first MySQL Cluster database up and running.
Changes in MySQL Cluster NDB 7.2.23 (5.5.47-ndb-7.2.23) (2016-01-19)
MySQL Cluster NDB 7.2.23 is a new release of MySQL Cluster, incorporating new features in the NDB storage engine, and fixing recently discovered bugs in previous MySQL Cluster NDB 7.2 development releases.
Obtaining MySQL Cluster NDB 7.2. MySQL Cluster NDB 7.2 source code and binaries can be obtained from http://dev.mysql.com/downloads/cluster/.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.5 through MySQL 5.5.47 (see Changes in MySQL 5.5.47 (2015-12-07) (http://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-47.html)).
* In debug builds, a WAIT_EVENT while polling caused excessive logging to stdout. (Bug #22203672)
* When executing a schema operation such as CREATE TABLE on a MySQL Cluster with multiple SQL nodes, it was possible for the SQL node on which the operation was performed to time out while waiting for acknowledgement from the others. This could occur when different SQL nodes had different settings for --ndb-log-updated-only, --ndb-log-update-as-write, or other mysqld options affecting binary logging by NDB.

This happened because, in order to distribute schema changes among them, all SQL nodes subscribe to changes in the ndb_schema system table, and all SQL nodes are made aware of each other's subscriptions by subscribing to TE_SUBSCRIBE and TE_UNSUBSCRIBE events. The names of the events to subscribe to are constructed from the table names by adding a REPL$ or REPLF$ prefix, with REPLF$ used when full binary logging is specified for the table. The issue arose because different values for the options mentioned could lead to different events being subscribed to by different SQL nodes; not all SQL nodes were then necessarily aware of one another, so the code that waits for schema distribution to complete did not work as designed.

To fix this issue, MySQL Cluster now treats the ndb_schema table as a special case and enforces full binary logging for this table at all times, independent of any settings for mysqld binary logging options. (Bug #22174287, Bug #79188)
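The naming scheme described above can be sketched as follows. This is an illustrative model, not NDB source code; the "db/table" separator and the function name are assumptions, while the REPL$/REPLF$ prefixes and the always-full-binlog rule for ndb_schema come from the entry above.

```python
def ndb_event_name(db: str, table: str, full_binlog: bool) -> str:
    """Build the event name a SQL node subscribes to for a given table.

    REPLF$ is used when full binary logging is in effect for the table,
    REPL$ otherwise. (The db/table separator here is an assumption.)
    """
    # The fix: ndb_schema is a special case with full binary logging
    # enforced at all times, so every SQL node subscribes to the same name
    # regardless of its --ndb-log-* option settings.
    if db == "mysql" and table == "ndb_schema":
        full_binlog = True
    prefix = "REPLF$" if full_binlog else "REPL$"
    return f"{prefix}{db}/{table}"
```

With this rule, two SQL nodes with different binary logging options still derive the identical subscription name for ndb_schema, so each sees the other's TE_SUBSCRIBE event and schema distribution can complete.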
* When ndb_mgm STOP -f was used to force a node shutdown even when it triggered a complete shutdown of the cluster, it was possible to lose data. This occurred when enough nodes were shut down to trigger a cluster shutdown, and the timing was such that SUMA handovers had been made to nodes that were already in the process of shutting down. (Bug #17772138)
* The internal NdbEventBuffer::set_total_buckets() method calculated the number of remaining buckets incorrectly. This caused any incomplete epoch to be prematurely completed when the SUB_START_CONF signal arrived out of order. Any events belonging to this epoch arriving later were then ignored, and so effectively lost, which resulted in schema changes not being distributed correctly among SQL nodes. (Bug #79635, Bug #22363510)
* Schema events were appended to the binary log out of order relative to non-schema events. This was caused by the fact that the binlog injector did not properly handle the case where schema events and non-schema events came from different epochs. This fix modifies the handling of the two event streams (schema and non-schema) such that events are now always handled one epoch at a time, starting with events from the oldest available epoch, regardless of the stream in which they occur. (Bug #79077, Bug #22135584, Bug #20456664)
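The epoch-at-a-time ordering described in this entry can be sketched with a simple two-stream merge. This is a minimal model of the idea, not NDB code; the tuple layout and the schema-before-data tie-break within an epoch are assumptions made for determinism.

```python
import heapq

def merge_by_epoch(schema_events, data_events):
    """Merge two per-epoch-sorted event streams into epoch order.

    Each stream yields (epoch, event) pairs already sorted by epoch.
    Events are emitted oldest epoch first, regardless of which stream
    they arrived on; within an epoch, schema events come first (an
    arbitrary but deterministic tie-break).
    """
    merged = heapq.merge(
        ((epoch, 0, ev) for epoch, ev in schema_events),  # 0 = schema stream
        ((epoch, 1, ev) for epoch, ev in data_events),    # 1 = non-schema stream
    )
    for epoch, _stream, ev in merged:
        yield epoch, ev
```

For example, merging schema events from epochs 1 and 3 with data events from epochs 2 and 3 emits epoch 1, then 2, then both epoch-3 events, which is the ordering the fix enforces in the binary log.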
* NDB failed during a node restart because the status of the current local checkpoint, once set, was expected to be active, even though the checkpoint can have other states under such conditions. (Bug #78780, Bug #21973758)
* Cluster Replication: While the binary log injector thread was handling failure events, it was possible for all NDB tables to be left indefinitely in read-only mode. This was due to a race condition between the binlog injector thread and the utility thread that handles events on the ndb_schema table, and to the fact that, when handling failure events, the binlog injector thread places all NDB tables in read-only mode until all such events are handled and the thread restarts itself.

When the binlog injector thread receives a group of one or more failure events, it drops all other existing event operations and expects no further events from the utility thread until it has handled all of the failure events and restarted itself. However, it was possible for the utility thread to continue attempting binary log setup while the injector thread was handling failures, and thus to create the schema distribution tables as well as event subscriptions on those tables. If these tables and event subscriptions were created during this time, the injector thread's expectation that there were no further event operations was never met; the injector thread therefore never restarted, and NDB tables remained in read-only mode as described previously.

To fix this problem, the Ndb object that handles schema events is now definitely dropped once the ndb_schema table drop event is handled, so that the utility thread cannot create any new events until after the injector thread has restarted, at which time a new Ndb object for handling schema events is created. (Bug #17674771, Bug #19537961, Bug #22204186, Bug #22361695)
* Cluster API: The binlog injector did not work correctly with TE_INCONSISTENT event type handling by Ndb::nextEvent(). (Bug #22135541) References: See also Bug #20646496.
* Cluster API: Ndb::pollEvents() and pollEvents2() were slow to receive events, being dependent on other client threads or blocks to perform polling of transporters on their behalf. This fix allows a client thread to perform its own transporter polling when it has to wait in either of these methods. Introduction of transporter polling also revealed a problem with missing mutex protection in the ndbcluster_binlog handler, which has been added as part of this fix. (Bug #20957068, Bug #22224571, Bug #79311)
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql