MySQL Cluster 7.4.19 has been released
Dear MySQL Users,
MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:
- In-Memory storage - Real-time performance
- Transparent Auto-Sharding - Read & write scalability
- Active-Active/Multi-Master geographic replication
- 99.999% High Availability with no single point of failure
and on-line maintenance
- NoSQL and SQL APIs (including C++, Java, HTTP, and Memcached)
MySQL Cluster 7.4 makes significant advances in performance,
operational efficiency (such as enhanced reporting and faster restarts
and upgrades), and conflict detection and resolution for active-active
replication between MySQL Clusters.
MySQL Cluster 7.4.19 has been released and can be downloaded from
where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.
The release notes are available from
MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.
More details can be found at
Changes in MySQL NDB Cluster 7.4.19 (5.6.39-ndb-7.4.19)
(2018-01-23, General Availability)
MySQL NDB Cluster 7.4.19 is a new release of MySQL NDB
Cluster 7.4, based on MySQL Server 5.6 and including features
in version 7.4 of the NDB storage engine, as well as fixing
recently discovered bugs in previous NDB Cluster releases.
NDB 7.4.19 replaces the NDB 7.4.18 release, and is the
successor to NDB 7.4.17. Users of NDB 7.4.17 and previous NDB
7.4 releases should upgrade directly to MySQL NDB Cluster
7.4.19 or newer.
Obtaining MySQL NDB Cluster 7.4. MySQL NDB Cluster 7.4
source code and binaries can be obtained from
For an overview of changes made in MySQL NDB Cluster 7.4, see
What is New in NDB Cluster 7.4
This release also incorporates all bug fixes and changes made
in previous NDB Cluster releases (including the NDB 7.4.18
release which this release replaces), as well as all bug
fixes and feature changes which were added in mainline MySQL
5.6 through MySQL 5.6.39 (see Changes in MySQL 5.6.39
(2018-01-15, General Availability)).
* NDB Replication: On an SQL node not being used for a
replication channel, with sql_log_bin=0, it was possible,
after creating and populating an NDB table, for a table
map event to be written to the binary log for the created
table with no corresponding row events. This led to
problems when the log was later used by a slave cluster
replicating from the mysqld where the table was created.
This is fixed by adding support for maintaining a cumulative
any_value bitmap for global checkpoint event operations,
representing the bits set consistently for all rows of a
given table in a given epoch, and by adding a check to
determine whether all operations (rows) for a given table
are marked as NOLOGGING, so that such a table is not
added to the Table_map.
As part of this fix, the NDB API adds a new
getNextEventOpInEpoch3() method, which provides
information about any AnyValue received and makes it
possible to retrieve the cumulative any_value bitmap.
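The idea behind the fix can be illustrated with a small sketch (the bit position, names, and values below are hypothetical; the real any_value layout is defined by NDB's binlog injector): AND-ing each row's any_value into an accumulator leaves only the bits set consistently for every row of the table in the epoch, so a NOLOGGING bit that survives the AND means all rows were marked NOLOGGING and the table can be left out of the Table_map.

```python
# Sketch of a cumulative any_value bitmap for one epoch.
# NOLOGGING_BIT is a hypothetical bit position for illustration only;
# the real bit layout is defined by NDB's binlog injector.
NOLOGGING_BIT = 1 << 31

def cumulative_any_value(row_any_values):
    """AND together the any_value words of every row operation for a
    table in an epoch; only bits set on ALL rows survive."""
    acc = 0xFFFFFFFF  # start with all bits set
    for v in row_any_values:
        acc &= v
    return acc

def all_rows_nologging(row_any_values):
    """True when every row operation carries the NOLOGGING bit, in
    which case the table can be skipped in the epoch's Table_map."""
    return bool(cumulative_any_value(row_any_values) & NOLOGGING_BIT)

# All rows flagged NOLOGGING: the table map entry can be suppressed.
print(all_rows_nologging([NOLOGGING_BIT, NOLOGGING_BIT | 5]))  # True
# One row logged normally: the table must stay in the Table_map.
print(all_rows_nologging([NOLOGGING_BIT, 5]))                  # False
```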
* A query against the INFORMATION_SCHEMA.FILES table
returned no results when it included an ORDER BY clause.
* During a restart, DBLQH loads redo log part metadata for
each redo log part it manages, from one or more redo log
files. Since each file has a limited capacity for
metadata, the number of files which must be consulted
depends on the size of the redo log part. These files are
opened, read, and closed sequentially, but the closing of
one file occurs concurrently with the opening of the next.
In cases where closing of the file was slow, it was
possible for more than 4 files per redo log part to be
open concurrently; since these files were opened using
the OM_WRITE_BUFFER option, more than 4 chunks of write
buffer were allocated per part in such cases. The write
buffer pool is not unlimited; if all redo log parts were
in a similar state, the pool was exhausted, causing the
data node to shut down.
This issue is resolved by avoiding the use of
OM_WRITE_BUFFER during metadata reload, so that any
transient opening of more than 4 redo log files per log
file part no longer leads to failure of the data node.
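The exhaustion scenario comes down to simple arithmetic, sketched below with illustrative numbers (these are not NDB's actual configuration values): when slow closes let each redo log part hold more than its expected maximum of open buffered files, aggregate demand can exceed the shared write-buffer pool.

```python
# Back-of-envelope sketch of write-buffer pool exhaustion.
# All constants are illustrative, not NDB's actual configuration.
REDO_LOG_PARTS  = 4   # redo log parts managed by DBLQH
CHUNKS_PER_FILE = 1   # write-buffer chunks per file opened with OM_WRITE_BUFFER
POOL_CHUNKS     = 16  # total chunks in the shared write-buffer pool

def pool_exhausted(open_files_per_part):
    """True when every part holding this many buffered files open at
    once would demand more chunks than the pool contains."""
    demand = REDO_LOG_PARTS * open_files_per_part * CHUNKS_PER_FILE
    return demand > POOL_CHUNKS

# Expected steady state: at most 4 open files per part fits the pool.
print(pool_exhausted(4))  # False
# A slow close overlapping the next open pushes a part to 5 files;
# if all parts do this, demand exceeds the pool and the node fails.
print(pool_exhausted(5))  # True
```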
* When the duplicate weedout algorithm was used for
evaluating a semi-join, the result had missing rows.
(Bug #88117, Bug #26984919)
References: See also: Bug #87992, Bug #26926666.
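For context, the general duplicate-weedout strategy for a semi-join can be sketched as follows (this is an illustration of the technique, not MySQL's implementation, which uses a temporary table keyed on row IDs): execute the semi-join as a plain join, then emit each outer row at most once.

```python
# Minimal sketch of duplicate weedout for a semi-join such as
#   SELECT * FROM outer_t WHERE outer_t.k IN (SELECT k FROM inner_t)
# executed as an ordinary join plus de-duplication on the outer row id.
# (Illustrative only; MySQL's executor weeds via a temporary table.)

def semi_join_weedout(outer_rows, inner_rows):
    inner_keys = [r["k"] for r in inner_rows]
    seen = set()   # outer row ids already emitted
    result = []
    for rid, orow in enumerate(outer_rows):
        for k in inner_keys:
            if orow["k"] == k and rid not in seen:
                seen.add(rid)        # weed out any further matches
                result.append(orow)
    return result

outer = [{"id": 1, "k": "a"}, {"id": 2, "k": "b"}, {"id": 3, "k": "z"}]
inner = [{"k": "a"}, {"k": "a"}, {"k": "b"}]
# Rows 1 and 2 appear once each despite duplicate inner matches;
# row 3 has no match and is dropped.
print(semi_join_weedout(outer, inner))
```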
* When representing a materialized semi-join in the query
plan, the MySQL Optimizer inserted extra QEP_TAB and
JOIN_TAB objects to represent access to the materialized
subquery result. The join pushdown analyzer did not
properly set up its internal data structures for these,
leaving them uninitialized instead. This meant that later
use of any Item objects referencing the materialized
semi-join accessed an uninitialized tableno column when
accessing a 64-bit tableno bitmask, possibly referring to
a point beyond its end and leading to an unplanned shutdown
of the SQL node. (Bug #87971, Bug #26919289)
* The NDBFS block's OM_SYNC flag is intended to make sure
that all FSWRITEREQ signals used for a given file are
synchronized, but it was ignored on platforms that do not
support O_SYNC, meaning that this feature did not behave
properly on those platforms. The synchronization flag is
now honored on platforms that do not support O_SYNC.
(Bug #76975, Bug #21049554)
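The general technique for honoring a synchronized-write flag without O_SYNC is to fall back to an explicit fsync() after each write. The sketch below shows that pattern in Python under that assumption; it is not NDBFS source code.

```python
# Sketch of honoring a "synchronized write" flag on platforms without
# O_SYNC: fall back to an explicit fsync() after each write.
import os
import tempfile

HAVE_O_SYNC = hasattr(os, "O_SYNC")

def synced_write(path, data):
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    if HAVE_O_SYNC:
        flags |= os.O_SYNC      # kernel flushes on every write
    fd = os.open(path, flags, 0o600)
    try:
        os.write(fd, data)
        if not HAVE_O_SYNC:
            os.fsync(fd)        # explicit flush when O_SYNC is absent
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "redo.log")
    synced_write(p, b"epoch data")
    with open(p, "rb") as f:
        print(f.read())         # b'epoch data'
```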