Durability debt

Planet MySQL - Wed, 09/19/2018 - 15:01
I define durability debt to be the amount of work that must be done to persist changes that have been applied to a database. Dirty pages must be written back for a B-Tree. Compaction must be done for an LSM. Durability debt has IO and CPU components. The common IO overhead is from writing something back to the database. The common CPU overhead is from computing a checksum and optionally from compressing data.

From an incremental perspective (pending work per modified row) an LSM usually has less IO and more CPU durability debt than a B-Tree. From an absolute perspective the maximum durability debt can be much larger for an LSM than a B-Tree which is one reason why tuning can be more challenging for an LSM than a B-Tree.

In this post by LSM I mean LSM with leveled compaction.

B-Tree

The maximum durability debt for a B-Tree is limited by the size of the buffer pool. If the buffer pool has N pages then there will be at most N dirty pages to write back. If the buffer pool is 100G then there will be at most 100G to write back. The IO is more random or less random depending on whether the B-Tree is update-in-place, copy-on-write random or copy-on-write sequential. I prefer to describe this as small writes (page at a time) or large writes (many pages grouped into a larger block) rather than random or sequential. InnoDB uses small writes and WiredTiger uses larger writes. The distinction between small writes and large writes is more important with disks than with SSD.

There is a small CPU overhead from computing the per-page checksum prior to write back. There can be a larger CPU overhead from compressing the page. Compression isn't popular with InnoDB but is popular with WiredTiger.

There can be an additional IO overhead when torn-write protection is enabled as provided by the InnoDB double write buffer.

LSM

The durability debt for an LSM is the work required to compact all data into the max level (Lmax). A byte in the write buffer causes more debt than a byte in the L1 because more work is needed to move the byte from the write buffer to Lmax than from L1 to Lmax.

The maximum durability debt for an LSM is limited by the size of the storage device. Users can configure RocksDB such that the level 0 (L0) is huge. Assume that the database needs 1T of storage were it compacted into one sorted run and the write-amplification to move data from the L0 to the max level (Lmax) is 30. Then the maximum durability debt is 30 * sizeof(L0). The L0 is usually configured to be <= 1G in which case the durability debt from the L0 is <= 30G. But were the L0 configured to be <= 1T then the debt from it could grow to 30T.
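To make the arithmetic concrete, here is a quick back-of-the-envelope check (a sketch of mine; the write-amp of 30 and the two L0 sizes are the figures from the paragraph above):

# Upper bound on LSM durability debt = write-amp to Lmax * sizeof(L0)
echo "$(( 30 * 1 ))G of pending compaction work for a 1G L0"
echo "$(( 30 * 1024 ))G of pending compaction work for a 1T L0"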

I use the notion of per-level write-amp to explain durability debt in an LSM. Per-level write-amp is defined in the next section. Per-level write-amp is a proxy for all of the work done by compaction, not just the data to be written. When the per-level write-amp is X then for compaction from Ln to Ln+1 for every key-value pair from Ln there are ~X key-value pairs from Ln+1 for which work is done including:
  • Read from Ln+1. If Ln is a small level then the data is likely to be in the LSM block cache or OS page cache. Otherwise it is read from storage. Some reads will be cached; all writes go to storage. So the write rate to storage is > the read rate from storage.
  • If the level is compressed, the key-value pairs are decompressed for each block not already in the LSM block cache.
  • The key-value pairs from Ln+1 are merged with Ln. Note that this is a merge, not a merge sort because the inputs are ordered. The number of comparisons might be less than you expect because one iterator is ~X times larger than the other and there are optimizations for that.
The output from the merge is then compressed and written back to Ln+1. Some of the work above (reads, decompression) is also done for Ln, but most of the work comes from Ln+1 because it is many times larger than Ln. I stated above that an LSM usually has less IO and more CPU durability debt per modified row. The extra CPU overheads come from decompression and the merge. I am not sure whether to count the compression overhead as extra.
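As a toy illustration of the merge step above (this has nothing to do with RocksDB code; it only shows the distinction between merging ordered inputs and a full sort), merging already-sorted runs is a single pass:

# Two pre-sorted "runs", standing in for the ordered inputs from Ln and Ln+1.
printf 'a\nc\ne\n' > run_ln.txt
printf 'b\nd\nf\ng\nh\n' > run_ln1.txt
# sort -m merges files that are already sorted; it does not re-sort them,
# so the work is one comparison-driven pass over both inputs.
sort -m run_ln.txt run_ln1.txt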

Assuming the per-level growth factor is 10 and f is 0.7 (see below), the per-level write-amp is 7 for L1 and larger levels. If sizeof(L1) == sizeof(L0) then the per-level write-amp is 2 for the L0. And the per-level write-amp is always 1 for the write buffer.

From this we can estimate the pending write-amp for data at any level in the LSM tree.
  1. Key-value pairs in the write buffer have the most pending write-amp because they are furthest from the max level. Key-value pairs in the max level (L5 in this case) have none.
  2. Starting with L2, there is more durability debt from a full Ln+1 than from a full Ln -- while there is more pending write-amp per byte in Ln, there is much more data in Ln+1.
  3. Were I given the choice of L1, L2, L3 and L4 when first placing a write in the LSM tree, I would choose L4 as that has the least pending write-amp.
  4. Were I to choose to make one level 10% larger, I would prefer to do that for a smaller level, given the values in the rel size X pend column.

legend:
w-amp per-lvl   : per-level write-amp
w-amp pend      : write-amp to move a byte to Lmax from this level
rel size        : size of level relative to write buffer
rel size X pend : write-amp to move all data from that level to Lmax

        w-amp     w-amp    rel      rel size
level   per-lvl   pend     size     X pend
-----   -------   -----    -----    --------
wbuf    1         31           1          31
L0      2         30           4         120
L1      7         28           4         112
L2      7         21          40         840
L3      7         14         400        5600
L4      7          7        4000       28000
L5      0          0       40000           0
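The table can be reproduced with a short calculation. The sketch below is my own (plain awk, not output from RocksDB); the per-level write-amp values and relative sizes are the ones described above, and the pending and rel size X pend columns are derived from them, plus a rough total for a fully loaded tree:

awk 'BEGIN {
  n = split("wbuf L0 L1 L2 L3 L4 L5", level, " ")
  split("1 2 7 7 7 7 0",            perlvl,  " ")  # per-level write-amp (f*X = 0.7*10 = 7)
  split("1 4 4 40 400 4000 40000",  relsize, " ")  # level size relative to the write buffer
  # pending write-amp for a level = work to move a byte from it all the way to Lmax
  pend[n] = perlvl[n]
  for (i = n - 1; i >= 1; i--) pend[i] = perlvl[i] + pend[i + 1]
  printf "%-6s %8s %6s %9s %10s\n", "level", "per-lvl", "pend", "rel size", "relXpend"
  for (i = 1; i <= n; i++) {
    printf "%-6s %8d %6d %9d %10d\n", level[i], perlvl[i], pend[i], relsize[i], relsize[i] * pend[i]
    total += relsize[i] * pend[i]
  }
  # rough total debt, in write-buffer-sized units, if every level were full
  printf "%-6s %25s %10d\n", "total", "", total
}'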
Per-level write-amp in an LSM

The per-level write-amplification is the work required to move data between adjacent levels. The per-level write-amp for the write buffer is 1 because a write buffer flush creates a new SST in L0 without reading/re-writing an SST already in L0.

I assume that any key in Ln is already in Ln+1 so that merging Ln into Ln+1 does not make Ln+1 larger. This isn't true in real life, but this is a model.

The per-level write-amp for Ln is approximately sizeof(Ln+1) / sizeof(Ln). For n=0 this is 2 with a typical RocksDB configuration. For n>0 this is the per-level growth factor and the default is 10 in RocksDB. Assume that the per-level growth factor is equal to X; in reality the per-level write-amp is f*X rather than X, where f ~= 0.7. See this excellent paper or examine the compaction IO stats from a production RocksDB instance. Too many excellent conference papers assume it is X rather than f*X in practice.

The per-level write-amp for Lmax is 0 because compaction stops at Lmax.

sysbench for MySQL 8.0

Planet MySQL - Wed, 09/19/2018 - 05:00

Alexey made this amazing tool that the majority of MySQL DBAs are using, but if you use the sysbench provided with your GNU/Linux distribution, or from its repository on packagecloud.io, you won't be able to use it with the new default authentication plugin in MySQL 8.0 (caching_sha2_password).

This is because most of the sysbench binaries are compiled with the MySQL 5.7 client library or the MariaDB one. There is an issue on GitHub where Alexey explains this.

So if you want to use sysbench with MySQL 8.0 and avoid the message below,

error 2059: Authentication plugin 'caching_sha2_password' cannot be loaded:
/usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file:
No such file or directory

you have 3 options:

  1. modify the user to use mysql_native_password as the authentication method (see the example below)
  2. compile sysbench, linking it against mysql-community-libs-8.0.x (/usr/lib64/mysql/libmysqlclient.so.21)
  3. use the rpm for RHEL7/CentOS7/OL7 available in this post:

sysbench-1.0.15-1.el7.centos.x86_64 (Download)
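For option 1, a minimal sketch of what that looks like (the user name, password and sysbench parameters below are placeholders of mine, not values from this post):

# Switch the benchmark user back to the old authentication plugin.
mysql -uroot -p -e "ALTER USER 'sbtest'@'%' IDENTIFIED WITH mysql_native_password BY 'sbtest_pwd';"

# sysbench can now connect even when built against the 5.7 client library.
sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=sbtest \
         --mysql-password=sbtest_pwd --mysql-db=sbtest --tables=8 \
         --table-size=100000 prepare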


How to Monitor Multiple MySQL Instances Running on the Same Machine - ClusterControl Tips & Tricks

Planet MySQL - Wed, 09/19/2018 - 04:17

Requires ClusterControl 1.6 or later. Applies to MySQL-based instances/clusters.

On some occasions, you might want to run multiple instances of MySQL on a single machine. You might want to give different users access to their own MySQL servers that they manage themselves, or you might want to test a new MySQL release while keeping an existing production setup undisturbed.

It is possible to use a different MySQL server binary per instance, or use the same binary for multiple instances (or a combination of the two approaches). For example, you might run a server from MySQL 5.6 and one from MySQL 5.7, to see how the different versions handle a certain workload. Or you might run multiple instances of the latest MySQL version, each managing a different set of databases.


Whether or not you use distinct server binaries, each instance that you run must be configured with unique values for several operating parameters. This eliminates the potential for conflict between instances. You can use MySQL Sandbox to create multiple MySQL instances. Or you can use mysqld_multi available in MySQL to start or stop any number of separate mysqld processes running on different TCP/IP ports and UNIX sockets.
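As a rough illustration of the mysqld_multi route (the paths, ports and group numbers below are example values, not something prescribed by ClusterControl), each instance gets its own [mysqldN] group in my.cnf and mysqld_multi starts and stops them by group number:

# Example only: two instances defined in /etc/my.cnf with unique port/socket/datadir.
cat >> /etc/my.cnf <<'EOF'
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port       = 3306
socket     = /var/lib/mysql1/mysql.sock
datadir    = /var/lib/mysql1
pid-file   = /var/lib/mysql1/mysqld.pid

[mysqld2]
port       = 3307
socket     = /var/lib/mysql2/mysql.sock
datadir    = /var/lib/mysql2
pid-file   = /var/lib/mysql2/mysqld.pid
EOF

mysqld_multi start 1,2    # start both instances
mysqld_multi report       # show which of the defined groups are running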

In this blog post, we’ll show you how to configure ClusterControl to monitor multiple MySQL instances running on one host.

ClusterControl Limitation

At the time of writing, ClusterControl does not support monitoring of multiple instances on one host per cluster/server group. It assumes the following best practices:

  • Only one MySQL instance per host (physical server or virtual machine).
  • MySQL data redundancy should be configured on N+1 servers.
  • All MySQL instances are running with uniform configuration across the cluster/server group, e.g., listening port, error log, datadir, basedir, socket are identical.

With regards to the points mentioned above, ClusterControl assumes that in a cluster/server group:

  • MySQL instances are configured uniformly across a cluster: same port, same location of logs, same base/data directory, and other critical configuration.
  • It monitors, manages and deploys only one MySQL instance per host.
  • The MySQL client must be installed on the host and available on the executable path for the corresponding OS user.
  • MySQL is bound to an IP address that is reachable by the ClusterControl node.
  • It keeps monitoring host statistics (e.g., CPU/RAM/disk/network) for each MySQL instance individually. In an environment with multiple instances per host, you should expect redundant host statistics, since it monitors the same host multiple times.

With the above assumptions, the following ClusterControl features do not work for a host with multiple instances:

Backup - Percona XtraBackup does not support multiple instances per host, and mysqldump executed by ClusterControl only connects to the default socket.

Process management - ClusterControl uses the standard ‘pgrep -f mysqld_safe’ to check if MySQL is running on that host. With multiple MySQL instances, this approach produces false positives. As such, automatic recovery for the node/cluster won’t work.

Configuration management - ClusterControl provisions the standard MySQL configuration directory. It usually resides under /etc/ and /etc/mysql.

Workaround

Monitoring multiple MySQL instances on a machine is still possible with ClusterControl via a simple workaround: each MySQL instance must be treated as a single entity per server group.

In this example, we have 3 MySQL instances on a single host created with MySQL Sandbox:

ClusterControl monitoring multiple instances on same host

We created our MySQL instances using the following commands:

$ su - sandbox
$ make_multiple_sandbox mysql-5.7.23-linux-glibc2.12-x86_64.tar.gz

By default, MySQL Sandbox creates MySQL instances that listen on 127.0.0.1. It is necessary to configure each node appropriately to make them listen on all available IP addresses. Here is the summary of our MySQL instances in the host:

[sandbox@master multi_msb_mysql-5_7_23]$ cat default_connection.json
{
 "node1":
    {
        "host": "master",
        "port": "15024",
        "socket": "/tmp/mysql_sandbox15024.sock",
        "username": "msandbox@127.%",
        "password": "msandbox"
    },
 "node2":
    {
        "host": "master",
        "port": "15025",
        "socket": "/tmp/mysql_sandbox15025.sock",
        "username": "msandbox@127.%",
        "password": "msandbox"
    },
 "node3":
    {
        "host": "master",
        "port": "15026",
        "socket": "/tmp/mysql_sandbox15026.sock",
        "username": "msandbox@127.%",
        "password": "msandbox"
    }
}

The next step is to modify the configuration of the newly created instances. Go to my.cnf for each of them and comment out the bind_address variable:

[sandbox@master multi_msb_mysql-5_7_23]$ ps -ef | grep mysqld_safe
sandbox  13086      1  0 08:58 pts/0    00:00:00 /bin/sh bin/mysqld_safe --defaults-file=/home/sandbox/sandboxes/multi_msb_mysql-5_7_23/node1/my.sandbox.cnf
sandbox  13805      1  0 08:58 pts/0    00:00:00 /bin/sh bin/mysqld_safe --defaults-file=/home/sandbox/sandboxes/multi_msb_mysql-5_7_23/node2/my.sandbox.cnf
sandbox  14065      1  0 08:58 pts/0    00:00:00 /bin/sh bin/mysqld_safe --defaults-file=/home/sandbox/sandboxes/multi_msb_mysql-5_7_23/node3/my.sandbox.cnf

[sandbox@master multi_msb_mysql-5_7_23]$ vi my.cnf
#bind_address = 127.0.0.1
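If you prefer not to edit each file by hand, a small loop like the following (a sketch, assuming the my.sandbox.cnf paths shown in the ps output above) does the same for all three nodes:

# Comment out bind_address in every node's config file.
for n in node1 node2 node3; do
  sed -i 's/^bind_address/#bind_address/' \
      /home/sandbox/sandboxes/multi_msb_mysql-5_7_23/$n/my.sandbox.cnf
done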

Then install the mysql client on your master node and restart all instances using the restart_all script.

[sandbox@master multi_msb_mysql-5_7_23]$ yum install mysql
[sandbox@master multi_msb_mysql-5_7_23]$ ./restart_all
# executing "stop" on /home/sandbox/sandboxes/multi_msb_mysql-5_7_23
executing "stop" on node 1
executing "stop" on node 2
executing "stop" on node 3
# executing "start" on /home/sandbox/sandboxes/multi_msb_mysql-5_7_23
executing "start" on node 1
. sandbox server started
executing "start" on node 2
. sandbox server started
executing "start" on node 3
. sandbox server started

From ClusterControl, we need to perform ‘Import’ for each instance, as we need to isolate each of them in its own group to make this work.

ClusterControl import existing server

For node1, enter the following information in ClusterControl > Import:

ClusterControl import existing server

Make sure to specify the proper port (different for each instance) and host (the same for all instances).

You can monitor the progress by clicking on the Activity/Jobs icon in the top menu.

ClusterControl import existing server details

You will see node1 in the UI once ClusterControl finishes the job. Repeat the same steps to add another two nodes with port 15025 and 15026. You should see something like the below once they are added:

ClusterControl Dashboard

There you go. We just added our existing MySQL instances into ClusterControl for monitoring. Happy monitoring!

PS.: To get started with ClusterControl, click here!

Tags: MySQL, MariaDB, ClusterControl, monitoring, multiple instances

Tutorial Schedule for Percona Live Europe 2018 Is Live

Planet MySQL - Wed, 09/19/2018 - 00:53

Percona has revealed the line-up of in-depth tutorials for the Percona Live Europe 2018 Open Source Database Conference, taking place November 5–7, 2018 at the Radisson Blu Hotel in Frankfurt, Germany. Secure your spot now with Advanced Registration prices. Be sure to buy your tickets soon, as ticket prices will only head up, not down! Sponsorship opportunities for the conference are still available.

Percona Live Europe 2018 Open Source Database Conference is the premier open source database event. Our theme this year is “Connect. Accelerate. Innovate.”  Percona Live is the place to learn about how open source database technology can power your applications, improve your websites and solve your critical database issues.

Monday, November 5: Tutorial Day

Tutorials take place throughout the day on Monday, November 5. Tutorials are three hours long and provide practical, in-depth knowledge exchange on critical open source database issues. The line-up includes:

  • Query Optimization with MySQL 8.0 and MariaDB 10.3: The Basics with Jaime Crespo – Wikimedia Foundation
  • Hands on ProxySQL with René Cannaò – ProxySQL
  • ElasticSearch 101 with Antonios Giannopoulos – ObjectRocket
  • MySQL Performance Schema in Action: the Complete Tutorial with Sveta Smirnova and Alexander Rubin – Percona
  • MySQL InnoDB Cluster in a Nutshell : The Saga Continues with 8.0 with Frédéric Descamps – Oracle
  • Introduction to PostgreSQL for MySQL and Oracle DBAs with Avinash Vallarapu – Percona
  • InnoDB Architecture and Optimization with Peter Zaitsev – Percona
  • MongoDB: Replica Sets and Sharded Clusters with Adamo Tonete – Percona
  • High Availability PostgreSQL and Kubernetes with Google Cloud presented by Alexis Guajardo – Google
  • Open Source Database Performance Optimization and Monitoring with PMM with Michael Coburn, Vinicius Grippa, and Avinash Vallarapu – Percona
  • Percona XtraDB Cluster Tutorial presented by Tibor Köröcz – Percona
  • Mastering PostgreSQL Administration with Bruce Momjian – EnterpriseDB
and a sneak peek at some of the sessions

Of course, we have a stellar line-up of talks, too! Here’s a tantalising glimpse of just some of the talks you could MISS if you don’t head to Frankfurt in November.

Tuesday 6th November
  • Percona Server 8.0 – Laurynas Biveinis – Percona
  • MySQL 8.0 Performance: Scalability & Benchmarks – Dimitri Kravtchuk – Oracle
  • MySQL Group Replication : the magic explained – Frédéric Descamps – Oracle
  • Explaining the Postgres Query Optimizer – Bruce Momjian – EnterpriseDB
  • TLS for MySQL at large scale – Jaime Crespo – Wikimedia Foundation
  • BlaBlaCar – 100% Containers Powered Carpooling – Maxime Fouilleul – BlaBlaCar
  • A Year in Google Cloud – Carmen Mason, Alan Mason – VitalSource Technologies
  • Demystifying MySQL Replication Crash Safety – Jean-François Gagné
  • MongoDB Shard 101 – Adamo Tonete – Percona
  • Highway to Hell or Stairway to Cloud? – Alexander Kukushkin – Zalando
  • MongoDB administration cool tips – Gabriel Ciciliani – Pythian
Wednesday 7th November
  • PostgreSQL Enterprise Features – Michael Banck – credativ GmbH
  • Open Source Databases and Non-Volatile Memory – Frank Ober – Intel Memory Group
  • MariaDB 10.3 Optimizer and beyond – Vicentiu Ciorbaru – MariaDB Foundation
  • MariaDB system-versioned tables – Federico Razzoli – PayProp

A shout out to our fantastic Conference Committee who have been working hard to review the tutorial and talk submissions: we had over 200! Thank you!

The Radisson Blu Hotel, Frankfurt

Percona Live Europe 2018 Open Source Database Conference will be held at the Radisson Blu Hotel, Frankfurt, Franklinstraße 65, 60486 Frankfurt am Main, Germany.

The Radisson Blu enjoys an enviable location in the Bockenheim District, just off the A66 motorway – only one kilometer from Messe Frankfurt, one of the world’s largest exhibition complexes. They’re also just 10 minutes from the city center, and Frankfurt International Airport (FRA) is a quick 15-minute drive away.

Book your hotel using Percona’s special room block rate, available only until September 20.

Sponsorships

Sponsorship opportunities for Percona Live Europe 2018 Open Source Database Conference are available and offer the opportunity to interact with the DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. Contact events@percona.com for sponsorship details.

The post Tutorial Schedule for Percona Live Europe 2018 Is Live appeared first on Percona Database Performance Blog.

