Planet MySQL

Planet MySQL - https://planet.mysql.com

What's New in MariaDB Server 10.3

3 hours 35 min ago
By maxmether, Fri, 05/25/2018 - 18:52

We are happy to announce the general availability (GA) of MariaDB Server 10.3! This release is a big milestone for the development of MariaDB Server and is the result of a huge effort by the development team and contributors – thanks to everyone involved! With our previous major release of MariaDB Server 10.2 last year, we started a journey of adding more enterprise-grade features to better close the gap with proprietary databases. With MariaDB Server 10.3 we take a huge leap on that journey by being the first enterprise open source database to add features like temporal data processing (through system versioning) as well as compatibility with Oracle sequences and Oracle PL/SQL. At the same time, we want to stay true to our open source and innovative roots by adding support for new storage engines to be able to more easily adapt to different workloads and different hardware available to users. This path allows us to adapt quickly to an ever-changing landscape where new innovations are being created at a constantly accelerated pace. This is our greatest release yet and, with this release, we want to put our stake in the Enterprise database category.

The key enhancements of MariaDB Server 10.3 can be put in the following categories:

  • Temporal data processing (system-versioned tables)

  • Oracle compatibility features

  • Purpose-built storage engines

 

Temporal Data Processing

Temporal data processing through our system versioning feature is one of the more exciting additions in the MariaDB Server 10.3 release. With system versioning, the database will keep track of all changes made to every row in the table. The old versions of the rows are not visible through normal query syntax, but by using a special syntax you can access all of the old versions of the row. This capability lends itself to a large number of use cases, anything from auditing and forensics (finding the exact point-in-time result set from a suspicious query executed some time ago) to things like analyzing changes in your data, comparing customer preferences year to year and a multitude of other possibilities. This feature can be turned on per table and the history can also be deleted periodically so that your table doesn’t grow indefinitely. The use cases are both exciting and endless! For more information on system versioning read our manual or this blog on automatic data versioning.
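To make the syntax concrete, here is a minimal sketch (using a hypothetical customer_prefs table and illustrative timestamps) of how a system-versioned table is created, queried at a point in time, and pruned:

-- Create a table with history tracking enabled
CREATE TABLE customer_prefs (
  customer_id INT PRIMARY KEY,
  newsletter  BOOLEAN
) WITH SYSTEM VERSIONING;

-- Normal DML works as usual; old row versions are kept behind the scenes
UPDATE customer_prefs SET newsletter = FALSE WHERE customer_id = 42;

-- Query the table as it looked at a given point in time
SELECT * FROM customer_prefs
  FOR SYSTEM_TIME AS OF TIMESTAMP '2018-05-01 00:00:00'
  WHERE customer_id = 42;

-- Versioning can also be added to an existing table (here a hypothetical orders table),
-- and old history can be pruned so the table does not grow forever
ALTER TABLE orders ADD SYSTEM VERSIONING;
DELETE HISTORY FROM customer_prefs BEFORE SYSTEM_TIME '2018-01-01 00:00:00';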

 

Oracle Compatibility

As the demand for MariaDB Server has increased in larger enterprises we have also seen a need for features that are readily available in proprietary databases. In order for MariaDB to be easier to use for DBAs and skilled database engineers from other products, we wanted to add as much compatibility as possible.

In MariaDB Server 10.3, we added a new stored routine syntax in addition to the already existing MariaDB SQL/PSM syntax. We now support MariaDB SQL/PL which is a syntax designed to be compatible with Oracle PL/SQL. This way, migrating existing applications is a lot easier and existing skills can be used without complex retraining. In the process we also added several new constructs to our stored procedure support like new ROW data types.
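As a rough illustration (not taken from the release announcement), the Oracle-compatible dialect is enabled with sql_mode=ORACLE; the small procedure below assumes a hypothetical employees table:

SET SQL_MODE = ORACLE;

DELIMITER //
CREATE OR REPLACE PROCEDURE raise_salary(p_id IN NUMBER, p_amount IN NUMBER) AS
BEGIN
  -- Oracle-style parameter modes and the NUMBER type are accepted in this mode
  UPDATE employees SET salary = salary + p_amount WHERE id = p_id;
END;
//
DELIMITER ;

CALL raise_salary(1001, 500);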

The new syntax isn’t the only new compatibility feature; we also added sequences in order to have a more flexible way of creating unique primary keys than the existing auto_increment feature. This feature is fully compatible with Oracle sequences. Learn more about how to use sequences in this blog post. Together with features added previously (like window functions, common table expressions (CTEs), etc.), we now have a deep set of enterprise-grade features that can tackle any type of application need.
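For example, a sequence can be created and consumed like this (a small sketch; the order_seq name and orders table are illustrative):

CREATE SEQUENCE order_seq START WITH 1000 INCREMENT BY 1;

-- Fetch values explicitly, e.g. when inserting rows
SELECT NEXTVAL(order_seq);           -- MariaDB-style
SELECT NEXT VALUE FOR order_seq;     -- ANSI-style, equivalent
SELECT LASTVAL(order_seq);           -- last value generated in this session

INSERT INTO orders (id, item) VALUES (NEXTVAL(order_seq), 'widget');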

 

Purpose-Built Storage Engines

At MariaDB, we believe in using the right tool for the right trade. However, we don’t feel that you need to change everything in order to achieve that. We have a unique architecture with pluggable storage engines that allows the user to adapt the database to the use case and workload without changing the main characteristics and features. We believe that this flexibility serves the best interest of the user and we will work on further advancing this with future versions of MariaDB. This architecture will enable both the community and our team to innovate further by adding storage engines designed for new hardware and new use cases. In MariaDB Server 10.3, we introduce two new storage engines that are declared stable, MyRocks and Spider.

MyRocks comes from a collaboration with Facebook where the storage engine is built on top of RocksDB – an open source project mainly maintained by Facebook. The MyRocks storage engine is built using a log-structured merge tree (LSM tree) architecture and is well adapted to high write workloads. MyRocks also has a very high compression ratio and is built to optimize the lifetime of SSD disks.

Spider is a storage engine designed for extreme scale. The Spider storage engine allows you to shard a specific table across multiple nodes. It uses the partitioning protocol to define how the table should be split up and each individual shard will then reside on a remote MariaDB Server that will only handle queries for that particular shard. With Spider you get almost linear scaling for INSERTS and key lookup read queries.
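As a sketch of the general shape of a Spider setup (following the pattern in the MariaDB documentation; the backend hosts, credentials and orders table here are purely illustrative):

-- Define the remote data nodes on the Spider head node
CREATE SERVER backend1 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.1', DATABASE 'app', USER 'spider_user', PASSWORD 'secret', PORT 3306);
CREATE SERVER backend2 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.2', DATABASE 'app', USER 'spider_user', PASSWORD 'secret', PORT 3306);

-- A Spider table whose partitions (shards) live on the remote servers
CREATE TABLE app.orders (
  id          BIGINT NOT NULL,
  customer_id BIGINT,
  PRIMARY KEY (id)
) ENGINE=SPIDER COMMENT='wrapper "mysql", table "orders"'
PARTITION BY HASH (id) (
  PARTITION p1 COMMENT = 'srv "backend1"',
  PARTITION p2 COMMENT = 'srv "backend2"'
);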

  And there’s more ...

In addition to this, we have added a multitude of features to help speed up schema operations (like instant ADD COLUMN) and other optimizations and compatibility features. The ADD COLUMN feature is another example of our collaboration with customers and partners including Alibaba, Tencent and ServiceNow, and is just the beginning of making heavy DDL operations more usable.
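As a quick illustration of instant ADD COLUMN (the table and column names are hypothetical), adding a column in 10.3 no longer rebuilds the table, so it completes almost immediately even on very large tables:

-- Completes without copying or rewriting the existing rows
ALTER TABLE orders ADD COLUMN status VARCHAR(20) DEFAULT 'new';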

Want all the details? Get a full list of features in MariaDB Server 10.3.

Get MariaDB Server 10.3 as part of the MariaDB TX 3.0 download – now available.


Categories: Web Technologies

The Perfect Server - Ubuntu 18.04 (Bionic Beaver) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3.1

9 hours 20 min ago
This tutorial shows how to install an Ubuntu 18.04 LTS (Bionic Beaver) server with Apache, BIND, Dovecot and ISPConfig 3.1. ISPConfig is a web hosting control panel that allows you to configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers the installation of Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).
Categories: Web Technologies

Recommended fix for MySQL Checker Query Script for Tungsten Clustering

11 hours 39 min ago

We have identified an issue in a script that is executed as part of the Tungsten Clustering solution. The script executes a small query against the internal tables used by the replicator in order to help identify the current status and latency of the replication process. We believe this script may occasionally fail under some very specific conditions, which would not normally be an issue, but the knock-on effect is to create Out of Memory errors and instability in the way the current replication state is identified.

The issue has been fixed in the upcoming 6.0.1 and 5.3.2 releases, but we have identified that customers on versions of the Tungsten Clustering solution from v4.0 upwards would benefit from using an updated version of the script.

If you are using Tungsten Clustering 4.x or higher, it is recommended that you apply this patch to improve the stability of your clustering installation. 

The script can be updated in a number of different ways, including by using a script we have written that updates the installed versions of the file in place. Due to the nature of the script and the installation process, we recommend that even if you manually update or use the update script to fix existing installations, you should also update the version in the package or staging directories so that updates and new installations do not wipe out the changes.

Note: Updating the live version of the file in an installed directory saves you having to reinstall or restart any of the clustering components. When the file has been updated, the manager will automatically reload and use the new version.

Updating installed directories using a script

The easiest way to apply the fix is to use our script, which simply finds all files named *mysql_checker_query.sql* and swaps out just one line.
Since the standard location for staging extraction is /opt/continuent/software, we expect to be able to patch the files found in the staging directories along with those found in the installed directory.

To apply the patch to both the installed files and staging templates by using our update script:

  1. Download the script file we have written from fix_mysql_checker_query.sh
  2. Make the script executable:
chmod +x fix_mysql_checker_query.sh
  3. Run the script. By default it searches for files in /opt/continuent, but you may also specify a directory. For example:
./fix_mysql_checker_query.sh

or

./fix_mysql_checker_query.sh /opt/tungsten

The script will update both the installed files and any staging templates located under the target directory.

Updating via staging directory (requires tpm update)

To update a version of the software extracted from the distribution tarball:

  1. Download the mysql_checker_query.sql.tpl file.
  2. Change to the extracted software directory
  3. Copy the downloaded file into ./tungsten-manager/samples/conf/mysql_checker_query.sql.tpl

$ cp ~/mysql_checker_query.sql.tpl ./tungsten-manager/samples/conf/mysql_checker_query.sql.tpl

If you want to update your software now, bearing in mind that this will restart the managers and connectors, perform an update through tpm to update the distribution:

./tools/tpm update --replace-release

Updating an installed directory manually

To update the script by hand, you will need to update a version of the script for every service within your installation.

  1. Download the mysql_checker_query.sql.tpl file.
  2. Copy the file into a new file named mysql_checker_query.sql
  3. Edit the file and change each instance of the string @{MGR_REPL_SCHEMA} to the name of the tracking schema. For example, if your service is called ‘east’ then the tracking schema is ‘tungsten_east’.
  4. Copy the temporary file into the installed configuration directory. For example, /opt/continuent/tungsten/tungsten-manager/conf/mysql_checker_query.sql
Categories: Web Technologies

Ensure better defaults with InnoDB Dedicated server.

13 hours 12 min ago

In most of our consulting projects we see customers with a dedicated DB (MySQL) server that is still running with the default configuration, without any optimisation for the underlying hardware. Idle hardware, like idle money, gives you no returns.

If you come from a non-DBA background and have chosen InnoDB as your engine of choice, the next question is: which major variables need to be tuned for the available hardware? Here is the answer.

In this post, we detail the variable innodb_dedicated_server in MySQL 8.0.11. This variable answers the question above, and in general MySQL 8.0 ships with better default values for production use cases.

If innodb_dedicated_server is enabled in my.cnf, MySQL will tune the variables below based on the amount of memory detected on the server.

Overview:

innodb_buffer_pool_size

Buffer pool size by detected server memory:

  • Less than 1 GB: 128 MiB (the innodb_buffer_pool_size default)
  • Up to 4 GB: detected server memory * 0.5
  • More than 4 GB: detected server memory * 0.75

innodb_buffer_pool_size can be set to up to 80% of physical RAM at startup (offline). Since MySQL 5.7 the value can also be increased online, without restarting the server.

innodb_log_file_size

Log file size by detected server memory:

  • Less than 1 GB: 48 MiB (the innodb_log_file_size default)
  • Up to 4 GB: 128 MiB
  • Up to 8 GB: 512 MiB
  • Up to 16 GB: 1024 MiB
  • More than 16 GB: 2048 MiB

innodb_log_file_size primarily matters for recovery of the DB server in case of a crash. The usual recommendation for redo log sizing is that the log should be able to hold at least the last hour of transactions for better stability and durability. A redo log that is too small can slow down write performance (i.e., increase commit latency) while transactions wait for redo log space to be freed.
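A rough way to size this is to measure how much redo the server writes over a sample interval and extrapolate to an hour; a simple sketch using the Innodb_os_log_written status counter (the one-minute sample window is just an example):

-- Sample the redo bytes written over one minute and extrapolate to an hour
SELECT variable_value INTO @redo_start
  FROM performance_schema.global_status
 WHERE variable_name = 'Innodb_os_log_written';

SELECT SLEEP(60);

SELECT variable_value INTO @redo_end
  FROM performance_schema.global_status
 WHERE variable_name = 'Innodb_os_log_written';

SELECT ROUND((@redo_end - @redo_start) * 60 / 1024 / 1024) AS estimated_redo_mb_per_hour;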

innodb_flush_method

The flush method is set to O_DIRECT_NO_FSYNC when innodb_dedicated_server is enabled. If the O_DIRECT_NO_FSYNC setting is not available for the server, the default innodb_flush_method setting is used with respect to the architecture.

Testing innodb_dedicated_server:

Configuration:

  • RAM – 4G
  • CPU –  2
  • DISK – 10G SSD

Variable disabled (default):

Memory:

root@localhost :(none) > \! free -m
              total        used        free      shared  buff/cache   available
Mem:           3849         630        1706           8         512        1009
Swap:             0           0           0

my.cnf values:

root@localhost :(none) > \! cat /etc/my.cnf | grep -i innodb
innodb_buffer_pool_size = 1G
innodb_log_file_size = 96M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT

Global values of the variables with innodb_dedicated_server disabled:

root@localhost :(none) > select @@innodb_dedicated_server `Innodb Dedicated Server`, concat(round((@@innodb_buffer_pool_size / 1024 / 1024 / 1024 ),2),' GB') `Innodb Buffer Pool Size`, concat(round((@@innodb_log_file_size / 1024 / 1024),2),' MB') `Innodb Log file size`, @@innodb_flush_method `Innodb Flush Method`\G
*************************** 1. row ***************************
Innodb Dedicated Server: 0
Innodb Buffer Pool Size: 1.00 GB
   Innodb Log file size: 96.00 MB
    Innodb Flush Method: O_DIRECT
1 row in set (0.00 sec)

There is no change in the variables because innodb_dedicated_server is disabled; the other values are taken directly from my.cnf.

Variable enabled:

my.cnf values:

root@localhost :(none) > \! cat /etc/my.cnf | grep -i innodb
innodb_dedicated_server = 1
#innodb_buffer_pool_size = 1G
#innodb_log_file_size = 96M
innodb_flush_log_at_trx_commit = 2
#innodb_flush_method = O_DIRECT

These variables are not affected by innodb_dedicated_server if they are explicitly set in my.cnf, so I have commented them out and restarted the MySQL server.

Global values of the variables with innodb_dedicated_server enabled:

root@localhost :(none) > select @@innodb_dedicated_server `Innodb Dedicated Server`, concat(round((@@innodb_buffer_pool_size / 1024 / 1024 / 1024 ),2),' GB') `Innodb Buffer Pool Size`, concat(round((@@innodb_log_file_size / 1024 / 1024),2),' MB') `Innodb Log file size`, @@innodb_flush_method `Innodb Flush Method`\G
*************************** 1. row ***************************
Innodb Dedicated Server: 1
Innodb Buffer Pool Size: 2.00 GB
   Innodb Log file size: 128.00 MB
    Innodb Flush Method: O_DIRECT_NO_FSYNC
1 row in set (0.00 sec)

So, if innodb_dedicated_server is enabled, those variables are set according to the formulas above.

We will receive a warning message in the MySQL error log if we specify any of these variables in my.cnf along with innodb_dedicated_server.

Error log:

[Warning] [MY-012358] [InnoDB] InnoDB: Option innodb_dedicated_server is ignored for innodb_buffer_pool_size because innodb_buffer_pool_size=419430400 is specified explicitly.

Key Takeaways:

  • Do not enable this variable if the server is not dedicated to MySQL.
  • Do not set the dependent variables explicitly in my.cnf if innodb_dedicated_server is enabled.

This variable helps ensure that you make better use of your hardware, but there is more that can be tuned based on the database usage patterns.

Categories: Web Technologies

Webinar Tues, 5/29: MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together

14 hours 33 min ago

Please join Percona’s Principal Architect Alex Rubin as he presents MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together to give you a highly available cluster database environment on Tuesday, May 29th at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

Register Now

 

In this webinar, Alex will discuss how to deploy a highly available MySQL database environment on Kubernetes/Openshift using Percona XtraDB Cluster (PXC) together with MySQL Proxy to implement read/write splitting.

If you have never used Kubernetes and Openshift, or never used PXC / MySQL Proxy, Alex will do a quick introduction to these technologies. There will also be a demo where Alex sets up a PXC cluster with ProxySQL in Openshift Origin and tries to break it.

By the end of this webinar you will have a better understanding of:

  • How to deploy Percona XtraDB Cluster with ProxySQL for HA solutions
  • How to leverage Kubernetes/Openshift in your environments
  • How to troubleshoot performance issues

Register for the webinar

Alexander Rubin, Principal Consultant

Alexander joined Percona in 2013. He has worked with MySQL since 2000 as a DBA and application developer. Before joining Percona he did MySQL consulting as a principal consultant for over 7 years (starting with MySQL AB in 2006, then Sun Microsystems and then Oracle). He has helped many customers design large, scalable and highly available MySQL systems and optimize MySQL performance. Alexander has also helped customers design Big Data stores with Apache Hadoop and related technologies.

The post Webinar Tues, 5/29: MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Geographic Spatial Reference Systems in MySQL 8.0

17 hours 50 min ago

MySQL 8.0.11 comes with a catalog of 5108 spatial reference system (SRS) definitions, and 479 of these are geographic. We usually just refer to them by SRID, but in this blog post we’ll dive into the details and try to understand the definition itself.…

Categories: Web Technologies

Using dbdeployer to manage MySQL, Percona Server and MariaDB sandboxes

Thu, 05/24/2018 - 14:56

Some years ago, Peter Z wrote a blog post about using MySQL Sandbox to deploy multiple server versions. Last February, Giuseppe introduced us to its successor: dbdeployer. In this blog post we will demonstrate how to use it. There is a lot of information in Giuseppe’s post, so head there if you want a deeper dive.

First step is to install it, which is really easy to do now since it’s developed in Go, and standalone executables are provided. You can get the latest version here.

shell> wget https://github.com/datacharmer/dbdeployer/releases/download/1.5.0/dbdeployer-1.5.0.linux.tar.gz
shell> tar xzf dbdeployer-1.5.0.linux.tar.gz
shell> mv dbdeployer-1.5.0.linux ~/bin/dbdeployer

If you have your ~/bin/ directory in the path, you should now be able to run dbdeployer commands. Let’s start with deploying a latest version vanilla MySQL sandbox.

In the Support Team, we extensively use MySQL Sandbox (the predecessor to dbdeployer) to easily run different flavours and versions of MySQL so that we can test with the same versions our customers present us with. We store MySQL binaries in /opt/, so we can all share them and avoid wasting disk space on duplicated binaries.

The first step to using dbdeployer is getting the binary we want to run, and then unpacking it into the binaries directory.

shell> wget https://dev.mysql.com/get/Downloads/MySQL-8.0/mysql-8.0.11-linux-glibc2.12-x86_64.tar.gz
shell> dbdeployer --sandbox-binary=/opt/mysql/ unpack mysql-8.0.11-linux-glibc2.12-x86_64.tar.gz

This command will extract and move the files to the appropriate directory, which in this case is under /opt/mysql/ as overridden with the --sandbox-binary argument, so we can use them with the deploy command.

Standalone

To create a new standalone MySQL sandbox with the newly extracted binary, we can use the following command.

shell> dbdeployer --sandbox-binary=/opt/mysql/ deploy single 8.0.11
Creating directory /home/vagrant/sandboxes
Database installed in $HOME/sandboxes/msb_8_0_11
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started

You can read the dbdeployer usage output to have even more information on how the tool works. Next, let’s connect to it.

shell> cd sandboxes/msb_8_0_11/
shell> ./use
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.11 MySQL Community Server - GPL
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql [localhost] {msandbox} ((none)) > select @@version, @@port;
+-----------+--------+
| @@version | @@port |
+-----------+--------+
| 8.0.11    |   8011 |
+-----------+--------+
1 row in set (0.00 sec)

And that was it! When creating the new instance, dbdeployer will try to use the same port as the version numbers concatenated. If that port is in use, it will try another one, or we can manually override it with the --port argument.

Replication

We can also easily setup a replication environment with just one command.

shell> dbdeployer --sandbox-binary=/opt/mariadb/ deploy replication 10.2.15
Installing and starting master
. sandbox server started
Installing and starting slave1
. sandbox server started
Installing and starting slave2
. sandbox server started
$HOME/sandboxes/rsandbox_10_2_15/initialize_slaves
initializing slave 1
initializing slave 2
Replication directory installed in $HOME/sandboxes/rsandbox_10_2_15
run 'dbdeployer usage multiple' for basic instructions'

Again, you should run the recommended command to get more insight into what can be done. We can use the ./m script to connect to the master, and ./s1 to connect to the first slave. The ./use_all* scripts can come in handy to run commands in many servers at a time.

Multiple sandboxes

Finally, we will see how to create multiple sandboxes with the same version at the same time.

shell> dbdeployer --sandbox-binary=/opt/percona_server/ deploy multiple 5.7.21
Installing and starting node 1
. sandbox server started
Installing and starting node 2
. sandbox server started
Installing and starting node 3
. sandbox server started
multiple directory installed in $HOME/sandboxes/multi_msb_5_7_21
run 'dbdeployer usage multiple' for basic instructions'

This could be useful for setting up environments that are not already covered by the tool, like Galera clusters or semi-sync replication. With this approach, we will at least have a base to start from, and then can use our own custom scripts. dbdeployer now has templates, which would allow extending functionality to support this, if needed. I have not yet tried to do so, but sounds like an interesting project for the future! Let me know if you would be interested in reading more about it.

The post Using dbdeployer to manage MySQL, Percona Server and MariaDB sandboxes appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Setting up PMM on Google Compute Engine in 15 minutes or less

Thu, 05/24/2018 - 12:32

In this blog post, I will show you how easy it is to set up a Percona Monitoring and Management server on Google Compute Engine from the command line.

First off you will need to have a Google account and install the Cloud SDK tool. You need to create a GCP (Google Cloud Platform) project and enable billing to proceed. This blog assumes you are able to authenticate and SSH into instances from the command line.

Here are the steps to install PMM server in Google Cloud Platform.

1) Create the Compute engine instance with the following command. The example creates an Ubuntu Xenial 16.04 LTS compute instance in the us-west1-b zone with a 100GB persistent disk. For production systems it would be best to use a 500GB disk instead (size=500GB). This should be enough for default data retention settings, although your needs may vary.

jerichorivera@percona-support:~/GCE$ gcloud compute instances create pmm-server --tags pmmserver --image-family ubuntu-1604-lts --image-project ubuntu-os-cloud --machine-type n1-standard-4 --zone us-west1-b --create-disk=size=100GB,type=pd-ssd,device-name=sdb --description "PMM Server on GCP" --metadata-from-file startup-script=deploy-pmm-xenial64.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].
NAME        ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
pmm-server  us-west1-b  n1-standard-4               10.138.0.2   35.233.216.225  RUNNING

Notice that we’ve used

--metadata-from-file startup-script=deploy-pmm-xenial64.sh

The file has the following contents:

jerichorivera@percona-support:~$ cat GCE/deploy-pmm-xenial64.sh
#!/bin/bash
set -v
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
# Format the persistent disk, mount it then add to /etc/fstab
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mkdir -p /mnt/disks/pdssd
sudo mount -o discard,defaults /dev/sdb /mnt/disks/pdssd/
sudo chmod a+w /mnt/disks/pdssd/
sudo cp /etc/fstab /etc/fstab.backup
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/pdssd ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
# Change docker's root directory before installing Docker
sudo mkdir /etc/systemd/system/docker.service.d/
cat << EOF > /etc/systemd/system/docker.service.d/docker.root.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
EOF
sudo apt-get install -y docker-ce
# Creates the deploy.sh script
cat << EOF > /tmp/deploy.sh
#!/bin/bash
set -v
docker pull percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
EOF

This startup script will be executed right after the compute instance is created. The script will format the persistent disk and mount the file system; create a custom Docker unit file for the purpose of changing Docker’s root directory from /var/lib/docker to /mnt/disks/pdssd/docker; install the Docker package; and create the deploy.sh script.

2) Once the compute engine instance is created, SSH into the instance and check that Docker is running and that its root directory points to the desired folder.

jerichorivera@pmm-server:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker.root.conf
   Active: active (running) since Wed 2018-05-16 12:53:30 UTC; 45s ago
     Docs: https://docs.docker.com
 Main PID: 4744 (dockerd)
   CGroup: /system.slice/docker.service
           ├─4744 /usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
           └─4764 docker-containerd --config /var/run/docker/containerd/containerd.toml
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391566708Z" level=warning msg="Your kernel does not support swap memory limit"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391638253Z" level=warning msg="Your kernel does not support cgroup rt period"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391680203Z" level=warning msg="Your kernel does not support cgroup rt runtime"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.392913043Z" level=info msg="Loading containers: start."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.767048674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.847907241Z" level=info msg="Loading containers: done."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875129963Z" level=info msg="Docker daemon" commit=9ee9f40 graphdriver(s)=overlay2 version=18.03.1-ce
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875285809Z" level=info msg="Daemon has completed initialization"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.884566419Z" level=info msg="API listen on /var/run/docker.sock"
May 16 12:53:30 pmm-server systemd[1]: Started Docker Application Container Engine.

3) Add your user to the docker group as shown below and make the deploy.sh script executable.

jerichorivera@pmm-server:~$ sudo usermod -aG docker $USER
jerichorivera@pmm-server:~$ sudo chmod +x /tmp/deploy.sh

4) Log off from the instance, log back in, and then execute the deploy.sh script.

jerichorivera@pmm-server:~$ cd /tmp/
jerichorivera@pmm-server:/tmp$ ./deploy.sh
docker pull percona/pmm-server:latest
latest: Pulling from percona/pmm-server
697841bfe295: Pull complete
fa45d21b9629: Pull complete
Digest: sha256:98d2717b4f0ae83fbca63330c39590d69a7fca7ae6788f52906253ac75db6838
Status: Downloaded newer image for percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
8977102d419cf8955fd8bbd0ed2c663c75a39f9fbc635238d56b480ecca8e749
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
83c2e6db2efc752a6beeff0559b472f012062d3f163c042e5e0d41cda6481d33

5) Finally, create a firewall rule to allow HTTP port 80 to access the PMM Server. For security reasons, we recommend that you secure your PMM server by adding a password, or limit access to it with a stricter firewall rule to specify which IP addresses can access port 80.

jerichorivera@percona-support:~$ gcloud compute firewall-rules create allow-http-pmm-server --allow tcp:80 --target-tags pmmserver --description "Allow HTTP traffic to PMM Server"
Creating firewall...-Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/global/firewalls/allow-http-pmm-server].
Creating firewall...done.
NAME                   NETWORK  DIRECTION  PRIORITY  ALLOW   DENY
allow-http-pmm-server  default  INGRESS    1000      tcp:80
jerichorivera@percona-support:~/GCE$ gcloud compute firewall-rules list
NAME                    NETWORK  DIRECTION  PRIORITY  ALLOW                         DENY
allow-http-pmm-server   default  INGRESS    1000      tcp:80
default-allow-icmp      default  INGRESS    65534     icmp
default-allow-internal  default  INGRESS    65534     tcp:0-65535,udp:0-65535,icmp
default-allow-rdp       default  INGRESS    65534     tcp:3389
default-allow-ssh       default  INGRESS    65534     tcp:22

At this point you should have a PMM Server in GCP running on a Compute Engine instance.

The next step is to install pmm-client on the database hosts and add services for monitoring.

Here I’ve launched a single standalone Percona Server 5.6 on another Compute Engine instance in the same project (thematic-acumen-204008).

jerichorivera@percona-support:~/GCE$ gcloud compute instances create mysql1 --tags mysql1 --image-family centos-7 --image-project centos-cloud --machine-type n1-standard-2 --zone us-west1-b --create-disk=size=50GB,type=pd-standard,device-name=sdb --description "MySQL1 on GCP" --metadata-from-file startup-script=compute-instance-deploy.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/mysql1].
NAME    ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
mysql1  us-west1-b  n1-standard-2               10.138.0.3   35.233.187.253  RUNNING

I installed Percona Server 5.6 and pmm-client, then added services. Take note that since the PMM Server and the MySQL server are in the same project and the same VPC network, we can connect directly through the INTERNAL_IP 10.138.0.2; otherwise, use the EXTERNAL_IP 35.233.216.225.

[root@mysql1 jerichorivera]# pmm-admin config --server 10.138.0.2
OK, PMM server is alive.
PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3
[root@mysql1 jerichorivera]#
[root@mysql1 jerichorivera]# pmm-admin check-network
PMM Network Status
Server Address | 10.138.0.2
Client Address | 10.138.0.3
* System Time
NTP Server (0.pool.ntp.org)         | 2018-05-22 06:45:47 +0000 UTC
PMM Server                          | 2018-05-22 06:45:47 +0000 GMT
PMM Client                          | 2018-05-22 06:45:47 +0000 UTC
PMM Server Time Drift               | OK
PMM Client Time Drift               | OK
PMM Client to PMM Server Time Drift | OK
* Connection: Client --> Server
-------------------- -------
SERVER SERVICE       STATUS
-------------------- -------
Consul API           OK
Prometheus API       OK
Query Analytics API  OK
Connection duration | 408.185µs
Request duration    | 6.810709ms
Full round trip     | 7.218894ms
No monitoring registered for this node identified as 'mysql1'.
[root@mysql1 jerichorivera]# pmm-admin add mysql --create-user
[linux:metrics] OK, now monitoring this system.
[mysql:metrics] OK, now monitoring MySQL metrics using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[mysql:queries] OK, now monitoring MySQL queries from slowlog using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[root@mysql1 jerichorivera]# pmm-admin list
pmm-admin 1.10.0
PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3
Service Manager | linux-systemd
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
SERVICE TYPE   NAME    LOCAL PORT  RUNNING  DATA SOURCE                                      OPTIONS
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
mysql:queries  mysql1  -           YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)   query_source=slowlog, query_examples=true
linux:metrics  mysql1  42000       YES      -
mysql:metrics  mysql1  42002       YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)

Lastly, in case you need to delete the PMM Server instance, just execute the delete command below to completely remove the instance and the attached disk. Be aware that you may remove the boot disk and retain the attached persistent disk if you prefer.

jerichorivera@percona-support:~/GCE$ gcloud compute instances delete pmm-server
The following instances will be deleted. Any attached disks configured to be auto-deleted will be deleted unless they are attached to any other instances or the `--keep-disks` flag is given and specifies them for keeping. Deleting a disk is irreversible and any data on the disk will be lost.
 - [pmm-server] in [us-west1-b]
Do you want to continue (Y/n)? y
Deleted [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].

The other option is to install PMM on Google Container Engine, which was explained by Manjot Singh in his blog post.

The post Setting up PMM on Google Compute Engine in 15 minutes or less appeared first on Percona Database Performance Blog.

Categories: Web Technologies

MariaDB TX 3.0 – First to Deliver on the Promise of Enterprise Open Source

Thu, 05/24/2018 - 12:28
By Shane Johnson, Thu, 05/24/2018 - 15:28

It’s one thing to be open source. It’s another to be enterprise open source.

That begs the question: What does it mean to be enterprise open source?

You have to be 100% committed to the open source community – collaboration, transparency and innovation. You have to be 100% committed to customer success – providing the enterprise features and reliability needed to support mission-critical applications.

However, being committed is not enough. You have to be a leader. You have to challenge proprietary vendors, and that includes vendors who limit open source projects with proprietary extensions and/or plugins.

MariaDB TX 3.0 sets the standard for enterprise open source databases, and as the leader, we’re challenging Oracle, Microsoft and IBM with it. Here’s how.

Oracle Database compatibility

MariaDB TX 3.0 is the first enterprise open source database with Oracle Database compatibility, including support for stored procedures written in PL/SQL. Until now, if you needed Oracle Database compatibility, you needed a proprietary database (IBM DB2 or EnterpriseDB). Today, you can run those Oracle PL/SQL stored procedures on MariaDB TX!

Temporal features

MariaDB TX 3.0 is the first enterprise open source database with temporal features, including built-in system-versioned tables and standard temporal query syntax. Until now, if you needed the functional equivalent of Oracle Flashback queries or Microsoft SQL Server temporal tables, you needed a proprietary database. Today, you can run those temporal queries on MariaDB TX.

Faster schema changes

MariaDB TX 3.0 is the first enterprise open source database to support invisible columns (like Oracle Database), compressed columns and the ability to add columns (with or without default values) to a table without causing all of the rows to be updated (i.e., a table rebuild) – something you can’t do in MySQL or Postgres. Simply said, life is easier with MariaDB TX.
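A small illustration of an invisible column (the table and column names are made up): the column is excluded from SELECT * and from INSERTs that do not name it, but remains available when referenced explicitly.

CREATE TABLE accounts (
  id     INT PRIMARY KEY,
  email  VARCHAR(100),
  secret VARCHAR(64) INVISIBLE    -- hidden from SELECT * and unnamed INSERTs
);

INSERT INTO accounts (id, email) VALUES (1, 'a@example.com');
SELECT * FROM accounts;             -- returns id, email only
SELECT id, secret FROM accounts;    -- the column is still accessible by name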

Purpose-built storage

MariaDB TX 3.0 is the first enterprise open source database to support a variety of workloads, all with the same level of performance, by leveraging multiple, purpose-built storage engines: the default storage engine for mixed or read-mostly workloads (InnoDB), an SSD-optimized storage engine for write-intensive workloads (MyRocks) and a distributed storage engine for workloads requiring extreme scalability and/or concurrency (Spider).

While general-purpose databases are limited to supporting one workload really well, MariaDB TX can support a variety of workloads very well – and at the same time. Would you need a NoSQL database if your relational database supported JSON and distributed storage (i.e., scale out)?

You could deploy multiple specialized databases, but wouldn’t you rather standardize on a single database? Well, you can with MariaDB TX.

Data protection

MariaDB TX 3.0 is the first enterprise open source database to support anonymization via complete data obfuscation and pseudonymization via full or partial data masking, necessary features if you want to comply with the EU GDPR and don’t want your company featured in tomorrow's headlines as the security breach of the month. If you’re using Oracle Database, these features are part of Oracle Data Redaction, and require Oracle Advanced Security – an extra $7,500 per core. MariaDB TX database administrators sleep well at night.

Conclusion

We created MariaDB TX 3.0 so you can migrate from Oracle/Microsoft/IBM to the enterprise open source database you want without sacrificing the enterprise features you need. Ready?


Categories: Web Technologies

Comparing MySQL to Vertica Replication under MemCloud, AWS and Bare Metal

Thu, 05/24/2018 - 11:23

Back in December, I did a detailed analysis for getting data into Vertica from MySQL using Tungsten Replicator, all within the Kodiak MemCloud.

I got some good numbers towards the end – 1.9 million rows/minute into Vertica. I did this using a standard replicator deployment, plus some tweaks to the Vertica environment. In particular:

  • Integer hash for a partition for both the staging and base tables
  • Some tweaks to the queries to ensure that we used the partitions in the most efficient manner
  • Optimized the batching within the applier to hit the right numbers for the transaction counts

That last one is a bit of a cheat because in a real-world situation it’s much harder to be able to identify those transaction sizes and row counts, but for testing, we’re trying to get the best performance!

Next what I wanted to do was set up some bare metal and AWS servers that were of an equivalent configuration and see what I could do to repeat and emulate the tests and see what comparable performance we could get.

How I Load Masses of Data

Before I dip into that, however, I thought it would be worth seeing how I generate the information in the first place. With big data testing (mainly when trying to simulate the data that ultimately gets written into your analytics target) the primary concern is one of reproducing the quantity as well as the variety of the data.

It’s application dependent, but for some analytics tasks the inserts are quite high and the updates/deletes relatively low. So I’ve written a test script that generates up to a million rows of data, split to be around 65% inserts, 25% updates and 10% deletes.

I can tweak that of course, but I’ve found it gives a good spread of data. I can also configure whether that happens in one transaction or each row is a transaction of its own. That all gets dumped into an SQL file. A separate wrapper script and tool then load that information into MySQL, either using redirection within the MySQL command line tool or through a different lightweight C++ client I wrote.

The data itself is light, two columns, an auto-incrementing integer ID and a random string. I’m checking for row inserts here, not data sizes.
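The generator script itself is not shown in the post, but a hypothetical MySQL shape for such a table and its insert/update/delete mix might look like this (names and values are illustrative only):

-- Two-column test table: auto-incrementing ID plus a random string payload
CREATE TABLE load_test (
  id      BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload VARCHAR(64)
);

-- The generated SQL file is mostly inserts, with some updates and deletes mixed in
INSERT INTO load_test (payload) VALUES (SUBSTRING(MD5(RAND()), 1, 32));
UPDATE load_test SET payload = SUBSTRING(MD5(RAND()), 1, 32) WHERE id = 12345;
DELETE FROM load_test WHERE id = 54321;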

So, to summarise:

  • Up to 1 million rows (although this is configurable)
  • Single or multiple transactions
  • Single schema/table or numerous schemas/tables
  • Concurrent, multi-threaded inserts

The fundamental result here is that I can predict the number of transactions and rows, which is really important when you are trying to measure rows-per-time period to use as benchmarks with replication because I can also start and stop replication on the transaction count boundaries to get precise performance.

For the main testing that I use for the performance results, what I do is run a multi-threaded, simultaneous insert into 20 schemas/tables and repeat it 40 times with a transaction/row count size of 10,000. That results in 8,000,000 rows of data, first being inserted/updated/deleted into MySQL, then extracted, replicated, and applied to (in this case) Vertica.

For the testing, I then use the start/stop of sequence number controls in the replicator and then monitor the time I start and stop from those numbers.

This gives me stats within about 2 seconds of the probable true result, which over a period of 15-20 minutes is tolerable.

It also means I can do testing in two ways:

  • Start the load test into MySQL and test for completion into Vertica

or

  • Load the data into THL, and test just the target applier (from network transfer to target DB)

For the real-world performance figures I use the full end-to-end (MySQL insert and target apply) testing.

Test Environments

I tested three separate environments, the original MemCloud hosted servers, some bare metal hosts and AWS EC2 hosts:

              MemCloud   Bare Metal   AWS
Cores         4          12           16
Threads       4          12           16
RAM (GB)      64         192          122
Disk          SSD        SSD          SSD
Networking    10GB       10GB         25GB

It’s always difficult to perfectly match the environments across virtual and bare metal, particularly in AWS, but I did my best.

Results

I could go into all sorts of detailed results here, but I think it’s better to simply look at the final numbers because that is what really matters:

Rows Per Minute:

MemCloud      1900000
Bare Metal    678222
AWS           492893

Now what’s interesting here is that MemCloud is significantly faster, even though it has fewer CPUs and lower RAM. It’s perhaps even more surprising to note that MemCloud is more than 4.5 times faster than AWS, even on I/O-optimized hosts (I/O is probably the limiting factor in the Vertica applies).

 

Even against fairly hefty bare metal hosts, MemCloud is almost 3x faster!

I’ve checked in with the engineers about the Bare Metal numbers, which seem striking, especially considering these are really beefy hosts, but it may simply be that the SSD interface and I/O become the limiting factor. Within Vertica, when writing data with the replicator a few things are happening: we write THL to disk, write CSV to disk, read the CSV from disk into a staging table, then merge the base and staging tables, which involves shuffling a lot of blocks in memory (and ultimately disk) around. It may simply be that the high-memory focused environment of MemCloud allows for very much faster performance all round.

I also looked at the performance as I started to increase the number of MySQL sources feeding into the systems, this is to separate schemas, rather than the single, unified schema/table within Vertica.

Sources          1         1         2         3         4         5
Target Schemas   20        40        40        60        80        100
Rows Written     8000000   8000000   16000000  24000000  32000000  40000000
Memcloud         1900057   1972000   3617042   5531460   7353982   9056410
Bare Metal       678222    635753    1051790   1874454   2309055   3168275
AWS              492893    402047    615856    –         –         –

What is significant here is that with MemCloud I noticed a much more linear ramp-up in performance than I saw with Bare Metal or AWS. In fact, with AWS I couldn’t even remotely achieve the same levels, and by the time I got to three simultaneous sources I got such wildly random results between executions that I gave up trying to test. From experience, I suspect this is due to the networking and IOPS environment, even on a storage-optimized host.

The graph version shows the differences more clearly:

 

Bottom line, MemCloud seems really quick, and the statement I made in the original testing still seems to be valid:

The whole thing went so quick I thought it hadn’t executed at all!

Categories: Web Technologies

Proxy MySQL :: HAproxy || ProxySQL & KeepAlived

Wed, 05/23/2018 - 17:05
So when it comes to routing your MySQL traffic, several options exist.

I have seen HAproxy used more often with clients; it is pretty straightforward to set up, and Percona has published an example for those interested.
Personally I like ProxySQL. Percona has a few blogs on this as well, and Percona also has its own ProxySQL version available.
I was thinking I would write up some examples, but overall Percona has explained it all very well. I do not want to take anything away from those posts; instead I want to point out that a lot of good information is available via them. So rather than rewriting what has already been written, I will collect that information here for those interested.
First, compare and decide for yourself what you need and want. The comparison is of course going to be biased towards ProxySQL, but it gives you an overall scope to consider. If you have a cluster or master-to-master setup and you do not care which server the writes vs. reads go to, as long as you have a connection, then HAproxy is likely a simple, fast setup for you.
The bonus with ProxySQL is the ability to sort traffic in a weighted fashion, easily. So you can have writes go to node 1, and selects pull from node 2 and node 3. Yes, it can be done with HAproxy, but there you have to instruct the application accordingly; in ProxySQL it is handled by your query rules, as in the sketch below.
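A minimal sketch of read/write splitting via the ProxySQL admin interface (the hostgroup numbers, hostnames and rule patterns below are illustrative, not taken from any particular deployment):

-- Connect to the ProxySQL admin interface (port 6032 by default)
-- Writer hostgroup 10, reader hostgroup 20
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (10, 'node1', 3306, 1000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (20, 'node2', 3306, 1000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (20, 'node3', 3306, 1000);

-- Route SELECT ... FOR UPDATE to the writer, all other SELECTs to the readers
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (1, 1, '^SELECT .* FOR UPDATE', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (2, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;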
Now the obvious question here: OK so how do you keep ProxySQL from becoming the single point of failure?  
You can invest in a robust load balancer and so on, and toss hardware at it... or make it easy on yourself, support open source and use Keepalived. This is very easy to set up and it is all well documented. If you have ever dealt with Lua and mysql-proxy, ProxySQL and Keepalived should be very simple for you. If you still want mysql-proxy for some reason: https://launchpad.net/mysql-proxy
Regardless of whether you choose HAproxy, ProxySQL or another solution, you need to ensure that you do not replace one single point of failure with another, and Keepalived is great for that. So there is little reason not to do this if you are using a proxy.
So a few more things on ProxySQL. 




Categories: Web Technologies

A Look into MariaDB Auditing for GDPR Compliance

Wed, 05/23/2018 - 15:27
By maria-luisaraviol, Wed, 05/23/2018 - 18:27

When we talk about database auditing, what we focus on is tracking the use of database records and monitoring each operation on the data.

The goal of auditing activities is to provide a clear and reliable answer to the typical four W questions: Who accessed the database, When did this happen, What was touched, and Where did this access come from. Auditing should also help the security team answer the fifth W: Why did this happen?

Auditing is also a very important task when we want to monitor the database activity to collect information that can help to increase the database performance or debug the application.

When we talk about security, accountability and regulatory compliance, database auditing plays an even more critical role.

An auditing activity is key in achieving accountability as it allows us to investigate malicious or suspicious database activities. It’s used to help DBAs detect excessive user privileges or suspicious activities coming from specific connections.

In particular, the new European Union General Data Protection Regulation (GDPR) says that it will be important to be able to provide detail of changes to personal data to demonstrate that data protection and security procedures are effective and are being followed. Furthermore, we must ensure that data is only accessed by appropriate parties. This means that we need to be able to say who changed an item of data and when they changed it.

It’s broader than GDPR. HIPAA (Health Insurance Portability and Accountability Act) requires healthcare providers to deliver audit trails about anyone and everyone who touches any data in their records. This is down to the row and record level.

Furthermore, if a data breach occurs, organizations must disclose full information on these events to their local data protection authority (DPA) and all customers concerned with the data breach within 72 hours so they can respond accordingly.

MariaDB Audit Plugin

For all these reasons MariaDB has included the Audit Plugin since version 10.0.10 of MariaDB Server. The purpose of the MariaDB Audit Plugin is to log the server's activity: for each client session, it records who connected to the server (i.e., user name and host), what queries were executed, which tables were accessed and which server variables were changed.

Events that are logged by the MariaDB Audit Plugin are grouped into three different types: CONNECT, QUERY and TABLE events.

There are actually more types of events to allow fine-tuning of the audit, and focus on just the events and statements relevant for a specific organisation. These are detailed on the Log Settings Page.

There are also several system variables to configure the MariaDB Audit Plugin. The Server Audit Status Variables page includes all the variables relevant to reviewing the status of auditing. The overall monitoring should include an alert to verify that auditing is active.

This information is stored in a rotating log file or it may be sent to the local syslog.

For security reasons, it's sometimes recommended to use the system logs instead of a local file: in this case the value of server_audit_output_type needs to be set to syslog.

It is also possible to set up even more advanced and secure solutions such as using a remote syslog service (Read more about the MariaDB Audit Plugin and setting up a rsyslog).
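For reference, a minimal sketch of enabling the plugin dynamically (the event list and output type are choices you would adapt to your own policy):

-- Load the audit plugin and configure what gets logged
INSTALL SONAME 'server_audit';
SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';
SET GLOBAL server_audit_output_type = 'file';   -- or 'syslog'; set this before enabling logging
SET GLOBAL server_audit_logging = ON;

-- Verify the configuration and that auditing is active
SHOW GLOBAL VARIABLES LIKE 'server_audit%';
SHOW GLOBAL STATUS LIKE 'server_audit%';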

What does the MariaDB audit log file look like?

The audit log file is a set of rows in plain text format, written as a list of comma-separated fields to a file. The general format for the logging to the plugin's own file is defined like the following:

[timestamp],[serverhost],[username],[host],[connectionid], [queryid],[operation],[database],[object],[retcode]

If the log file is sent to syslog the format is slightly different as the syslog has its own standard format (refer to the MariaDB Audit Plugin Log Format page for the details).

A typical MariaDB Audit plugin log file example is:

# tail mlr_Test_audit.log
20180421 09:22:38,mlr_Test,root,localhost,22,0,CONNECT,,,0
20180421 09:22:42,mlr_Test,root,localhost,22,35,QUERY,,'CREATE USER IF NOT EXISTS \'mlr\'@\'%\' IDENTIFIED WITH \'mysql_native_password\' AS \'*F44445443BB93ED07F5FAB7744B2FCE47021238F\'',0
20180421 09:22:42,mlr_Test,root,localhost,22,36,QUERY,,'drop user if exists mlr',0
20180421 09:22:45,mlr_Test,root,localhost,22,0,DISCONNECT,,,0
20180421 09:25:29,mlr_Test,root,localhost,20,0,FAILED_CONNECT,,,1045
20180421 09:25:44,mlr_Test,root,localhost,43,133,WRITE,employees,salaries,
20180421 09:25:44,mlr_Test,root,localhost,43,133,QUERY,employees,'DELETE FROM salaries LIMIT 100',0

Audit Files Analysis

Log files are a great source of information, but only if you have a system in place to consistently review the data. The way you shape your application and database environment is also important. In order to get useful auditing, for example, it’s recommended that every human user has their own account.

Furthermore, from the application standpoint, if applications are not using native DB accounts but application-based accounts, each application accessing the same server should have its own "application user".

As we said before, you have to use the information collected and analyse it on a regular basis and, when needed, take immediate action based on those logged events. However, even small environments can generate more information than can reasonably be analysed manually.

Starting with the most recent release, Monyog 8.5, the monitoring tool that is included with the MariaDB TX and MariaDB AX subscriptions added a very interesting feature for MariaDB: the Audit Log.

This feature parses the audit log maintained by MariaDB Server and displays the content in a clean tabular format.

Monyog accesses the audit log file, the same way it does for other MariaDB log files, including the Slow Query, General Query and Error log.

Through the Monyog interface you can select the server and the time frame for which you want to see the audit log. Then, clicking on “SHOW AUDIT LOG” fetches the contents of the log. The limit on the number of rows that can be fetched in one time frame is 10,000.

The snapshot above gives you a quick percentage summary of the audit log, covering categories like Failed Logins, Failed Events, Schema Changes, Data Changes and Stored Procedures. All of these legends are clickable and show the corresponding audit log entries when clicked.

Furthermore, you can use the filter option to fetch audit log entries based on Username, Host, Operation, Database and Table/Query.


Categories: Web Technologies

Percona Monitoring and Management 1.11.0 Is Now Available

Wed, 05/23/2018 - 13:37

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

In PMM Release 1.11.0, we deliver the following changes:

  • Configurable MySQL Slow Log Rotation – enable or disable rotation, and specify how many files to keep on disk
  • Predictable Graphs – we’ve updated our formulas to use aggregation functions over time for more reliable graphs
  • MySQL Exporter Parsing of my.cnf – we’ve improved how we read my.cnf
  • Annotation improvements – passing multiple strings results in single annotation being written

The release includes 1 new feature & improvement, and 9 bug fixes.

MySQL Slow Log Rotation Improvements

We spent some time this release going over how we handle MySQL’s Slow Log rotation logic. Query Analytics requires that slow logging be enabled (either to file, or to PERFORMANCE_SCHEMA) and we found that users of Percona Server for MySQL overwhelmingly choose logging to a file in order to take advantage of log_slow_verbosity which provides enhanced InnoDB Usage information. However, the challenge with MySQL’s Slow Log is that it is very verbose and thus the number one concern is disk space. PMM strives to do no harm and so MySQL Slow Log Rotation was a natural fit, but until this release we were very strict and hadn’t enabled any configuration of these parameters.

Percona Server for MySQL Users have long known about Slow Query Log Rotation and Expiration, but until now had no way of using the in-built Percona Server for MySQL feature while ensuring that PMM wasn’t missing any queries from the Slow Log during file rotation. Or perhaps your use case is that you want to do Slow Log Rotation using logrotate or some other facility. Today with Release 1.11 this is now possible!

We’ve made two significant changes:

  1. You can now specify the number of Slow Log files to remain on disk, and let PMM handle deleting the oldest files first. Default remains unchanged – 1 Slow Log to remain on disk.
  2. Slow Log rotation can now be disabled, for example if you want to manage rotation using logrotate or Percona Server for MySQL Slow Query Log Rotation and Expiration. Default remains unchanged – Slow Log Rotation is ON.

Number of Slow Logs Retained on Disk

Slow Logs Rotation – On or Off

You specify each of these two new controls when setting up the MySQL service. The following example specifies that 5 Slow Log files should remain on disk:

pmm-admin add mysql ... --retain-slow-logs=5

While the following example specifies that Slow Log rotation is to be disabled (flag value of false), with the assumption that you will perform your own Slow Log Rotation:

pmm-admin add mysql ... --slow-log-rotation=false

We don’t currently support modifying option parameters for an existing service definition. This means you must remove, then re-add the service and include the new options.
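For example, a remove-and-re-add cycle might look like the following sketch (the service name is illustrative and the connection options are elided, as in the examples above):

pmm-admin remove mysql mysql-prod
pmm-admin add mysql ... --retain-slow-logs=5 --slow-log-rotation=false mysql-prod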

We’re including a logrotate script in this post to get you started, and it is designed to keep 30 copies of Slow Logs at 1GB each. Note that you’ll need to update the Slow Log location, and ensure that a MySQL user account with the SUPER and RELOAD privileges is used for the script to execute successfully.

Example logrotate configuration:

/var/mysql/mysql-slow.log {
    nocompress
    create 660 mysql mysql
    size 1G
    dateext
    missingok
    notifempty
    sharedscripts
    postrotate
       /bin/mysql -e 'SELECT @@global.long_query_time INTO @LQT_SAVE; SET GLOBAL long_query_time=2000; SELECT SLEEP(2); FLUSH SLOW LOGS; SELECT SLEEP(2); SET GLOBAL long_query_time=@LQT_SAVE;'
    endscript
    rotate 30
}

Predictable Graphs

We’ve updated the logic on four dashboards to better handle predictability and also to allow zooming to look at shorter time ranges.  For example, refreshing PXC/Galera graphs prior to 1.11 led to graphs spiking at different points during the metric series. We’ve reviewed each of these graphs and their corresponding queries and added in <aggregation>_over_time() functions so that graphs display a consistent view of the metric series. This improves your ability to drill in on the dashboards so that no matter how short your time range, you will still observe the same spikes and troughs in your metric series. The four dashboards affected by this improvement are:

  • Home Dashboard
  • PXC/Galera Graphs Dashboard
  • MySQL Overview Dashboard
  • MySQL InnoDB Metrics Dashboard

MySQL Exporter Parsing of my.cnf

In earlier releases, the MySQL Exporter expected only key=value type flags. It would ignore options without values (e.g. disable-auto-rehash) and could sometimes read the wrong section of the my.cnf file. We’ve updated the parsing engine to be more MySQL-compatible.
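To illustrate the kind of file that previously tripped up the exporter, consider a hypothetical my.cnf fragment where one option carries no value:

[client]
user     = pmm
password = secret

[mysql]
disable-auto-rehash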

Annotation improvements

Annotations permit the display of an event on all dashboards in PMM. Users reported that passing more than one string to pmm-admin annotate would generate an error, so we updated the parsing logic so that all strings passed during annotation creation generate a single annotation event. Previously you needed to enclose your text in quotes so that it would be parsed as a single string.
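In practice the difference looks like this (the annotation text is made up):

# Before 1.11, multi-word text had to be quoted to produce one annotation:
pmm-admin annotate "Deployed release 42 to production"

# From 1.11 on, unquoted words are combined into a single annotation:
pmm-admin annotate Deployed release 42 to production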

Issues in this release

New Features & Improvements
  • PMM-2432 – Configurable MySQL Slow Log File Rotation
Bug fixes
  • PMM-1187 – Graphs breaks at tight resolution 
  • PMM-2362 – Explain is a part of query 
  • PMM-2399 – RPM for pmm-server is missing some files 
  • PMM-2407 – Menu items are not visible on PMM QAN dashboard 
  • PMM-2469 – Parsing of a valid my.cnf can break the mysqld_exporter 
  • PMM-2479 – PXC/Galera Cluster Overview dashboard: typo in metric names 
  • PMM-2484 – PXC/Galera Graphs display unpredictable results each time they are refreshed 
  • PMM-2503 – Wrong InnoDB Adaptive Hash Index Statistics 
  • PMM-2513 – QAN-agent always changes max_slowlog_size to 0 
  • PMM-2514 – pmm-admin annotate help – fix typos
  • PMM-2515 – pmm-admin annotate – more than 1 annotation 
How to get PMM

PMM is available for installation using three methods:

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

The post Percona Monitoring and Management 1.11.0 Is Now Available appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Configuring MySQL in a Docker Container

Wed, 05/23/2018 - 06:16

In recent weeks I’ve been focusing on Docker in order to get a much better understanding of the containerized world that is materializing in front of us. Containers aren’t just for stateless applications anymore and we’re seeing more cases where MySQL and other databases are being launched in a containerized fashion, so it’s important to know how to configure your MySQL container!

On Docker Hub, you will see that the documented option for this is a volume mount from the Docker host to the container at /etc/mysql/conf.d. The problem is that the container image you’re using may not have an !includedir directive referencing the conf.d directory, as is the case with the latest MySQL Community image, as you will see below.

[root@centos7-1 ~]# docker run --memory-swappiness=1 --memory=2G -p 3306:3306 --name=mysql1 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:5.7.22
[root@centos7-1 ~]# docker exec -it mysql1 cat /etc/my.cnf | grep -i include
[root@centos7-1 ~]#

This means that if you use the prescribed method of placing a config file in /etc/mysql/conf.d in the container, it’s not going to be read and will have no impact on the configuration of the underlying MySQL instance.

You might think that the next step would be to attach to the container, install a text editor, and add the !includedir directive to the my.cnf file, but this goes against the Docker / containerization philosophy. You should be able to just launch a container with the appropriate arguments and be off to fight the universe’s data battles. So in this case, I would propose the following workaround:

Instead of using /etc/mysql/conf.d, we can look at the MySQL option file reference and see that there is more than one location where a config file can be placed. The next place MySQL looks for configuration is /etc/mysql/my.cnf, and if we check our recently deployed container, we’ll see that /etc/mysql isn’t used at all.

[root@centos7-1 ~]# docker exec -it mysql1 ls /etc/mysql
ls: cannot access /etc/mysql: No such file or directory

We can mount a volume with a my.cnf file to this directory on the container and it should pick up whatever configuration we supply, as demonstrated below.

[root@centos7-1 ~]# docker stop mysql1
mysql1
[root@centos7-1 ~]# docker rm mysql1
mysql1
[root@centos7-1 ~]# cat /mysqlcnf/mysql1/my.cnf
[mysqld]
server-id=123
[root@centos7-1 ~]# docker run --memory-swappiness=1 --memory=2G -p 3306:3306 -v /mysqlcnf/mysql1:/etc/mysql --name=mysql1 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:5.7.22
d5d980ee01d5b4707f3a7ef5dd30df1d780cdfa35b14ad22ff436fb02560be1b
[root@centos7-1 ~]# docker exec -it mysql1 cat /etc/mysql/my.cnf
[mysqld]
server-id=123
[root@centos7-1 ~]# docker exec -it mysql1 mysql -u root -ppassword -e "show global variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 123   |
+---------------+-------+
[root@centos7-1 ~]#

Another option for doing this is overriding the my.cnf file in /etc/ with our own version. You can do this with a mount as noted in the mysql reference for Persisting Data and Configuration Changes, but in that case you will be overwriting other items that might be included in the my.cnf as part of the docker build. This may or may not be your intention depending on how you want to deploy your containers.
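As a sketch of that alternative (paths and names are illustrative), you would bind-mount your own file directly over the image's /etc/my.cnf, bearing in mind that this replaces whatever the image shipped in that file:

docker run --memory-swappiness=1 --memory=2G -p 3306:3306 \
  -v /mysqlcnf/mysql1/my.cnf:/etc/my.cnf \
  --name=mysql1 -e MYSQL_ROOT_PASSWORD=password \
  -d mysql/mysql-server:5.7.22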

Conclusion

Be aware of the container image you’re using and what configuration options are available to you. Some forks will include a !includedir reference to /etc/mysql/conf.d, some won’t. You may want to overwrite the entire my.cnf file by volume mounting to a copy of the my.cnf on the docker host. Or you may just want to supplement the configuration with a second configuration file in /etc/mysql. The important things are to test, to make sure your configuration is properly read by the mysql container, and to establish confidence in the configuration method used before deploying in your environment.

Categories: Web Technologies

Happy Birthday MySQL 1995

Wed, 05/23/2018 - 05:32
Happy Birthday, MySQL! It turned 23 today!
Categories: Web Technologies

When Your JSON Key is Numeric

Tue, 05/22/2018 - 16:01
There was an interesting question on Stackoverflow.com about extracting values from a JSON data type column in a MySQL database. What caught my eye was that the keys in the key/value pairs were numeric. In particular, the author of the question only wanted values for the key named 74. The sample data was fairly simple.

{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }

I thought SELECT JSON_EXTRACT(column, '$.74') FROM table; should work but it did not. There was a complaint about an invalid path expression.

It turns out that you need to quote the numeric key in the path expression, making the second argument '$."74"': SELECT JSON_EXTRACT(column, '$."74"') FROM table;
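Here is a small worked example (the table and column names are made up) showing both the failing and the working path expressions:

CREATE TABLE fruit (doc JSON);
INSERT INTO fruit VALUES ('{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }');

SELECT JSON_EXTRACT(doc, '$.74') FROM fruit;    -- fails with an invalid path expression error
SELECT JSON_EXTRACT(doc, '$."74"') FROM fruit;  -- returns "Kiwi"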

File this under something to remember for later. :-)
Categories: Web Technologies

Percona Toolkit 3.0.10 Is Now Available

Tue, 05/22/2018 - 13:32

Percona announces the release of Percona Toolkit 3.0.10 on May 22, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New Features:
  • PT-131: pt-table-checksum disables the QRT plugin
    The Query Response Time Plugin provides a tool for analyzing information by counting and displaying the number of queries according to the length of time they took to execute. This feature adds a new flag, --disable-qrt-plugin, that leverages Percona Server for MySQL’s new ability to disable the QRT plugin at the session level. The advantage of enabling this Toolkit feature is that the QRT metrics are not impacted by the work that pt-table-checksum performs; they report only the work your application is generating on MySQL, unclouded by the activities of pt-table-checksum.
  • PT-118: pt-table-checksum reports the number of rows of difference between master and slave
    We’re adding support for pt-table-checksum to identify the number of row differences between master and slave. Previously you were able to see only the count of chunks that differed between hosts. This is helpful for situations where you believe you can tolerate some measure of row count drift between hosts, but want to be precise in understanding what that row count difference actually is.
Improvements
  • PT-1546: Improved support for MySQL 8 roles
  • PT-1543: The encrypted table status query causes high load over multiple minutes
    Users reported that listing encrypted table status can be very slow. We’ve enabled this functionality via --list-encrypted-tables and set it to disabled by default.
  • PT-1536: Added info about encrypted tablespaces in pt-mysql-summary
    We’ve improved pt-mysql-summary to now include information about encrypted tablespaces. This information is available by using --list-encrypted-tables.
Bug Fixes:
  • PT-1556: pt-table-checksum 3.0.9 does not change binlog_format to statement any more.

pt-show-grants has several known issues when working with MySQL 8 and roles, which Percona aims to address in subsequent Percona Toolkit releases: PT-1560, PT-1559, and PT-1558.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

The post Percona Toolkit 3.0.10 Is Now Available appeared first on Percona Database Performance Blog.

Categories: Web Technologies

New Stats Exposed in Go's database/SQL Package

Tue, 05/22/2018 - 13:18

If you’re someone who keeps up with the Go development cycle, then you’ll know that a couple of weeks ago Go entered its feature-freeze for the Go 1.11 release. One of the changes for this upcoming release that caught my eye was to the database/sql package. Daniel Theophanes contributed a change that introduces several new counters available via the DB.Stats() method.

If you’re not familiar with it, DB.Stats() returns a DBStat structure containing information about the underlying sql.DB that the method is called on. Up to this point, the struct has had a single field, tracking the current number of open connections to the database. Daniel’s patch introduces a number of additional fields though:

  • MaxOpenConnections: The max allowed open connections to the DB, as set by DB.SetMaxOpenConns.
  • InUse: The number of connections actively in-use.
  • Idle: The number of open connections that are currently idle.
  • WaitCount: The total number of times that a goroutine has had to wait for a connection.
  • WaitDuration: The cumulative amount of time that goroutines have spent waiting for a connection.
  • MaxIdleClosed: The number of connections closed according to the limit specified by DB.SetMaxIdleConns.
  • MaxLifetimeClosed: The number of connections closed because they exceeded the duration specified by DB.SetConnMaxLifetime.

Note that of the above fields, WaitCount, WaitDuration, MaxIdleClosed and MaxLifetimeClosed are all counters; that is to say, their values never decrease over the lifetime of the DB object, they only increase over time.

The new stats will be available when Go 1.11 is released, which is projected to happen in August. In the meantime, if you aren’t publishing DBStats metrics in your applications today, you can start adding them and integrating with a metrics collector such as Graphite, Prometheus, or even VividCortex. The call to DB.Stats() is cheap and thread-safe, so it’s fairly easy to spawn another goroutine to call it periodically and forward the data to a metrics collector of your choice.

The new information here makes the DB.Stats() command much more useful for monitoring the behavior of database connections. In particular, as noted by Daniel in the commit message, if you see a high amount of waiting or closed connections it may indicate that you need to tune the settings for your DB object. I’ll be adding new metrics to our applications once we upgrade to Go 1.11, you should add them to yours as well!

Can't get enough Go? Check out our free eBook The Ultimate Guide to Building Database-Driven Apps with Go, our free webinar Developing MySQL Applications with Go, or sharpen your skills with the Go database/sql package tutorial

Ready, set, Go!

Categories: Web Technologies

MySQL 8.0: MVCC of Large Objects in InnoDB

Tue, 05/22/2018 - 02:58

In this article, I’ll explain about the multi version concurrency control (MVCC) of large objects (LOBs) design in the MySQL InnoDB storage engine.  MySQL 8.0 has a new feature that allows users to partially update large objects, including the JSON documents.  …

Categories: Web Technologies

MySQL master discovery methods, part 6: other methods

Tue, 05/22/2018 - 01:39

This is the sixth in a series of posts reviewing methods for MySQL master discovery: the means by which an application connects to the master of a replication tree. Moreover, the means by which, upon master failover, it identifies and connects to the newly promoted master.

These posts are not concerned with the manner by which the replication failure detection and recovery take place. I will share orchestrator specific configuration/advice, and point out where cross DC orchestrator/raft setup plays part in discovery itself, but for the most part any recovery tool such as MHA, replication-manager, severalnines or other, is applicable.

Hard coded configuration deployment

You may use your source/config repo as a master service discovery method of sorts.

The master's identity would be hard coded into your, say, git repo, to be updated and deployed to production upon failover.

This method is simple, and I've seen it used by companies in production. Noteworthy:

  • This requires a dependency of production on source availability.
    • The failover tool would need to have access to your source environment.
  • This requires a dependency of production on build/deploy flow.
    • The failover tool would need to kick build, test, deploy process.
  • Code deployment time can be long.
  • Deployment must take place on all relevant hosts, and cause for a mass refresh/reload.
    • This means interrupting processes that cannot reload their configuration themselves, such as various commonly used scripts.
Synchronous replication

This series of posts is focused on asynchronous replication, but we will do well to point out a few relevant notes on synchronous replication (Galera, XtraDB Cluster, InnoDB Cluster).

  • Synchronous replication can act in single-writer mode or in multi-writer mode.
  • In single writer mode, apps should connect to a particular master.
    • The identity of such master can be achieved by querying the MySQL members of the cluster.
  • In multi-writer mode, apps can connect to any healthy member of the cluster.
    • This still calls for a check: is the member healthy? (a minimal check is sketched after this list)
  • Synchronous replication is not intended to work well cross-DC.
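As referenced in the list above, even in multi-writer mode the application still needs a member health check. A minimal sketch, assuming Galera / XtraDB Cluster and its standard wsrep status variables, could be as simple as:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';       -- expect 'Primary'
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- expect 'Synced'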

The last bullet should perhaps be highlighted. In a cross-DC setup, and for cross-DC failovers, we are back to the same requirements as with asynchronous replication, and the methods illustrated in this series of posts may apply.

  • VIPs make less sense.
  • Proxy-based solutions make a lot of sense.
All posts in this series
Categories: Web Technologies
