
When Your JSON Key is Numeric

Planet MySQL - Tue, 05/22/2018 - 16:01
There was an interesting question on Stackoverflow.com about extracting values from a JSON data type column in a MySQL database. What caught my eye was that the keys for the key/value pairs were numeric. In particular, the author of the question only wanted values for the key named 74. The sample data was fairly simple.

{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }

I thought SELECT JSON_EXTRACT(column, '$.74') FROM table; should work but it did not. There was a complaint about an invalid path expression.

It turns out that you need to quote the key in the second argument to the function, '$."74"', as in SELECT JSON_EXTRACT(column, '$."74"') FROM table;
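For a quick end-to-end check, here is a minimal sketch (the table and column names are made up for illustration):

CREATE TABLE fruit (id INT PRIMARY KEY, doc JSON);
INSERT INTO fruit VALUES (1, '{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }');

-- Fails with an invalid path expression:
-- SELECT JSON_EXTRACT(doc, '$.74') FROM fruit;

-- Works once the numeric key is quoted in the path:
SELECT JSON_EXTRACT(doc, '$."74"') FROM fruit;                 -- "Kiwi" (JSON-quoted)
SELECT JSON_UNQUOTE(JSON_EXTRACT(doc, '$."74"')) FROM fruit;   -- Kiwi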

File this under something to remember for later. :-)

Percona Toolkit 3.0.10 Is Now Available

Planet MySQL - Tue, 05/22/2018 - 13:32

Percona announces the release of Percona Toolkit 3.0.10 on May 22, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New Features:
  • PT-131: pt-table-checksum disables the QRT plugin
    The Query Response Time (QRT) plugin analyzes query performance by counting and displaying the number of queries grouped by the length of time they took to execute. This feature adds a new flag, --disable-qrt-plugin, that leverages Percona Server for MySQL's new ability to disable the QRT plugin at the session level. The advantage of enabling this Toolkit feature is that the QRT metrics are not impacted by the work that pt-table-checksum performs: they report only the work your application is generating on MySQL, unclouded by the activities of pt-table-checksum.
  • PT-118: pt-table-checksum reports the number of rows of difference between master and slave
    We’re adding support for pt-table-checksum to identify the number of row differences between master and slave. Previously you were able to see only the count of chunks that differed between hosts. This is helpful for situations where you believe you can tolerate some measure of row count drift between hosts, but want to be precise in understanding what that row count difference actually is.
Improvements:
  • PT-1546: Improved support for MySQL 8 roles
  • PT-1543: The encrypted table status query causes high load over multiple minutes
    Users reported that listing encrypted table status can be very slow. We've made this functionality optional via --list-encrypted-tables, and it is disabled by default.
  • PT-1536: Added info about encrypted tablespaces in pt-mysql-summary
    We've improved pt-mysql-summary to include information about encrypted tablespaces. This information is available by using --list-encrypted-tables.
Bug Fixes:
  • PT-1556: pt-table-checksum 3.0.9 does not change binlog_format to statement any more.

pt-show-grants has several known issues when working with MySQL 8 and roles, which Percona aims to address in subsequent Percona Toolkit releases: PT-1560, PT-1559, and PT-1558.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.



New Stats Exposed in Go's database/SQL Package

Planet MySQL - Tue, 05/22/2018 - 13:18

If you’re someone who keeps up with the Go development cycle, then you’ll know that a couple of weeks ago Go entered its feature-freeze for the Go 1.11 release. One of the changes for this upcoming release that caught my eye was to the database/sql package. Daniel Theophanes contributed a change that introduces several new counters available via the DB.Stats() method.

If you’re not familiar with it, DB.Stats() returns a DBStat structure containing information about the underlying sql.DB that the method is called on. Up to this point, the struct has had a single field, tracking the current number of open connections to the database. Daniel’s patch introduces a number of additional fields though:

  • MaxOpenConnections: The max allowed open connections to the DB, as set by DB.SetMaxOpenConns.
  • InUse: The number of connections actively in-use.
  • Idle: The number of open connections that are currently idle.
  • WaitCount: The total number of times that a goroutine has had to wait for a connection.
  • WaitDuration: The cumulative amount of time that goroutines have spent waiting for a connection.
  • MaxIdleClosed: The number of connections closed according to the limit specified by DB.SetMaxIdleConns.
  • MaxLifetimeClosed: The number of connections closed because they exceeded the duration specified by DB.SetConnMaxLifetime.

Note that of the above fields, WaitCount, WaitDuration, MaxIdleClosed and MaxLifetimeClosed are all counters; that is to say, their values never decrease over the lifetime of the DB object, they only increase over time.

The new stats will be available when Go 1.11 is released, which is projected for August. In the meantime, if you aren't publishing DBStats metrics in your applications today, you can work on adding them and integrating with a metrics collector such as Graphite, Prometheus, or even VividCortex. The call to DB.Stats() is cheap and thread-safe, so it's fairly easy to spawn another goroutine to call it periodically and forward the data to a metrics collector of your choice.

The new information here makes DB.Stats() much more useful for monitoring the behavior of database connections. In particular, as noted by Daniel in the commit message, if you see a high amount of waiting or closed connections, it may indicate that you need to tune the settings for your DB object. I'll be adding the new metrics to our applications once we upgrade to Go 1.11; you should add them to yours as well!

Can't get enough Go? Check out our free eBook The Ultimate Guide to Building Database-Driven Apps with Go, our free webinar Developing MySQL Applications with Go, or sharpen your skills with the Go database/sql package tutorial.

Ready, set, Go!


MySQL 8.0: MVCC of Large Objects in InnoDB

Planet MySQL - Tue, 05/22/2018 - 02:58

In this article, I'll explain the design of multi-version concurrency control (MVCC) for large objects (LOBs) in the MySQL InnoDB storage engine. MySQL 8.0 has a new feature that allows users to partially update large objects, including JSON documents. …


MySQL master discovery methods, part 6: other methods

Planet MySQL - Tue, 05/22/2018 - 01:39

This is the sixth in a series of posts reviewing methods for MySQL master discovery: the means by which an application connects to the master of a replication tree. Moreover, the means by which, upon master failover, it identifies and connects to the newly promoted master.

These posts are not concerned with the manner by which the replication failure detection and recovery take place. I will share orchestrator specific configuration/advice, and point out where cross DC orchestrator/raft setup plays part in discovery itself, but for the most part any recovery tool such as MHA, replication-manager, severalnines or other, is applicable.

Hard coded configuration deployment

You may use your source/config repo as a master service discovery method of sorts.

The master's identity would be hard coded into your, say, git repo, to be updated and deployed to production upon failover.

This method is simple and I've seen it being used by companies, in production. Noteworthy:

  • This requires a dependency of production on source availability.
    • The failover tool would need to have access to your source environment.
  • This requires a dependency of production on build/deploy flow.
    • The failover tool would need to kick build, test, deploy process.
  • Code deployment time can be long.
  • Deployment must take place on all relevant hosts, and cause for a mass refresh/reload.
    • It must interrupt processes that cannot reload themselves, such as various commonly used scripts.
Synchronous replication

This series of posts is focused on asynchronous replication, but we will do well to point out a few relevant notes on synchronous replication (Galera, XtraDB Cluster, InnoDB Cluster).

  • Synchronous replication can act in single-writer mode or in multi-writer mode.
  • In single writer mode, apps should connect to a particular master.
    • The identity of such master can be achieved by querying the MySQL members of the cluster.
  • In multi-writer mode, apps can connect to any healthy member of the cluster.
    • This still calls for a check: is the member healthy?
  • Synchronous replication is not intended to work well cross-DC.

The last bullet should perhaps be highlighted. In a cross-DC setup, and for cross-DC failovers, we are back to same requirements as with asynchronous replication, and the methods illustrated in this series of posts may apply.

  • VIPs make less sense.
  • Proxy-based solutions make a lot of sense.

Replication Test Coverage in MySQL 8.0

Planet MySQL - Tue, 05/22/2018 - 00:15

In the Replication QA team, we have been continuing to improve test coverage for Replication and Group Replication primarily to ensure that we support the newest and latest offerings of MySQL Server 8.0 such as Generated Columns, Set Persist, User Roles, and User Management DDLs that are now atomic.…


Webinar Wed, 5/23: Troubleshooting MySQL Concurrency Issues with Load Testing Tools

Planet MySQL - Mon, 05/21/2018 - 11:41

Please join Percona’s Principal Support Escalation Specialist, Sveta Smirnova, as she presents Troubleshooting MySQL Concurrency Issues with Load Testing Tools on Wednesday, May 23, 2018 at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).


Normally, we use benchmarking tools when we are developing applications. Once applications are deployed, benchmark tests are usually too late to help.

This webinar doesn’t cover actual benchmarks, but it does look at how you can use benchmarking tools for troubleshooting. When you need to repeat a situation caused by concurrent client execution, they can be your best option. These types of issues include all kinds of locking and performance issues, along with stalls and crashes.

In this webinar Sveta will cover some of the main tools she uses, such as (but not limited to) SysBench and mysqlslap. She will show how to use the tools’ standard options while working with specific custom problems, and how to script them to develop test cases that are as close to real life scenarios as possible.

Register for the webinar.

Sveta Smirnova, Principal Support Escalation Specialist

Sveta joined Percona in 2015. Her main professional interests are problem solving, working with tricky issues and bugs, finding patterns that can quickly solve typical issues, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB (later Sun, then Oracle). She is the author of the book "MySQL Troubleshooting" and of the JSON UDF functions for MySQL.



Capturing Per-Process Metrics with Percona Monitoring and Management (PMM)

Planet MySQL - Mon, 05/21/2018 - 08:36

In this blog post, I will show you how to use Percona Monitoring and Management (PMM) to capture per-process metrics in five minutes or less.

While Percona Monitoring and Management (PMM) captures a lot of host metrics, it currently falls short of providing per-process information, such as which particular process uses a lot of CPU resources, causes disk IO, or consumes a lot of memory.

In our database performance optimization and troubleshooting practice, this information has proven quite useful in many cases: batch jobs taking much more resources than developers would estimate and misconfigured Percona XtraBackup or Percona Toolkit are among the most common offenders.

Per-process metrics information can also be very helpful when troubleshooting database software memory leaks or memory fragmentation.

You don't know at the outset which processes will cause you problems, so it is important to capture information about all of the processes (or to specifically exclude the processes you do not want to capture information about) rather than capture information about a selected few.

While capturing such helpful information is not available in PMM out of the box (yet), you can easily achieve it using PMM’s External Exporter support and the excellent Prometheus Process Exporter by Nick Cabatoff.

These instructions are for Debian/Ubuntu Linux distributions, but they should work with RedHat/CentOS based versions as well; just use the RPM package instead of the DEB.

1: Download the process exporter packages from GitHub:

wget https://github.com/ncabatoff/process-exporter/releases/download/v0.2.11/process-exporter_0.2.11_linux_amd64.deb

2: Install the package

(Note: the file will be different depending on the platform and current release.)

dpkg -i process-exporter_0.2.11_linux_amd64.deb

3: Run the Exporter

service process-exporter start

4: Register Exporter with Percona Monitoring and Management

Assuming the current node is already monitored by PMM you just need one command:

pmm-admin add external:service processes-my-host --service-port=9256 --interval=10s

This captures process metrics every 10 seconds (adjust interval if desired).

Important note: due to some internal limitations, you need to use a different service name (“processes-my-host”)  for each host. I suggest just adding the hostname to the descriptive name “processes” for simplicity.

5: Get Matching Dashboard from Grafana.com

While you can browse the data captured by the Advanced Data Exploration Dashboard, it is not any fun. I created a PMM-style dashboard and published it on Grafana.com. I based it on Nick’s original dashboard.

To add this dashboard to your PMM Server, click Dashboard Search on your PMM Server.

From there, click on “Import Dashboard”. Use 6033 as the Grafana.com Dashboard ID.

6: You’re done!

You should have data flowing, and you should be able to see the data on the graphs.

In this example, I have pt-query-digest (shown as Perl) parsing the log file and pushing the MySQL server out of memory.

Note, as you likely have many processes on the system, the graphs are designed to show only the top processes. All running processes, however, are available in the drop-down if you want to access the history for a specific process.

Let us know what you think. We are looking at how to integrate this functionality directly into Percona Monitoring and Management!



Experience, Not Conversion, is the Key to the Switching Economy

Planet MySQL - Mon, 05/21/2018 - 06:00

In a world increasingly defined by instant-gratification, the demand for positive and direct shopping experiences has risen exponentially. Today’s always-on customers are drawn to the most convenient products and services available. As a result, we are witnessing higher customer switching rates, with consumers focusing more on convenience than on branding, reputation, or even on price.  

In this switching economy – where information and services are always just a click away –  we tend to reach for what suits our needs in the shortest amount of time. This shift in decision making has made it harder than ever for businesses to build loyalty among their customers and to guarantee repeat purchases. According to recent research, only 1 in 5 consumers now consider it a hassle to switch between brands, while a third would rather shop for better deals than stay loyal to a single organization. 

What's Changed? 

The consumer mindset, for one. And the switching tools available to customers have also changed. Customers now have the ability to research extensively before they purchase, with access to reviews and price comparison sites often meaning that consumers don't even make it to your website before being captured by a competitor.

This poses a serious concern for those brands that have devoted their time – and marketing budgets – to building great customer experiences across their websites. 

Clearly this is not to say that on-site experiences aren’t important, but rather that they are only one part of the wider customer journey. In an environment as complex and fast moving as the switching economy, you must look to take a more omnichannel approach to experience, examining how your websites, mobile apps, customer service teams, external reviews and in-store experiences are all shaping the customers’ perceptions of your brand. 

What Still Needs to Change?

Only by getting to know your customers across all of these different channels can you future-proof your brand in the switching economy. To achieve this, you must establish a new set of metrics that go beyond website conversion. The days of conversion optimization being viewed as the secret sauce for competitive differentiation are over; now brands must recognize that high conversion rates are not necessarily synonymous with a great customer experience – or lifetime loyalty. 

Today, the real measure of success does not come from conversion, but from building a true understanding of your customers – across every touchpoint in the omnichannel journey. Through the rise of experience analytics, you finally have the tools and technologies needed to understand customers in this way, and to tailor all aspects of your brand to maximize convenience, encourage positive mindsets and pre-empt when your customers are planning to switch to a different brand. 

It is only through this additional layer of insight that businesses and brands will rebuild the notion of customer loyalty, and ultimately, overcome the challenges of the switching economy. 

Want to learn more about simplifying and improving the customer experience? Read Customer Experience Simplified: Deliver The Experience Your Customers Want to discover how to provide customer experiences that are managed as carefully as the product, the price, and the promotion of the marketing mix.


Understanding Deadlocks in MySQL & PostgreSQL

Planet MySQL - Mon, 05/21/2018 - 03:16

When working with databases, concurrency control is the concept that ensures that database transactions can be performed concurrently without violating data integrity.

There is a lot of theory and different approaches around this concept and how to accomplish it, but we will briefly refer to the way that PostgreSQL and MySQL (when using InnoDB) handle it, and a common problem that can arise in highly concurrent systems: deadlocks.

These engines implement concurrency control by using a method called MVCC (Multiversion Concurrency Control). In this method, when an item is being updated, the changes will not overwrite the original data, but instead a new version of the item (with the changes) will be created. Thus we will have several versions of the item stored.

One of the main advantages of this model is that locks acquired for querying (reading) data do not conflict with locks acquired for writing data, and so reading never blocks writing and writing never blocks reading.
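A minimal sketch of this behaviour, using a hypothetical table like the country table that appears later in this post:

-- Session A
BEGIN;
UPDATE country SET population = population + 1 WHERE code = 'NLD';

-- Session B (runs while session A is still open)
SELECT population FROM country WHERE code = 'NLD';
-- Returns immediately with the last committed value: the reader is not
-- blocked by the writer, it simply sees the previous version of the row.

-- Session A
COMMIT;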

But, if several versions of the same item are stored, which version of it will a transaction see? To answer that question, we need to review the concept of transaction isolation. Transactions specify an isolation level that defines the degree to which one transaction must be isolated from resource or data modifications made by other transactions. This degree is directly related to the locking generated by a transaction, and so, as it can be specified at the transaction level, it can determine the impact that a running transaction has on other running transactions.

This is a very interesting and long topic, although we will not go into too much detail in this blog. We'd recommend the PostgreSQL and MySQL official documentation for further reading on this topic.

So, why are we going into the above topics when dealing with deadlocks? Because SQL commands will automatically acquire locks to ensure the MVCC behaviour, and the lock type acquired depends on the transaction isolation level defined.

There are several types of locks (again, another long and interesting topic to review for PostgreSQL and MySQL), but the important thing about them is how they interact (more exactly, how they conflict) with each other. Why is that? Because two transactions cannot hold locks of conflicting modes on the same object at the same time. And, not a minor detail: once acquired, a lock is normally held until the end of the transaction.

This is a PostgreSQL example of how locking types conflict with each other:

PostgreSQL Locking types conflict

And for MySQL:

MySQL Locking types conflict

X = exclusive lock    IX = intention exclusive lock
S = shared lock       IS = intention shared lock

So what happens when I have two running transactions that want to hold conflicting locks on the same object at the same time? One of them will get the lock and the other will have to wait.

So now we are in a position to truly understand what is happening during a deadlock.

What is a deadlock then? As you can imagine, there are several definitions for a database deadlock, but I like the following one for its simplicity.

A database deadlock is a situation in which two or more transactions are waiting for one another to give up locks.

So, for example, the following situation will lead us to a deadlock:

Deadlock example

Here, application A gets a lock on table 1 row 1 in order to make an update.

At the same time application B gets a lock on table 2 row 2.

Now application A needs to get a lock on table 2 row 2, in order to continue the execution and finish the transaction, but it cannot get the lock because it is held by application B. Application A needs to wait for application B to release it.

But application B needs to get a lock on table 1 row 1, in order to continue the execution and finish the transaction, but it cannot get the lock because it is held by application A.

So here we are in a deadlock situation. Application A is waiting for the resource held by application B in order to finish and application B is waiting for the resource held by application A. So, how to continue? The database engine will detect the deadlock and kill one of the transactions, unblocking the other one and raising a deadlock error on the killed one.

Let's check some PostgreSQL and MySQL deadlock examples:

PostgreSQL

Suppose we have a test database with information from the countries of the world.

world=# SELECT code,region,population FROM country WHERE code IN ('NLD','AUS');
 code |          region           | population
------+---------------------------+------------
 NLD  | Western Europe            |   15864000
 AUS  | Australia and New Zealand |   18886000
(2 rows)

We have two sessions that want to make changes to the database.

The first session will modify the region field for the NLD code, and the population field for the AUS code.

The second session will modify the region field for the AUS code, and the population field for the NLD code.

Table data:

code: NLD
region: Western Europe
population: 15864000

code: AUS
region: Australia and New Zealand
population: 18886000

Session 1:

world=# BEGIN;
BEGIN
world=# UPDATE country SET region='Europe' WHERE code='NLD';
UPDATE 1

Session 2:

world=# BEGIN;
BEGIN
world=# UPDATE country SET region='Oceania' WHERE code='AUS';
UPDATE 1
world=# UPDATE country SET population=15864001 WHERE code='NLD';

Session 2 will hang waiting for Session 1 to finish.

Session 1:

world=# UPDATE country SET population=18886001 WHERE code='AUS';
ERROR:  deadlock detected
DETAIL:  Process 1181 waits for ShareLock on transaction 579; blocked by process 1148.
Process 1148 waits for ShareLock on transaction 578; blocked by process 1181.
HINT:  See server log for query details.
CONTEXT:  while updating tuple (0,15) in relation "country"

Here we have our deadlock. The system detected the deadlock and killed session 1.

Session 2:

world=# BEGIN;
BEGIN
world=# UPDATE country SET region='Oceania' WHERE code='AUS';
UPDATE 1
world=# UPDATE country SET population=15864001 WHERE code='NLD';
UPDATE 1

And we can check that the second session finished correctly after the deadlock was detected and Session 1 was killed (thus, the lock was released).

To have more details we can see the log in our PostgreSQL server:

2018-05-16 12:56:38.520 -03 [1181] ERROR:  deadlock detected
2018-05-16 12:56:38.520 -03 [1181] DETAIL:  Process 1181 waits for ShareLock on transaction 579; blocked by process 1148.
        Process 1148 waits for ShareLock on transaction 578; blocked by process 1181.
        Process 1181: UPDATE country SET population=18886001 WHERE code='AUS';
        Process 1148: UPDATE country SET population=15864001 WHERE code='NLD';
2018-05-16 12:56:38.520 -03 [1181] HINT:  See server log for query details.
2018-05-16 12:56:38.520 -03 [1181] CONTEXT:  while updating tuple (0,15) in relation "country"
2018-05-16 12:56:38.520 -03 [1181] STATEMENT:  UPDATE country SET population=18886001 WHERE code='AUS';
2018-05-16 12:59:50.568 -03 [1181] ERROR:  current transaction is aborted, commands ignored until end of transaction block

Here we will be able to see the actual commands that were detected on deadlock.

MySQL

To simulate a deadlock in MySQL we can do the following.

As with PostgreSQL, suppose we have a test database with information on actors and movies among other things.

mysql> SELECT first_name,last_name FROM actor WHERE actor_id IN (1,7);
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| PENELOPE   | GUINESS   |
| GRACE      | MOSTEL    |
+------------+-----------+
2 rows in set (0.00 sec)

We have two processes that want to make changes to the database.

The first process will modify the field first_name for actor_id 1, and the field last_name for actor_id 7.

The second process will modify the field first_name for actor_id 7, and the field last_name for actor_id 1.

Table data:

actor_id: 1
first_name: PENELOPE
last_name: GUINESS

actor_id: 7
first_name: GRACE
last_name: MOSTEL

Session 1:

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> UPDATE actor SET first_name='GUINESS' WHERE actor_id='1';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Session 2:

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> UPDATE actor SET first_name='MOSTEL' WHERE actor_id='7';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1';

Session 2 will hang waiting for Session 1 to finish.

Session 1:

mysql> UPDATE actor SET last_name='GRACE' WHERE actor_id='7';
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

Here we have our deadlock. The system detected the deadlock and killed session 1.

Session 2:

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> UPDATE actor SET first_name='MOSTEL' WHERE actor_id='7';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1';
Query OK, 1 row affected (8.52 sec)
Rows matched: 1  Changed: 1  Warnings: 0

As we can see in the error, as we saw for PostgreSQL, there is a deadlock between both processes.

For more details we can use the command SHOW ENGINE INNODB STATUS\G:

mysql> SHOW ENGINE INNODB STATUS\G
------------------------
LATEST DETECTED DEADLOCK
------------------------
2018-05-16 18:55:46 0x7f4c34128700
*** (1) TRANSACTION:
TRANSACTION 1456, ACTIVE 33 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 54, OS thread handle 139965388506880, query id 15876 localhost root updating
UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1456 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 0000000005af; asc       ;;
 2: len 7; hex 2d000001690110; asc -   i  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afca8b3; asc Z   ;;
*** (2) TRANSACTION:
TRANSACTION 1455, ACTIVE 47 sec starting index read, thread declared inside InnoDB 5000
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 53, OS thread handle 139965267871488, query id 16013 localhost root updating
UPDATE actor SET last_name='GRACE' WHERE actor_id='7'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1455 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 0000000005af; asc       ;;
 2: len 7; hex 2d000001690110; asc -   i  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afca8b3; asc Z   ;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1455 lock_mode X locks rec but not gap waiting
Record lock, heap no 202 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0007; asc   ;;
 1: len 6; hex 0000000005b0; asc       ;;
 2: len 7; hex 2e0000016a0110; asc .   j  ;;
 3: len 6; hex 4d4f5354454c; asc MOSTEL;;
 4: len 6; hex 4d4f5354454c; asc MOSTEL;;
 5: len 4; hex 5afca8c1; asc Z   ;;
*** WE ROLL BACK TRANSACTION (2)

Under the title "LATEST DETECTED DEADLOCK", we can see details of our deadlock.

To see the detail of the deadlock in the mysql error log, we must enable the option innodb_print_all_deadlocks in our database.

mysql> set global innodb_print_all_deadlocks=1;
Query OK, 0 rows affected (0.00 sec)

MySQL error log:

2018-05-17T18:36:58.341835Z 12 [Note] InnoDB: Transactions deadlock detected, dumping detailed information.
2018-05-17T18:36:58.341869Z 12 [Note] InnoDB: *** (1) TRANSACTION:
TRANSACTION 1812, ACTIVE 42 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 11, OS thread handle 140515492943616, query id 8467 localhost root updating
UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1'
2018-05-17T18:36:58.341945Z 12 [Note] InnoDB: *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1812 lock_mode X locks rec but not gap waiting
Record lock, heap no 204 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 000000000713; asc       ;;
 2: len 7; hex 330000016b0110; asc 3   k  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afdcb89; asc Z   ;;
2018-05-17T18:36:58.342347Z 12 [Note] InnoDB: *** (2) TRANSACTION:
TRANSACTION 1811, ACTIVE 65 sec starting index read, thread declared inside InnoDB 5000
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 12, OS thread handle 140515492677376, query id 9075 localhost root updating
UPDATE actor SET last_name='GRACE' WHERE actor_id='7'
2018-05-17T18:36:58.342409Z 12 [Note] InnoDB: *** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1811 lock_mode X locks rec but not gap
Record lock, heap no 204 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 000000000713; asc       ;;
 2: len 7; hex 330000016b0110; asc 3   k  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afdcb89; asc Z   ;;
2018-05-17T18:36:58.342793Z 12 [Note] InnoDB: *** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1811 lock_mode X locks rec but not gap waiting
Record lock, heap no 205 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0007; asc   ;;
 1: len 6; hex 000000000714; asc       ;;
 2: len 7; hex 340000016c0110; asc 4   l  ;;
 3: len 6; hex 4d4f5354454c; asc MOSTEL;;
 4: len 6; hex 4d4f5354454c; asc MOSTEL;;
 5: len 4; hex 5afdcba0; asc Z   ;;
2018-05-17T18:36:58.343105Z 12 [Note] InnoDB: *** WE ROLL BACK TRANSACTION (2)

Taking into account what we have learned above about why deadlocks happen, you can see that there is not much we can do on the database side to avoid them. Anyway, as DBAs it is our duty to actually catch them, analyze them, and provide feedback to the developers.

The reality is that these errors are particular to each application, so you will need to check them one by one, and there is no guide that tells you how to troubleshoot them. Keeping this in mind, there are some things you can look for.

Search for long-running transactions. As locks are usually held until the end of a transaction, the longer the transaction, the longer the locks are held over the resources. If possible, try to split long-running transactions into smaller/faster ones.

Sometimes it is not possible to actually split the transactions, so the work should focus on trying to execute those operations in a consistent order each time, so that transactions form well-defined queues and do not deadlock, as sketched below.
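A minimal sketch of this idea, using the actor table from the MySQL example above: if every transaction updates rows in the same order (say, ascending primary key), two concurrent transactions queue up instead of deadlocking.

-- Both sessions run their updates in the same order: actor_id 1 first, then 7
BEGIN;
UPDATE actor SET first_name = 'PENELOPE' WHERE actor_id = 1;
UPDATE actor SET last_name  = 'MOSTEL'   WHERE actor_id = 7;
COMMIT;
-- A concurrent transaction issuing the same two updates in the same order
-- simply blocks on the first UPDATE until this one commits: no deadlock.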

One workaround that you can also propose is to add retry logic into the application (of course, try to solve the underlying issue first) in such a way that, if a deadlock happens, the application will run the same commands again.

Check the isolation levels used; sometimes you can try changing them. Look for commands like SELECT FOR UPDATE and SELECT FOR SHARE, as they generate explicit locks, and evaluate whether they are really needed or whether you can work with an older snapshot of the data. If you cannot remove these commands, one thing you can try is using a lower isolation level such as READ COMMITTED.
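For instance, lowering the isolation level for a single transaction is cheap to experiment with. A sketch (note the placement of the statement differs slightly between the two engines):

-- MySQL: set it before starting the transaction
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
-- ... statements ...
COMMIT;

-- PostgreSQL: set it inside the transaction block
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- ... statements ...
COMMIT;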

Of course, always add well-chosen indexes to your tables. Then your queries need to scan fewer index records and consequently set fewer locks.


On a higher level, as a DBA you can take some precautions to minimize locking in general. To name one example, in this case for PostgreSQL, you can avoid adding a default value in the same command that adds a column. Altering a table takes a really aggressive lock, and setting a default value for it will actually update the existing rows that have null values, making this operation take really long. So if you split this operation into several commands (adding the column, adding the default, updating the null values), you will minimize the locking impact.
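A sketch of that split, for a hypothetical table t (relevant to PostgreSQL versions where adding a column with a default rewrites the table):

-- One heavy statement: holds an aggressive lock while every row is rewritten
-- ALTER TABLE t ADD COLUMN c integer DEFAULT 0;

-- Split into lighter steps:
ALTER TABLE t ADD COLUMN c integer;          -- fast, metadata-only change
ALTER TABLE t ALTER COLUMN c SET DEFAULT 0;  -- applies to new rows only
UPDATE t SET c = 0 WHERE c IS NULL;          -- backfill existing rows, ideally in batches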

Of course, there are tons of tips like this that DBAs pick up with practice (creating indexes concurrently, creating the primary key index separately before adding the primary key, and so on), but the important thing is to learn and understand this "way of thinking", and to always minimize the lock impact of the operations we are doing.


Recommended OTN Seminars & Events for Engineers across Japan (June 2018)

Planet MySQL - Mon, 05/21/2018 - 00:16
[Tokyo] MySQL 8.0 New Features Seminar
Optimizer & Performance

June 1, 2018, 1:30 PM - 4:00 PM @ Oracle Japan headquarters (Gaienmae, Tokyo)

This event introduces topics from among the new features of MySQL 8.0 related to the optimizer and to performance/scalability improvements.

Benefits of attending this event:
  • Learn about the optimizer improvements and new features in MySQL 8.0
  • Learn about the performance and scalability improvements and new features in MySQL 8.0

[Osaka] Oracle Journey to Cloud Seminar
Secrets of Building the Best Database Infrastructure in the Cloud Era

June 27, 2018, 2:00 PM - 4:30 PM @ Oracle Japan Kansai office (Osaka)

Starting with the concept of the world's first autonomous database cloud, announced in autumn 2017, this seminar introduces the latest information on the newest version of Oracle Database and how to optimize database infrastructure in the cloud era. It also covers key points for choosing a platform to support that latest database infrastructure, along with customer case studies.



error: Failed dependencies: pkgconfig(openssl) is needed by mysql-commercial-devel-8.0.11-1.1.el7.x86_64

Planet MySQL - Sun, 05/20/2018 - 01:19

Recently we were evaluating MySQL 8 Enterprise for a customer. During our installation using RPM files (our MySQL Enterprise Edition installations are always RPM based) on CentOS Linux release 7.4.1708 (Core), we ran into a very unusual error: "error: Failed dependencies: pkgconfig(openssl) is needed by mysql-commercial-devel-8.0.11-1.1.el7.x86_64". This was happening, more specifically, during the installation of "mysql-commercial-devel-8.0.11-1.1.el7.x86_64.rpm", so we decided to write this post about the error and how we fixed it.

[root@localhost MySQL8-Enterprise-RPM]# rpm -ivh mysql-commercial-server-8.0.11-1.1.el7.x86_64.rpm mysql-commercial-client-8.0.11-1.1.el7.x86_64.rpm mysql-commercial-libs-8.0.11-1.1.el7.x86_64.rpm mysql-commercial-common-8.0.11-1.1.el7.x86_64.rpm mysql-commercial-devel-8.0.11-1.1.el7.x86_64.rpm
warning: mysql-commercial-server-8.0.11-1.1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
error: Failed dependencies:
        pkgconfig(openssl) is needed by mysql-commercial-devel-8.0.11-1.1.el7.x86_64

After some research, we found that installing "openssl-devel" addresses the dependency issue; "openssl-devel" is in fact a prerequisite for a successful installation of "mysql-commercial-devel-8.0.11-1.1.el7.x86_64.rpm".

[root@localhost MySQL8-Enterprise-RPM]# yum install openssl-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vbctv.in
 * extras: mirror.vbctv.in
 * updates: mirror.vbctv.in
Resolving Dependencies
--> Running transaction check
---> Package openssl-devel.x86_64 1:1.0.2k-12.el7 will be installed
--> Processing Dependency: zlib-devel(x86-64) for package: 1:openssl-devel-1.0.2k-12.el7.x86_64
--> Processing Dependency: krb5-devel(x86-64) for package: 1:openssl-devel-1.0.2k-12.el7.x86_64
--> Running transaction check
Installed:
  openssl-devel.x86_64 1:1.0.2k-12.el7
Dependency Installed:
  keyutils-libs-devel.x86_64 0:1.5.8-3.el7
  krb5-devel.x86_64 0:1.15.1-19.el7
  libcom_err-devel.x86_64 0:1.42.9-12.el7_5
  libkadm5.x86_64 0:1.15.1-19.el7
  libselinux-devel.x86_64 0:2.5-12.el7
  libselinux-python.x86_64 0:2.5-12.el7
  libselinux-utils.x86_64 0:2.5-12.el7
  libsepol.x86_64 0:2.5-8.1.el7
  libss.x86_64 0:1.42.9-12.el7_5
Complete!

 

[root@localhost MySQL8-Enterprise-RPM]# rpm -ivh mysql-commercial-devel-8.0.11-1.1.el7.x86_64.rpm
warning: mysql-commercial-devel-8.0.11-1.1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-commercial-devel-8.0.11-1.1################################# [100%]

Confirm that all packages are installed for successful MySQL operation:

[root@localhost MySQL8-Enterprise-RPM]# rpm -qa | grep mysql
mysql-commercial-common-8.0.11-1.1.el7.x86_64
mysql-commercial-client-8.0.11-1.1.el7.x86_64
mysql-commercial-libs-compat-8.0.11-1.1.el7.x86_64
mysql-commercial-devel-8.0.11-1.1.el7.x86_64
mysql-commercial-libs-8.0.11-1.1.el7.x86_64
mysql-commercial-server-8.0.11-1.1.el7.x86_64
[root@localhost MySQL8-Enterprise-RPM]#



How to set up MySQL InnoDB Cluster? Part One

Planet MySQL - Sat, 05/19/2018 - 11:09

This post is about setting up a MySQL InnoDB Cluster with 5 nodes on a sandbox deployment. Here, we focus on the implementation part; the core concepts will be explained in separate posts.


Prerequisites:
  • MySQL Engine
  • MySQL Shell
  • MySQL Router
Deploying MySQL InnoDB Cluster involves the following steps:
  • Deploying MySQL Engine (Sandbox Instance)
  • Creating an InnoDB Cluster
  • Adding nodes to InnoDB Cluster
  • Configuring MySQL Router for High Availability.
  • Testing High Availability.

Deploying MySQL Engine:

If the MySQL engines are already installed on all the nodes, you can skip this step and move directly to the "Creating an InnoDB Cluster" part.


I am deploying 5 sandbox instances (a capability built into the MySQL Shell application) on the same machine. On a production system, there would be a separate node for each MySQL engine. Let's begin with the deployments:


To open MySQL Shell: Start -> cmd -> type mysqlsh (OR) Start -> MySQL Shell


To change script mode: \js – JavaScript mode | \py – Python mode | \sql – SQL mode


MySQL JS > dba.deploySandboxInstance(port)


The deploySandboxInstance() method deploys a new sandbox instance on the specified port; let's deploy the following 5 sandbox instances:


dba.deploySandboxInstance (3307)

dba.deploySandboxInstance (3308)

dba.deploySandboxInstance (3309)

dba.deploySandboxInstance (3310)

dba.deploySandboxInstance (3311)


Sample Output:


MySQL JS > dba.deploySandboxInstance (3307)

A new MySQL sandbox instance will be created on this host in

C:\Users\rathish.kumar\MySQL\mysql-sandboxes\3307

Warning: Sandbox instances are only suitable for deploying and running on your local machine for testing purposes and are not accessible from external networks.

Please enter a MySQL root password for the new instance: ***

Deploying new MySQL instance...

Instance localhost: 3307 successfully deployed and started.

Use shell.connect('root@localhost:3307'); to connect to the instance.

MySQL JS >


To connect the deployed sandbox instance:


MySQL JS > \connect user@host:port and enter the password when prompted. (OR)

MySQL JS > shell.connect('user@host:port')


Sample Output:


MySQL localhost: 3307 ssl JS > \connect root@localhost:3307

Creating a session to 'root@localhost:3307'

Enter password: ***

Fetching schema names for auto completion... Press ^C to stop.

Closing old connection...

Your MySQL connection id is 16

Server version: 8.0.11 MySQL Community Server - GPL

No default schema selected; type \use to set one.

MySQL localhost: 3307 ssl JS > \sql

Switching to SQL mode... Commands end with ;

MySQL localhost: 3307 ssl SQL > select @@port;

+--------+

| @@port |

+--------+

|   3307 |

+--------+

1 row in set (0.0006 sec)

MySQL localhost: 3307 ssl SQL >


Creating InnoDB Cluster:


To create an InnoDB cluster, connect to the seed (primary) server, which contains the original data, using the method above, and follow the steps below:
var cluster = dba.createCluster('ClusterName')
Sample Output:
MySQL localhost:3307 ssl JS > var cluster = dba.createCluster('DBCluster')
A new InnoDB cluster will be created on instance 'root@localhost:3307'.
Validating instance at localhost:3307...
Instance detected as a sandbox.

Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as L-IS-RATHISH

Instance configuration is suitable.

Creating InnoDB cluster 'DBCluster' on 'root@localhost:3307'...

Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.

At least 3 instances are needed for the cluster to be able to withstand up to one server failure.
Adding nodes to InnoDB Cluster:
The secondary replication nodes will be added to cluster by using the addInstance() method.


mysql-js> cluster.addInstance('user@host:port')


Let us add the nodes, one by one:


cluster.addInstance('root@localhost:3308');

cluster.addInstance('root@localhost:3309');

cluster.addInstance('root@localhost:3310');

cluster.addInstance('root@localhost:3311');
Sample Output:
MySQL  localhost:3307 ssl  JS > cluster.addInstance('root@localhost:3311');

A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3311': ***

Adding instance to the cluster ...

Validating instance at localhost:3311...

Instance detected as a sandbox.

Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as L-IS-RATHISH

Instance configuration is suitable.

The instance 'root@localhost:3311' was successfully added to the cluster.
Configuring MySQL Router for High Availability:
MySQL Router routes client connections to servers in the cluster and it provides separate ports for Read and Read/Write operations.


MySQL Router takes its configuration from InnoDB Cluster's metadata and configures itself using the --bootstrap option. It is recommended to install MySQL Router on a separate server, though it can also be installed on the application server.


The MySQL Router command is given below, this should be run on the server with Read/Write (R/W) role.


shell> mysqlrouter --bootstrap user@host:port


The server roles can be checked by using the status() method. Let us check the status of our cluster:


MySQL  localhost:3307 ssl  JS > cluster.status()

{

    "clusterName": "DBCluster",

    "defaultReplicaSet": {

        "name": "default",

        "primary": "localhost:3307",

        "ssl": "REQUIRED",

        "status": "OK",

        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.",

        "topology": {

            "localhost:3307": {

                "address": "localhost:3307",

                "mode": "R/W",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3308": {

                "address": "localhost:3308",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3309": {

                "address": "localhost:3309",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3310": {

                "address": "localhost:3310",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3311": {

                "address": "localhost:3311",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            }

        }

    },

    "groupInformationSourceMember": "mysql://root@localhost:3307"

}

 MySQL  localhost:3307 ssl  JS >


The server root@localhost:3307 is currently assigned the R/W role. Configure MySQL Router on this server:


C:\Windows\system32>mysqlrouter --bootstrap root@localhost:3307

Please enter MySQL password for root:

Reconfiguring system MySQL Router instance...

WARNING: router_id 1 not found in metadata

MySQL Router has now been configured for the InnoDB cluster 'DBCluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'DBCluster':

- Read/Write Connections: localhost:6446

- Read/Only Connections: localhost:6447

X protocol connections to cluster 'DBCluster':

- Read/Write Connections: localhost:64460

- Read/Only Connections: localhost:64470

Existing configurations backed up to 'C:/Program Files/MySQL/MySQL Router 8.0/mysqlrouter.conf.bak'


Connecting InnoDB Cluster:


From the MySQL Router configuration, we get the connection information: by default, port 6446 is used for Read/Write connections and port 6447 for Read-Only connections. MySQL Router allows you to configure custom port numbers for R/W and R/O client connections.


Let us first connect to the Read/Write port and then to the Read-Only port for testing.


Read/Write Instance:


C:\Users\rathish.kumar>mysql -u root -h localhost -P6446 -p

Enter password: *

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 176

Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select @@port;

+--------+

| @@port |

+--------+

|   3307 |

+--------+

1 row in set (0.00 sec)

mysql> create database ClustDB;

Query OK, 1 row affected (0.09 sec)

mysql> use ClustDB;

Database changed

mysql> create table t1 (id int auto_increment primary key);

Query OK, 0 rows affected (0.18 sec)

mysql> insert into t1 (id) values(1);

Query OK, 1 row affected (0.06 sec)


Read/Only Instance:


C:\Users\rathish.kumar>mysql -u root -h localhost -P6447 -p

Enter password: *

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 47

Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select @@port;

+--------+

| @@port |

+--------+

|   3308 |

+--------+

1 row in set (0.00 sec)

mysql> select * from ClustDB.t1;

+----+

| id |

+----+

|  1 |

+----+

1 row in set (0.00 sec)

mysql> insert into ClustDB.t1 (id) values (2);

ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

mysql>


Testing High Availability:


We have connected to the R/W and R/O instances, and they are working as expected. Now let's test high availability by killing the primary seed node (3307) and the Read-Only instance (3308).


dba.killSandboxInstance(3307)

dba.killSandboxInstance(3308)


Sample output:


MySQL  localhost:3307 ssl  JS > dba.killSandboxInstance(3307);

The MySQL sandbox instance on this host in

C:\Users\rathish.kumar\MySQL\mysql-sandboxes\3307 will be killed

Killing MySQL instance...

Instance localhost:3307 successfully killed.


Now re-run the query on the existing Read/Write and Read-Only connections and check the port:


Read/Only Instance:


mysql> select @@port;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    38

Current database: *** NONE ***

+--------+

| @@port |

+--------+

|   3310 |

+--------+

1 row in set (1.30 sec)

mysql>


This error is due to connection rerouting while we are still connected to the server; it will not occur on new connections. Let us try the Read/Write connection:


Read/Write Instance:


C:\Users\rathish.kumar>mysql -u root -h localhost -P6446 -p

Enter password: *

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 32

Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select @@port;

+--------+

| @@port |

+--------+

|   3311 |

+--------+

1 row in set (0.00 sec)

mysql>


No changes are required on the application side: InnoDB Cluster identifies the change and automatically reconfigures itself, and high availability is achieved with the help of MySQL Router.


I suggest you test InnoDB Cluster in a lab environment and share your findings in the comments section for other readers. I will follow up with other articles on working with InnoDB Cluster and troubleshooting InnoDB Cluster. If you need any assistance with InnoDB Cluster, please ask in the comments section.



Presentation : MySQL Timeout Variables Explained

Planet MySQL - Fri, 05/18/2018 - 23:56

MySQL has multiple timeout variables. These slides give a brief overview of the different timeout variables and their purposes.


Percona Server for MySQL 5.5.60-38.12 Is Now Available

Planet MySQL - Fri, 05/18/2018 - 15:12

Percona announces the release of Percona Server for MySQL 5.5.60-38.12 on May 18, 2018. Based on MySQL 5.5.60, including all the bug fixes in it, Percona Server for MySQL 5.5.60-38.12 is now the current stable release in the 5.5 series.

Percona Server for MySQL is open-source and free. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed
  • The mysqldump utility with the --innodb-optimize-keys option did not work correctly with foreign keys pointing to the same table, producing invalid SQL statements. Bugs fixed #1125 and #3863.
  • A typo in the plugin.cmake file prevented plugins from being compiled statically into the server. Bug fixed #3871 (upstream #89766).
  • Using -DWITHOUT_<PLUGIN>=ON CMake variable to exclude a plugin from the build didn’t work for some plugins, including a number of storage engines. Bug fixed #3901.
  • A fix was introduced to remove GCC 8 compilation warnings for the Percona Server build. Bug fixed #3950.
  • A code clean-up was done to fix compilation warnings and errors specific for clang 6. Bug fixed #3893 (upstream #90111).
  • The Percona Server Debian package descriptions included a reference to the /etc/mysql/my.cnf file, which is not actually present in these packages. Bug fixed #2046.
  • A clean-up in Percona Server binlog-related code was made to avoid uninitialized memory comparison. Bug fixed #3925 (upstream #90238).
  • The temporary file I/O was not instrumented for Performance Schema. Bug fixed #3937 (upstream #90264).
  • A key_block_size value was set automatically by the Improved MEMORY Storage Engine, which resulted in warnings when changing the engine type to InnoDB, and constantly growing key_block_size during alter operations. Bugs fixed #3936, #3940, and #3943.

Other bugs fixed: #3767 “Fix compilation warnings/errors with clang”, #3778 “5.5 Tree received Percona-TokuBackup submodule where it should not”, #3794 “MTR test main.percona_show_temp_tables_stress does not wait for events to start”,  #3798 “MTR test innodb.percona_extended_innodb_status fails if InnoDB status contains unquoted special characters”, and #3926 “Potentially truncated bitmap file name in log_online_open_bitmap_file_read_only() (storage/innobase/log/log0online.cc)”.

Find the release notes for Percona Server for MySQL 5.5.60-38.12 in our online documentation. Report bugs in the Jira bug tracker.

The post Percona Server for MySQL 5.5.60-38.12 Is Now Available appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Cloud Disaster Recovery for MariaDB and MySQL

Planet MySQL - Fri, 05/18/2018 - 05:17

MySQL has a long tradition of geographic replication. Distributing clusters to remote data centers reduces the effects of geographic latency by pushing data closer to the user, and it also provides disaster recovery capability. Due to the significant cost of duplicating hardware at a separate site, few companies could afford it in the past. Another cost is the skilled staff able to design, implement and maintain a sophisticated multi-data-center environment.

With the cloud and DevOps automation revolution, having distributed data centers has never been more accessible to the masses. Cloud providers keep increasing the range of services they offer at better prices. One can build cross-cloud, hybrid environments with data spread all over the world, and make flexible and scalable DR plans to address a broad range of disruption scenarios. In some cases, that can just be a backup stored offsite. In other cases, it can be a 1-to-1 copy of a production environment running somewhere else.


In this blog we will take a look at some of these cases, and address common scenarios.

Storing Backups in the Cloud

A DR plan is a general term for the process of recovering disrupted IT systems and other critical assets that an organization uses. Backup is the primary method to achieve this. When a backup sits in the same data center as your production servers, you risk losing all data if you lose that data center. To avoid that, you should have a policy of keeping a copy in another physical location. It's still good practice to keep a backup on local disk to reduce the time needed to restore. In most cases, you will keep your primary backup in the same data center (to minimize restore time), but you should also have a backup that can be used to restore business procedures when the primary data center is down.

ClusterControl: Upload Backup to the cloud

ClusterControl allows seamless integration between your database environment and the cloud, and provides options for migrating data there. We offer full database backups to Amazon Web Services (AWS), Google Cloud Services or Microsoft Azure. Backups can be executed, scheduled, downloaded and restored directly from your cloud provider of choice. This ability provides increased redundancy, better disaster recovery options, and benefits in both performance and cost savings.

ClusterControl: Managing Cloud Credentials

The first step in setting up a "data-center-failure-proof" backup is to provide credentials for your cloud provider. You can choose from multiple vendors here. Let's take a look at the setup process for the most popular one, AWS.

ClusterControl: adding cloud credentials

All you need is the AWS Key ID and the secret for the region where you want to store your backup. You can get these from the AWS console in a few steps:

  1. Use your AWS account email address and password to sign in to the AWS Management Console as the AWS account root user.
  2. On the IAM Dashboard page, choose your account name in the navigation bar, and then select My Security Credentials.
  3. If you see a warning about accessing the security credentials for your AWS account, choose Continue to Security Credentials.
  4. Expand the Access keys (access key ID and secret access key) section.
  5. Choose Create New Access Key, then choose Download Key File to save the access key ID and secret access key to a file on your computer. After you close the dialog box, you will not be able to retrieve this secret access key again.
ClusterControl: Hybrid cloud backup

When all is set, you can adjust your backup schedule and enable the backup-to-cloud option. To reduce network traffic, make sure to enable data compression; it makes backups smaller and minimizes upload time. Another good practice is to encrypt the backup: ClusterControl creates a key automatically and uses it if you decide to restore. Advanced backup policies should use different retention times for backups stored on servers in the same data center and for backups stored in another physical location. Set a longer retention period for cloud-based backups and a shorter one for backups stored near the production environment, as the probability of needing a restore drops as a backup ages.

ClusterControl: backup retention policy

Extend your cluster with asynchronous replication

Galera with asynchronous replication can be an excellent solution for building an active DR node in a remote data center, and there are a few good reasons to attach an asynchronous slave to a Galera Cluster. Long-running OLAP-type queries on a Galera node might slow down the whole cluster, so offloading them to a slave helps. Additionally, delayed replication can save you from human errors, since that fateful Enter keypress on the master is not immediately applied to your backup node.
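
A minimal sketch of configuring such a delayed slave by hand (the hostname, credentials and one-hour delay are illustrative assumptions; ClusterControl automates this for you):

-- Run on the slave; MySQL syntax shown, and GTID-based replication is
-- assumed (MariaDB would use MASTER_USE_GTID instead of
-- MASTER_AUTO_POSITION).
CHANGE MASTER TO
    MASTER_HOST = 'galera-node-1',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_AUTO_POSITION = 1,
    MASTER_DELAY = 3600;       -- stay one hour behind the master
START SLAVE;
SHOW SLAVE STATUS\G            -- SQL_Remaining_Delay shows the lag left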

ClusterControl: delayed replication

In ClusterControl, extending a Galera node group with asynchronous replication is done in a single-page wizard. You need to provide the necessary information about your future or existing slave server. The slave will be set up from an existing backup, or from a fresh XtraBackup streamed from the master to the slave.

Load balancers in multi-datacenter

Load balancers are a crucial component of MySQL and MariaDB high availability. It’s not enough to have a cluster spanning multiple data centers; you still need your services to be able to reach it. If a load balancer is available in only one data center, its failure will make your entire environment unreachable.

Web proxies in cluster environment

One of the popular methods of hiding the complexity of the database layer from an application is to use a proxy. Proxies act as an entry point to the databases: they track the state of the database nodes and direct traffic only to the nodes that are available. ClusterControl makes it easy to deploy and configure several different load balancing technologies for MySQL and MariaDB, including ProxySQL and HAProxy, with a point-and-click graphical interface.
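
For a sense of what the proxy layer does under the hood, here is a minimal sketch of registering backend nodes through ProxySQL's SQL-like admin interface (the hostnames and hostgroup ID are illustrative assumptions; connect to the admin port, 6032 by default, first):

-- Register one backend per data center in hostgroup 10
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, 'db-dc1.example.com', 3306),
       (10, 'db-dc2.example.com', 3306);
LOAD MYSQL SERVERS TO RUNTIME;  -- activate the change immediately
SAVE MYSQL SERVERS TO DISK;     -- persist it across ProxySQL restarts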

ClusterControl: load balancer HA

ClusterControl also allows you to make this component redundant by adding Keepalived on top of it. To prevent your load balancers from becoming a single point of failure, you would set up two identical HAProxy, ProxySQL or MariaDB MaxScale instances (one active and one standby, in a different DC) and use Keepalived to run the Virtual Router Redundancy Protocol (VRRP) between them. VRRP assigns a virtual IP address to the active load balancer and transfers that IP to the standby instance in case of failure. It is seamless because the two proxy instances need no shared state.

Of course, there are many things to consider when making your databases immune to data center failures, but proper planning and automation will make it work. Happy Clustering!

Tags:  MySQL MariaDB cloud disaster recovery
Categories: Web Technologies
