emGee Software Solutions Custom Database Applications


Web Technologies

What is the MEAN stack? JavaScript web applications

InfoWorld JavaScript - Thu, 11/15/2018 - 03:00

Most anyone who has developed web applications knows the acronym LAMP, which is used to describe web stacks made with Linux, Apache (web server), MySQL (database server), and PHP, Perl, or Python (programming language).

Another web-stack acronym has come to prominence in the last few years: MEAN—signifying a stack that uses MongoDB (database server), Express (server-side JavaScript framework), Angular (client-side JavaScript framework), and Node.js (JavaScript runtime).


MEAN is one manifestation of the rise of JavaScript as a “full-stack development” language. Node.js provides a JavaScript runtime on the server; Angular and Express are JavaScript frameworks used to build web clients and Node.js applications, respectively; and MongoDB’s data structures are stored in a binary JSON (JavaScript Object Notation) format, while its queries are expressed in JSON.


Categories: Web Technologies

MySQL X DevAPI Connection Pool with Connector/Python

Planet MySQL - Thu, 11/15/2018 - 00:04

If you have an application that needs to use multiple connections to the MySQL database for short periods of time, it can be a good idea to use a connection pool to avoid creating a new connection and going through the whole authentication process every time a connection is needed. For the Python Database API (PEP 249), MySQL Connector/Python has had support for connection pools for a long time. With the release of MySQL 8.0.13, the new X DevAPI also has support for connection pools.
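For comparison, this is roughly what the long-standing PEP 249 style of pooling looks like in MySQL Connector/Python (a minimal sketch; the host, port and credentials are placeholders):

# Minimal sketch of a classic (PEP 249) connection pool with Connector/Python.
# The host, port and credentials below are placeholders.
from mysql.connector import pooling

pool = pooling.MySQLConnectionPool(
    pool_name="app_pool",
    pool_size=5,
    host="127.0.0.1",
    port=3306,
    user="pyuser",
    password="Py@pp4Demo",
)

cnx = pool.get_connection()        # borrow a connection from the pool
cursor = cnx.cursor()
cursor.execute("SELECT CONNECTION_ID()")
print(cursor.fetchone())
cursor.close()
cnx.close()                        # returns the connection to the pool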

This blog will first cover the background of the X DevAPI connection pool feature in MySQL Connector/Python, and then provide an example.

Background

You create a connection pool using the mysqlx.get_client() function. You may wonder why you are creating a client and not a pool? As will be shown later, there is a little more to this feature than just a connection pool. So, it makes sense to use a more generic term.

The get_client() function takes two arguments: the connection options and the client options. The connection options are the usual arguments defining which MySQL instance to connect to, authentication-related options, how to connect, etc. The client options are the interesting ones in the discussion of a connection pool.

The client options are a dictionary or a JSON document written as a string. Currently, the only supported client options are the ones defining the connection pool. These are specified under the pooling field (an example will be provided shortly). This leaves room for expanding get_client() later with features other than a connection pool.
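For illustration, here is a small sketch of the pooling options given both as a Python dictionary and as a JSON string (the values are arbitrary examples; the individual options are described in the list below):

# The pooling options as a dictionary and as a JSON string (arbitrary values).
client_options_dict = {
    "pooling": {
        "enabled": True,
        "max_size": 10,
        "max_idle_time": 30000,   # milliseconds
        "queue_timeout": 5000,    # milliseconds
    }
}

client_options_json = '{"pooling": {"enabled": true, "max_size": 10, "max_idle_time": 30000, "queue_timeout": 5000}}'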

There are currently four connection pool options:

  • enabled: Whether the connection pool is enabled. The default is True.
  • max_size: The maximum number of connections that can be in the pool. The default is 25.
  • max_idle_time: How long, in milliseconds, a connection can be idle before it is closed. The default is 0, which means “infinite” (in practice 2147483000 milliseconds).
  • queue_timeout: The maximum amount of time in milliseconds that an attempt to get a connection from the pool will block. If no connections have become available before the timeout, a mysqlx.errors.PoolError exception is raised. The default is 0 which means “infinite” (in practice 2147483000 milliseconds).

What happens if you disable the connection pool? In that case the client that is returned simply works as a template for connections, and you can keep creating connections until MySQL Server runs out of them. In that case, the session you end up with is a regular connection, so when you close it, it will disconnect from MySQL.

Back to the case where the connection pool is enabled. Once you have the client object, you can start using the pool. You retrieve a connection from the pool with the get_session() method. No arguments are used. After this you can use the session just as a regular standalone connection. The only difference is that when you close the session, it is returned to the pool rather than disconnected.

Enough background. Let’s see an example.

Example

The following example creates a connection pool with at most two connections. Then two sessions are fetched from the pool and their connection IDs are printed. A third session will be requested before one of the original sessions is returned to the pool. Finally, a session is reused and its connection ID is printed.

import mysqlx
from datetime import datetime

cnxid_sql = "SELECT CONNECTION_ID() AS ConnectionID"
fmt_id = "Connection {0} ID ..........................: {1}"

connect_args = {
    "host": "127.0.0.1",
    "port": 33060,
    "user": "pyuser",
    "password": "Py@pp4Demo",
};

client_options = {
    "pooling": {
        "enabled": True,
        "max_size": 2,
        "max_idle_time": 60000,
        "queue_timeout": 3000,
    }
}

# Create the connection pool
pool = mysqlx.get_client(connect_args, client_options)

# Fetch two connections (exhausting the pool)
# and get the connection ID for each
connection1 = pool.get_session()
id1_row = connection1.sql(cnxid_sql).execute().fetch_one()
print(fmt_id.format(1, id1_row["ConnectionID"]))

connection2 = pool.get_session()
id2_row = connection2.sql(cnxid_sql).execute().fetch_one()
print(fmt_id.format(2, id2_row["ConnectionID"]))

# Attempt to get a third connection
time = datetime.now().strftime('%H:%M:%S')
print("Starting to request connection 3 .........: {0}".format(time))
try:
    connection3 = pool.get_session()
except mysqlx.errors.PoolError as err:
    print("Unable to fetch connection 3 .............: {0}".format(err))
time = datetime.now().strftime('%H:%M:%S')
print("Request for connection 3 completed .......: {0}".format(time))

# Return connection 1 to the pool
connection1.close()

# Try to get connection 3 again
connection3 = pool.get_session()
id3_row = connection3.sql(cnxid_sql).execute().fetch_one()
print(fmt_id.format(3, id3_row["ConnectionID"]))

# Close all connections
pool.close()

The first thing to notice is the client_options dictionary passed to mysqlx.get_client(). In this case all four options are set, but you only need to set those where you do not want the default value. The settings allow for at most two connections in the pool, a request for a session is allowed to take at most 3 seconds, and idle sessions should be disconnected after 60 seconds.

After the connection pool (client) is created, two sessions are fetched from the pool. When a third session is requested, it triggers a PoolError exception as the pool is exhausted. The try/except block around the third get_session() call shows how to handle the exception.

Finally, the first connection is returned to the pool, which allows the third request for a session to complete.

An example of the output is (the connection IDs and timestamps will differ from execution to execution):

Connection 1 ID ..........................: 239
Connection 2 ID ..........................: 240
Starting to request connection 3 .........: 18:23:14
Unable to fetch connection 3 .............: pool max size has been reached
Request for connection 3 completed .......: 18:23:44
Connection 3 ID ..........................: 241

From the output you can see that the first attempt to fetch connection 3 takes three seconds before it times out and raises the exception – just as specified by the queue_timeout setting.

What may surprise you (at least if you have studied Chapter 5 of MySQL Connector/Python Revealed) is that once connection 1 has been returned to the pool and connection 3 fetches the session again, it has a new connection ID. Does that mean the pool is not working? No, the pool is working all right. However, the X Plugin (the plugin in MySQL Server that handles connections using the X Protocol) works differently than the connection handling for the traditional MySQL protocol.

The X Plugin distinguishes between the connection to the application and the thread inside MySQL. So, when the session is returned to the pool and reset (to set the session variables back to the defaults and remove user variables), the thread inside MySQL is removed. As MySQL uses threads, it is cheap to create a new thread as it is needed, so this is not a performance problem. However, the connection to the application is maintained. This means you save the expensive steps of creating the connection and authenticating, while the thread only exists inside MySQL while the connection is out of the pool.
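If you want to see the reset behavior yourself, a small sketch along these lines sets a user variable, returns the session to the pool, and inspects the next session handed out (it reuses the connect_args and client_options from the example above):

# Sketch: a session handed back by the pool has been reset - the user
# variable is gone and the connection ID inside MySQL is new.
import mysqlx

pool = mysqlx.get_client(connect_args, client_options)

session = pool.get_session()
session.sql("SET @note = 'hello'").execute()
row = session.sql("SELECT CONNECTION_ID() AS id, @note AS note").execute().fetch_one()
print(row["id"], row["note"])      # e.g. 239 hello
session.close()                    # returned to the pool and reset

session = pool.get_session()
row = session.sql("SELECT CONNECTION_ID() AS id, @note AS note").execute().fetch_one()
print(row["id"], row["note"])      # new connection ID, note is None
pool.close()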

If you are interested in learning more about MySQL Connector/Python 8 including how to use the X DevAPI, then I am the author of MySQL Connector/Python Revealed (Apress). It is available from Apress, Amazon, and other book stores.

Categories: Web Technologies

MySQL 8.0.13: Change Current Password Policy

Planet MySQL - Wed, 11/14/2018 - 23:30

We have introduced a new policy for you to enforce on your non-privileged users. It requires their current password at the time they set a new password. It is optional and off by default. You can control it globally (for all non-privileged users) or on a per-user basis.…
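As a quick illustration of how the policy might be used (a sketch based on my reading of the MySQL 8.0.13 syntax; the account names and passwords are placeholders):

# Sketch: the "current password required" policy in MySQL 8.0.13.
# Account names and passwords are placeholders; the statements reflect
# the documented syntax as I understand it.
import mysql.connector

# As an administrator: enable the policy globally or for a single account.
admin = mysql.connector.connect(host="127.0.0.1", user="root", password="RootPwd!")
cur = admin.cursor()
cur.execute("SET PERSIST password_require_current = ON")           # global, all non-privileged users
cur.execute("ALTER USER 'appuser'@'%' PASSWORD REQUIRE CURRENT")   # per-user override
cur.close()
admin.close()

# As the non-privileged user: the current password must be supplied
# through the REPLACE clause when setting a new one.
user = mysql.connector.connect(host="127.0.0.1", user="appuser", password="OldPwd!")
cur = user.cursor()
cur.execute("ALTER USER USER() IDENTIFIED BY 'NewPwd!' REPLACE 'OldPwd!'")
cur.close()
user.close()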

Categories: Web Technologies


MySQL NDB Cluster row level locks and write scalability

Planet MySQL - Wed, 11/14/2018 - 23:08
MySQL NDB Cluster uses row level locks instead of a single shared commit lock in order to prevent inconsistency in simultaneous distributed transactions. This gives NDB a great advantage over all other MySQL clustering solutions and is one reason behind cluster’s unmatched ability to scale both reads and writes. 
NDB is a transactional data store. The lowest and only isolation level available in NDB is Read Committed. There are no dirty reads in NDB and only committed rows can be read by other transactions. 
All write transactions in NDB will result in exclusive row locks of all individual rows changed during the transaction. Any other transaction is allowed to read any committed row independent of their lock status. Reads are lock-free reads.
The great advantage is that committed reads in NDB never block during writes to the same data, and the latest committed changes are always read. A select doesn't block concurrent writes, and vice versa.
This is extremely beneficial for write scalability. No shared global commit synchronization step is needed to ensure transaction consistency across distributed data store instances. Each instance instead handles its own row locks - usually only locking a few out of many rows. Due to NDB’s highly parallel and asynchronous design many rows can be committed in parallel within a distributed instance and across multiple instances. 
As a side effect interleaved reading transactions can read committed rows of write transactions before all rows of that writing transaction are committed. The set of rows returned may represent a partially committed transaction and not a snapshot of a single point in time. Pending transactions never change the state of the data before they are committed. All rows of committed transactions are atomically guaranteed to be network durable and consistent in all distributed instances of the data.
If more consistent reads are needed, then the read locks taken by SELECT ... LOCK IN SHARE MODE / SELECT ... FOR UPDATE can be used to get a serialized view of a set of rows.
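For illustration, a locking read inside a transaction might look like the sketch below (the table, columns and connection details are hypothetical):

# Sketch: a locking read to get a serialized view of a set of rows.
# Table, columns and connection details are hypothetical.
import mysql.connector

cnx = mysql.connector.connect(host="127.0.0.1", user="appuser",
                              password="AppPwd!", database="shop")
cnx.start_transaction()
cur = cnx.cursor()

# The matching rows stay locked until commit/rollback, so no concurrent
# write can change them while we work with the result set.
cur.execute("SELECT id, qty FROM stock WHERE warehouse = %s FOR UPDATE", ("eu-1",))
rows = cur.fetchall()

# ... work with the rows ...
cnx.commit()
cur.close()
cnx.close()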

Categories: Web Technologies

MySQL Master Replication Crash Safety Part #2: lagging slaves

Planet MySQL - Wed, 11/14/2018 - 21:59
This is Part #2 of the MySQL Master Replication Crash Safety series.  In the previous post, we explored the consequence of reducing durability on masters with slaves using legacy file+position replication.  The consequences are data inconsistencies with a clear warning sign: the slaves stop replicating and report an error.  In this post, we extend our understanding of the impact of running a
Categories: Web Technologies

How Not to do MySQL High Availability: Geographic Node Distribution with Galera-Based Replication Misuse

Planet MySQL - Wed, 11/14/2018 - 15:00

Let's talk about MySQL high availability (HA) and synchronous replication once more.
It's part of a longer series on some high availability reference architecture solutions over geographically distributed areas.
Part 1: Reference Architecture(s) for High Availability Solutions in Geographic Distributed Scenarios: Why Should I Care?
Part 2: MySQL High Availability On-Premises: A Geographically Distributed Scenario

The Problem

A question I often get from customers is: How do I achieve high availability if I need to spread my data across different, distant locations? Can I use Percona XtraDB Cluster? Percona XtraDB Cluster (PXC), mariadb-cluster or MySQL-Galera are very stable and well-known solutions to improve MySQL high availability using an approach based on a multi-master, data-centric, synchronous data replication model. This means that each data node composing the cluster MUST see the same data at a given moment in time. Information/transactions must be stored and visible synchronously on all the nodes at a given time. This is defined as a tightly coupled database cluster.

This level of consistency comes with a price, which is that nodes must physically reside close to each other and cannot be geographically diverse. This is by design (in all synchronous replication mechanisms), and it has had to be clarified over and over throughout the years. Despite that, we still see installations that span geographic locations, including AWS Regions. We still see some solutions breaking the golden rule of proximity, and trying to break the rules of physics as well. The problem/mistake is no different for solutions based on-premises or in the cloud (for whatever cloud provider).

Recently I had to design a couple of customer solutions based on remote geographic locations. In both cases, the customer was misled by an incorrect understanding of how the synchronous solution works, and by a lack of understanding of the network layer. I decided I needed to cover this topic again, as I have done previously in Galera geographic replication and Effective way to check network connection in a geographically distributed environment.

What Happens When I Put Things on the Network?

Well, let's start with the basics. While light travels at 300 million meters per second, the propagation of the electric fields or electric signaling is slower than that. The real speed depends on the medium used to transmit it, but it can be said that the real speed normally spans from 0% to 99% of light speed (depending on the transmission medium). This means that in optimal conditions the signal travels at approximately 299.72 km per millisecond, in good/mid conditions about half that at 149.86 km per millisecond, and in bad conditions it could be 3 km per millisecond or less. To help you understand, the distance between Rome (Italy) and Mountain View (California) is about 10,062 km. At light speed it will take 33.54ms. In good conditions (90% of light speed) the signal will take 37.26ms to reach Mountain View, and in less optimal conditions it can easily double to 74.53ms.

Keep in mind this is the electric field propagation speed: optimal conditions with no interruption, re-routing and so on. Reality will bring all kinds of interruptions, repeaters and routing. All the physics above works as a baseline. On top of this, each human construct adds functionality, flexibility and (unfortunately) overhead – leading to longer times and slower speeds. The final speed will be different from the simple propagation of the electric fields. It will include the transmission time of complex signaling using the ICMP protocol, or even higher delays with the use of a very complex protocol like TCP/IP, which includes handshaking, packet rerouting, re-sending and so on. On top of that, when sending things over the internet, we need to realize that it is very improbable we will be the only user sending data over that physical channel. As such, whatever we have "on the road" will need to face bandwidth limitations, traffic congestion and so on.

I had described the difference between protocols (ICMP vs. TCP/IP) here, clarifying how the TCP/IP scenario is very different from using a different protocol like ICMP, or from the theoretical approach. What it all means is that we cannot trust the theoretical performance. We must move to a more empirical approach. But we must understand the right empirical approach or we will be misled.
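As a quick back-of-the-envelope check of the figures above, a few lines of Python reproduce the Rome to Mountain View estimates (a sketch using the distance and speed fractions quoted in the text):

# Sketch: theoretical one-way propagation delay for the figures quoted above.
LIGHT_SPEED_KM_PER_MS = 299.72           # approximate speed of light, km per millisecond

def one_way_delay_ms(distance_km, fraction_of_c):
    # Propagation delay when the signal travels at a fraction of light speed.
    return distance_km / (LIGHT_SPEED_KM_PER_MS * fraction_of_c)

rome_to_mountain_view_km = 10062
print(round(one_way_delay_ms(rome_to_mountain_view_km, 1.00), 2))   # ~33.6 ms at light speed
print(round(one_way_delay_ms(rome_to_mountain_view_km, 0.90), 2))   # ~37.3 ms in good conditions
print(round(one_way_delay_ms(rome_to_mountain_view_km, 0.45), 2))   # ~74.6 ms in less optimal conditions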

An Example

I recently worked on a case where a customer had two data centers (DC) at a distance of approximately 400 km, connected with "fiber channel". Server1 and Server2 were hosted in the same DC, while Server3 was in the secondary DC. Their ping to Server3, with the default packet size, was ~3ms. Not bad at all, right? We decided to perform some serious tests, running multiple sets of tests with netperf for many days and collecting data. We also used the data to perform additional fine tuning on the TCP/IP layer AND at the network provider. The results produced a common (for me) scenario (not so common for them):

 


The red line is the first set of tests BEFORE we optimized. The yellow line is the results after we optimized. The above graph reports the number of transactions/sec (AVG) we could run against the different dimensions of the dataset, changing the destination server. The full roundtrip was calculated. It is interesting to note that while the absolute numbers were better in the second (yellow) tests, this was true only for a limited dataset dimension. The larger the dataset, the higher the impact. This makes sense if you understand how the TCP/IP stack works (the article I mentioned above explains it). But what surprised them were the numbers. Setting aside the extreme cases and focusing instead on the intermediate case, we saw that shifting from a 48K dataset dimension to 512K hugely dropped the performance. The drop in executed transactions was from 2299 to 219 on Server2 (same DC) and from 1472 to 167 on Server3 (different DC). Also, note that Server3 managed ~35% fewer transactions than Server2 from the start, given the latency. Latency moved from a more than decent 2.61ms to 27.39ms for Server2, and from 4.27ms to 37.25ms for Server3.

 


37ms of latency is not very high. If that had been the top limit, it would have worked. But it was not. In the presence of the optimized channel, with fiber and so on, when the tests were hitting heavy traffic the congestion was such that it compromised the data transmitted. It hit a latency of >200ms for Server3. Note those were spikes, but if you are in the presence of a tightly coupled database cluster, those events can become failures in applying the data and can create a lot of instability. Let me briefly recap the situation for Server3: we had two data centers.

  • The connection between the two was with fiber.
  • Distance ~400 km, but we MUST consider the distance to go and come back. This is because in real communication we have not only the send, but also the receive packets.
  • Theoretical time at light speed = 2.66ms (2 ways)
  • Ping = 3.10ms (signal traveling at ~80% of light speed), as if the signal had traveled ~930 km (full roundtrip of 800 km)
  • TCP/IP best at 48K = 4.27ms (~62% of light speed), as if the signal had traveled ~1,281 km
  • TCP/IP best at 512K = 37.25ms (~2.6% of light speed), as if the signal had traveled ~11,175 km

Given the above, we have from ~20%-40% up to ~97% loss from the theoretical transmission rate. Keep in mind that when moving from a simple signal to a heavier and concurrent transmission, we also have to deal with the bandwidth limitation. This adds additional cost. All in only 400 km of distance. This is not all. Within the 400 km we were also dealing with data congestion, and in some cases the tests failed to provide the level of accuracy we required due to transmission failures and too many packet retries. For comparison, consider Server2, which is in the same DC as Server1. Let's see:

  • Ping = 0.027ms, as if the signal had traveled ~11 km at light speed
  • TCP/IP best at 48K = 2.61ms, as if it had traveled ~783 km
  • TCP/IP best at 512K = 27.39ms, as if it had traveled ~8,217 km
  • We had a performance loss, but the congestion issues and accuracy failures did not happen.

You might say, "But this is just a single case, Marco, you cannot generalize from this behavior!" You would be right IF that were true (but it is not).
The fact is, I have done this level of checks many times and in many different environments. On-premises or using the cloud. Actually, in the cloud (AWS), I had even more instability.
The behavior stays the same. Please test it yourself (it is not difficult to use netperf).

Just do the right tests, with RTT and multiple requests (see the note at the end of the article).
Anyhow, what I know from the tests is that when working INSIDE a DC, even with some significant overhead due to the TCP/IP stack (and maybe wrong cabling), I do not encounter the same congestion or bandwidth limits I have when dealing with an external DC.
This allows me to have more predictable behavior and tune the cluster accordingly.

Tuning that I cannot do to cover the transmission to Server3, because of unpredictable packet behavior and spikes. >200ms is too high and can cause delivery failures.
If we apply the given knowledge to the virtually-synchronous replication we have with Galera (Percona XtraDB Cluster), we can see that we are hitting the problems well explained in Jay's article Is Synchronous Replication right for your app? There, he explains Callaghan's Law: [In a Galera cluster] a given row can't be modified more than once per RTT.
On top of that, when talking of geographically disperse solutions, we have TCP/IP magnifying the effect at the writeset transmission/latency level.
This causes nodes NOT residing on the same physically contiguous network to delay all the certification-commit phases for some amount of time X. When X is predictable, it may correspond to anywhere between 80% and 3% of light speed for the given distance.
But you can't predict the transmission time of a set of data split into several datagrams and then sent over the internet when using TCP/IP.
So we cannot use the X range as a trustworthy measure. The effect is an unpredictable delay, and this is read as a network issue by Galera.
The node can be evicted from the cluster. Which is exactly what happens, and what we experience when dealing with some "BAD" unspecified network issue.
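To put a rough number on Callaghan's Law, the small sketch below uses the RTTs measured in the tests above to show the ceiling it places on how often a single row can be modified:

# Sketch: Callaghan's Law - a given row can't be modified more than once per RTT.
# RTT values taken from the measurements discussed above.
def max_row_modifications_per_sec(rtt_ms):
    return 1000.0 / rtt_ms

for label, rtt_ms in [("same DC, 48K", 2.61), ("other DC, 48K", 4.27),
                      ("other DC, 512K", 37.25), ("other DC, spike", 200.0)]:
    print("{0:<16} RTT {1:6.2f} ms -> at most ~{2:.0f} modifications/sec per row"
          .format(label, rtt_ms, max_row_modifications_per_sec(rtt_ms)))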

This means that whenever we need to use a solution based on a tightly coupled database cluster (like PXC), we cannot locate our nodes at a distance where the largest RTT is longer than the shortest commit period our application needs.
If our application must apply the data within a maximum of 200ms in one of its functions, while our min RTT is 2ms and our max RTT is 250ms, we cannot use this solution, period. To be clear, locating a node in another geographic location, and as such using the internet to transmit/receive data, is by default a NO GO given the unpredictability of that network link.

  I doubt that nowadays we have many applications that can wait an unpredictable period to commit their data.
The only case where having a geographically distributed node is acceptable is if you accept commits happening in undefined periods of time and with possible failures.

What Is the Right Thing To Do?

The right solution is easier than the wrong one, and there are already tools in place to make it work efficiently. Say you need to define your HA solution between the East and West Coast, or between Paris and Frankfurt. First of all, identify the real capacity of your network in each DC.
Then build a tightly coupled database cluster in location A and another tightly coupled database cluster in the other location B. Then link them using ASYNCHRONOUS replication.
Finally, use a tool like Replication Manager for Percona XtraDB Cluster to automatically manage asynchronous replication failover between nodes.
On top of all of that use a tool like ProxySQL to manage the application requests.
The full architecture is described here. 
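As a rough sketch of the asynchronous link between the two locations (the hostnames, credentials and the use of GTID-based positioning are assumptions for illustration, not part of the reference architecture itself), the designated replica node in location B is pointed at a node in location A with standard MySQL replication:

# Sketch: linking the location-B cluster to the location-A cluster with
# standard asynchronous replication. Hostnames, credentials and the use of
# GTID auto-positioning are assumptions for illustration.
import mysql.connector

replica = mysql.connector.connect(host="pxc-b-node1", user="root", password="RootPwd!")
cur = replica.cursor()
cur.execute("""
    CHANGE MASTER TO
        MASTER_HOST = 'pxc-a-node1',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'ReplPwd!',
        MASTER_AUTO_POSITION = 1
""")
cur.execute("START SLAVE")
cur.close()
replica.close()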

 

Conclusions

The myth of using ANY solution based on tightly coupled database cluster on distributed geographic locations is just that: a myth. It is conceptually wrong and practically dangerous.

It MIGHT work when you set it up, it MIGHT work when you test it, it MIGHT even work for some time in production.

  By definition, it will break, and it will break when it is least convenient. It will break in an unpredictable moment, but because of a predictable reason.

You did the wrong thing by following a myth. Whenever you need to distribute your data over different geographic locations, and you cannot rely on a single physical channel (fiber) to connect the two locations, use asynchronous replication, period!

References

https://github.com/y-trudeau/Mysql-tools/tree/master/PXC
http://www.tusacentral.net/joomla/index.php/mysql-blogs/164-effective-way-to-check-the-network-connection-when-in-need-of-a-geographic-distribution-replication-.html
https://www.percona.com/blog/2013/05/14/is-synchronous-replication-right-for-your-app/

Sample test

#!/bin/bash
test_log=/tmp/results_$(date +'%Y-%m-%d_%H_%M_%S').txt
exec 9>>"$test_log"
exec 2>&9
exec 1>&9
echo "$(date +'%Y-%m-%d_%H_%M_%S')" >&9
for ip in 11 12 13; do
    echo " ==== Processing server 10.0.0.$ip === "
    for size in 1 48 512 1024 4096; do
        echo " --- PING ---"
        ping -M do -c 5 10.0.0.$ip -s $size
        echo " ---- Record Size $size ---- "
        netperf -H 10.0.0.$ip -4 -p 3307 -I 95,10 -i 3,3 -j -a 4096 -A 4096 -P 1 -v 2 -l 20 -t TCP_RR -- -b 5 -r ${size}K,48K -s 1M -S 1M
        echo " ---- ================= ---- "
    done
    echo " ==== ----------------- === "
done
Categories: Web Technologies

MySQL High Availability On-Premises: A Geographically Distributed Scenario

Planet MySQL - Wed, 11/14/2018 - 15:00


In this article, we'll look at an example of an on-premises, geographically distributed MySQL high availability solution. It's part of a longer series on some high availability reference architecture solutions over geographically distributed areas.
Part 1: Reference Architecture(s) for High Availability Solutions in Geographic Distributed Scenarios: Why Should I Care?
Percona consulting's main aim is to identify simple solutions to complex problems. We try to focus on identifying the right tool, a more efficient solution, and what can be done to make our customers' lives easier. We believe in doing the work once, doing it well, and having more time afterward for other aspects of life. In our journey, we often receive requests for help – some simple, some complicated.

Scenario

The company "ACME Inc." is moving its whole business from a monolithic application to a distributed application, split into services. Each service deals with requests independently of the others. Some services follow the tightly-bounded transactional model, and others work/answer asynchronously. Each service can access the data storage layer independently. In this context, ACME Inc. identified the need to distribute the application services over wide geographic regions, with each region achieving scale independently. The identified regions are:

  • North America
  • Europe
  • China

ACME Inc. is also aware of the fact that different legislation applies in each region. As such, each region requires independent information handling about sales policies, sales campaigns, customers, orders, billing and localized catalogs, but will share the global catalog and some historical aggregated data. While most of the application services will work feeding and reading local distributed caches, the basic data related to the catalog, sales and billing is based on an RDBMS. Historical data is instead migrated to a "Big Data" platform, and aggregated data is elaborated and pushed to a DWH solution at HQ. The application components are developed using multiple programming languages, depending on the service. The RDBMS identified by ACME Inc. in collaboration with the local authorities was MySQL-oriented. Several other solutions were considered, such as:

  • PostgreSQL
  • Oracle DB
  • MS SQL server

We excluded closed-source RDBMSs given that some countries imposed a specific audit plugin, and this plugin was only available for the mentioned platforms. The cost of parallel development and subsequent maintenance in the case of RDBMS diversification was too high. As such, all the regions must use the same major RDBMS component. We excluded PostgreSQL given that, compared to it, MySQL adoption and utilization cases were higher, and MySQL had a well-defined code producer. Finally, the Business Continuity team of ACME Inc. had defined an ITSC (Information Technology Service Continuity) plan that defined the RPO (Recovery Point Objective), the RTO (Recovery Time Objective) and system redundancy. That's it. To fulfill the ITSC plan, each region must have the critical systems redundantly replicated in the same region, but not in close proximity.

Talking About the Components

This is a not-so-uncommon scenario, and it also presents a lot of complexity if you try to address it with one solution. But let's analyze it and see how we can simplify the approach while still meeting the needs and requirements of ACME Inc. When using MySQL-based solutions, the answer to "what should we use?" is: use what best fits your business needs. The "nines" availability reference table for the MySQL world (and most RDBMSs) can be summarized as follows (the figure in parentheses is the downtime per year):

90.000% (36 days) MySQL Replication
99.900% (8 hours) Linux Heartbeat with DRBD (Obsolete DRBD)
99.900% (8 hours) RHCS with Shared Storage (Active/Passive)
99.990% (52 minutes) MHA/Orchestrator with at least three nodes
99.990% (52 minutes) DRBD and Replication (Obsolete DRBD)
99.995% (26 minutes) Multi-Master (Galera replication) 3 node minimum
99.999% (5 minutes) MySQL Cluster
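A quick way to sanity-check the downtime figures in the table is to convert each availability percentage into allowed downtime per year (a small sketch):

# Sketch: convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_pct):
    return (1 - availability_pct / 100.0) * MINUTES_PER_YEAR

for pct in (90.000, 99.900, 99.990, 99.995, 99.999):
    minutes = downtime_minutes_per_year(pct)
    print("{0:7.3f}% -> {1:8.1f} minutes (~{2:.1f} hours) of downtime per year"
          .format(pct, minutes, minutes / 60))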

An expert will tell you that it doesn't always make sense to go for the most "nines" in the list. This is because each solution comes with a tradeoff: the more high availability (HA) you get, the higher the complexity of the solution and of managing it. For instance, the approach used in MySQL Cluster (NDB) makes this solution not suitable for generic utilization. It requires proper analysis of the application needs, data utilization and archiving before being selected. It also requires in-depth knowledge to properly manage the cluster, as it is more complex than other similar solutions. This indirectly makes a solution based on MySQL+Galera replication, the one with the next-highest HA level, a better choice, since it stays close to standard, general-purpose MySQL utilization. This is why MySQL+Galera replication has become, in the last six years, the most used solution for platforms looking for very high HA without the need to diverge from the standard MySQL/InnoDB approach. You can read more about Galera replication: http://galeracluster.com/products/ Read more about Percona XtraDB Cluster. There are several distributions implementing Galera replication, Percona XtraDB Cluster and MariaDB's offerings among them.

*Note that MariaDB Cluster/Server and all related solutions coming from MariaDB have significantly diverged from the MySQL mainstream. This often means that, once migrated to MariaDB, your database will not be compatible with other MySQL solutions. In short, you are locked in to MariaDB. It is recommended that you carefully evaluate the move to MariaDB before making it.

Choosing the Components RDBMS

Our advice is to use Percona XtraDB Cluster (PXC), because at the moment it is one of the most flexible, reliable and compatible solutions. PXC is composed of three main components: Percona Server for MySQL, the Galera replication library (wsrep API) and Percona XtraBackup.

The cluster is normally composed of three nodes or more. Each node can be used as a Master, but the preferred and recommended way is to use one node as a Writer and the others as Readers. Application-wise, accessing the right node can be challenging, since this means you need to be aware of which node is the writer and which are the readers, and be able to shift from one to the other if necessary.

Proxy

To simplify this process, it helps to have an additional component that works as a “proxy” connecting the application layer to the desired node(s). The most popular solutions are:

  • HAProxy
  • ProxySQL

There are several important differences between the two. But in summary, ProxySQL is a Level 7 proxy and is MySQL protocol aware. So, while HAProxy just passes the connection along as a forward proxy (Level 4), ProxySQL is aware of what is going through it and acts as a reverse proxy. With ProxySQL it is possible to decide, based on several parameters, where to send traffic (read/write split and more), what must be stopped, or whether an incoming SQL command should be rewritten. A lot of information is available on the ProxySQL website https://github.com/sysown/proxysql/wiki and on the Percona Database Performance Blog.

Backup/Restore

No RDBMS platform is safe without a well-tested procedure for backup and recovery. The Percona XtraDB Cluster package distribution comes with Percona XtraBackup as the default method for node provisioning. A good backup and restore (B/R) policy starts from the considerations in ACME's ITSC plan: have full and incremental backups, perfectly covering the RPO, and a good recovery procedure to keep the recovery time inside the RTO whenever possible. There are several tools that allow you to plan and execute backup/restore procedures, some coming from vendors and others open source and community-oriented. In respect of being fully open source and community-oriented, in consulting we normally suggest using https://github.com/dotmanila/pyxbackup. Pyxbackup is a wrapper around XtraBackup that helps simplify the B/R operations, including the preparation of a full and incremental set. This helps significantly reduce the recovery time.

Disaster Recovery

Another very important aspect of the ITSC plan is the capacity of the system to survive major disasters. The disaster recovery (DR) solution must be able to act as the main production environment. Therefore, it must be designed and scaled like the main production site in resources. It must be geographically separated, normally by hundreds of kilometers or more. It must be completely independent of the main site. It must be as much as possible in sync with the main production site. While the first three "musts" are easy to understand, the fourth one is often the object of misunderstanding. The concept of being as much in sync with the production site as possible creates confusion when designing HA solutions with Galera replication involved. The most common misunderstanding is the misuse of the Galera replication layer – mainly the conceptual confusion between a tightly coupled database cluster and a loosely coupled database cluster. Any solution based on Galera replication is a tightly coupled database cluster, because the whole idea is to be data-centric, synchronously distributed and consistent. The price is that this solution cannot be geographically distributed. Solutions like standard MySQL replication are instead loosely coupled database clusters, and they are designed to be asynchronous. Given that, the nodes connected by it are completely independent in processing/applying the transactions, and the solution fits perfectly into ANY geographically distributed replication solution. The price is that data on the receiving side might not be up to date with the source at that specific instant. The point is that for the DR site the ONLY valid solution is the asynchronous link (loosely coupled database cluster), because by design and requirement the two sites must be separated by a significant number of kilometers. For a better understanding of why synchronous replication cannot work in a geographically distributed scenario, see "Misuse of Geographic Node distribution with Galera-based replication". In our scenario, the use of Percona XtraDB Cluster helps to create a more robust asynchronous solution. This is because each tightly coupled database cluster, no matter if source or destination, will be seen by the other tightly coupled database cluster as a single entity. What this means is that we can shift from one node to another inside the two clusters, still confident we will have the same data available and the same asynchronous stream passing from one source to the other. To ensure this procedure is fully automated, we add to our architecture the last block: Replication Manager for Percona XtraDB Cluster (https://github.com/y-trudeau/Mysql-tools/tree/master/PXC). RMfP is another open source tool that simplifies and automates failover inside each PXC cluster, such that our asynchronous solution doesn't suffer if the node currently acting as Master fails.

How to Link the Components

Summarizing all the different components of our solution:

  • Application stack
    • Load balancer
    • Application nodes by service
    • Distributed caching
    • Data access service
  • Database stack
    • Data proxy (ProxySQL)
    • RDBMS (Percona XtraDB Cluster)
    • Backup/Restore
      • Xtrabackup
      • Pyxbackup
      • Custom scripts
    • DR
      • Replication Manager for Percona XtraDB Cluster
  • Monitoring
    • PMM (not covered here see <link> for detailed information)

 

In the solution above, we have two locations separated by several kilometers. On top of them, the load balancer(s)/DNS resolution redirects the incoming traffic to the active site. Each site hosts a full application stack, and applications connect to the local ProxySQL. ProxySQL has read/write split enabled to optimize the platform utilization, and is configured to shift writes from one PXC node to another in case of node failure. Asynchronous replication connects the two locations and transmits data from master to slave. Note that with this solution it is possible to have multiple geographically distributed sites. Backups are taken at each site independently, and recovery tests are performed. RMfP oversees and modifies the replication channels in the case of a node failure. Finally, Percona Monitoring and Management (PMM) is in place to perform in-depth monitoring of the whole database platform.


Conclusions

We always look for the most efficient, manageable, user-friendly combination of products, because we believe in providing and supporting the community with simple but efficient solutions. What we have presented here is the most robust and stable high availability solution in the MySQL space (except for MySQL NDB, which we have excluded). It is conceptualized to provide maximum service continuity, with limited bonding between the platforms/sites. It is also a well-tested solution that has been adopted and adapted in many different scenarios where performance and real HA are a must. I have preferred to keep this digression at a high level, given that the details of the implementation have already been discussed elsewhere (see the reference section for more reading). Still, Percona XtraDB Cluster (as any other solution implementing Galera replication) might not fit the final use case. Given that, it is important to understand where it does and doesn't fit. This article is a good summary with examples: Is Synchronous Replication right for your app? Check out the next article on How Not to do MySQL High Availability.

References

https://www.percona.com/blog/2016/06/07/choosing-mysql-high-availability-solutions/

https://dev.mysql.com/doc/mysql-ha-scalability/en/ha-overview.html

https://www.percona.com/blog/2014/11/17/typical-misconceptions-on-galera-for-mysql/

http://galeracluster.com/documentation-webpages/limitations.html

http://tusacentral.net/joomla/index.php/mysql-blogs/170-geographic-replication-and-quorum-calculation-in-mysqlgalera.html

http://tusacentral.net/joomla/index.php/mysql-blogs/167-geographic-replication-with-mysql-and-galera.html

http://tusacentral.net/joomla/index.php/mysql-blogs/164-effective-way-to-check-the-network-connection-when-in-need-of-a-geographic-distribution-replication-.html

http://tusacentral.net/joomla/index.php/mysql-blogs/183-proxysql-percona-cluster-galera-integration.html https://github.com/sysown/proxysql/wiki

Categories: Web Technologies

Migrating to Amazon Aurora: Reduce the Unknowns

Planet MySQL - Wed, 11/14/2018 - 13:28


In this Checklist for Success series, we will discuss reducing unknowns when hosting in the cloud with, and migrating to, Amazon Aurora. These tips might also apply to other database as a service (DBaaS) offerings.

While DBaaS encapsulates a lot of the moving pieces, it also means relying on this approach for your long-term stability. This encapsulation is a two-edged sword that takes away your visibility into performance outside of the service layer.

Shine a Light on Bad Queries

Bad queries are one of the top causes of downtime, and Aurora doesn't protect you against them. Performing a query review as part of a routine health check of your workload helps ensure that you do not miss looming issues. It also helps you predict the workload at specific times and events. For example, if you already know your top three queries tend to increase exponentially and are read bound, you can easily decide to increase the number of read replicas on your cluster.

Having historical query performance data makes this task easier and less stressful. While historical data allows you to look backward, it's also very valuable to have a tool that lets you look at active incident scenarios in progress. Knowing what queries are running when you are suffering from performance issues reduces guesswork and helps solve problems faster.

Pick Your Tool(s)

There are a number of ways you can achieve query performance excellence. Performance Insights is a built-in offering from AWS that is tightly integrated with RDS. It has a seven-day free retention period, with an extra cost beyond that. It is available for each instance in a cluster. Performance Insights takes most of its metrics from the Performance_Schema. It includes specific metrics from the operating system that may not be available from regular Cloudwatch metrics.

Query Analytics from Percona Monitoring and Management (PMM) also uses the same source as Performance Insights: the Performance Schema. Unlike Performance Insights though, PMM is deployed separately from the cluster. This means you can keep your metrics even if you keep recycling your cluster instances. With PMM, you can also consolidate your query reviews from a single location, and you can monitor your cluster instances from the same location – including an extensive list of performance metrics.

You can enable Performance Insights with the default seven-day retention period, and then combine it with PMM for a longer retention period across all your cluster instances. Note, though, that PMM may incur a cost for the additional API calls used to retrieve Performance Insights metrics.
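For illustration, enabling Performance Insights on an existing instance might look like the sketch below with boto3 (the region and instance identifier are placeholders, and the parameters reflect the RDS ModifyDBInstance API as I understand it):

# Sketch: enable Performance Insights with the default 7-day retention
# on an existing instance. Region and instance identifier are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-cluster-instance-1",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,   # days
    ApplyImmediately=True,
)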

Outside of the built-in and open source alternative, VividCortex, NewRelic and Datadog are excellent tools that do everything we discussed above and more. NewRelic, for example, allows you to take a good view of the database, application and external requests timing. This, in my opinion, is so very valuable.

Bad queries are not the only potential unknowns. Deleted rows, dropped tables, crippling schema changes, and even AZ/Region failures are realities in the cloud. We will discuss them next! Stay "tuned" for part two.

Meanwhile, we’d like to hear your success stories in Amazon Aurora in the comments below!

Categories: Web Technologies
