emGee Software Solutions Custom Database Applications


Web Technologies

Guide to Managing State in Vue.js

Echo JS - Thu, 06/28/2018 - 15:32
Categories: Web Technologies

How to create a logo that responds to its own aspect ratio

CSS-Tricks - Thu, 06/28/2018 - 14:16

One of the cool things about <svg> is that it's literally its own document, so @media queries in CSS inside the SVG are based on its viewport rather than the HTML document that likely contains it.
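As a sketch of that behavior (the element and breakpoint values here are illustrative, not from any of the linked experiments), an SVG can carry its own styles whose media queries respond to the SVG's viewport:

```html
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <style>
    .tagline { display: none; }
    /* Applies when the SVG itself, not the page, is at least 150px wide */
    @media (min-width: 150px) {
      .tagline { display: block; }
    }
  </style>
  <circle cx="50" cy="50" r="40" />
  <text class="tagline" x="100" y="55">Shown only when wide</text>
</svg>
```

Embed the same file at different sizes and the tagline appears or disappears based on the SVG's own width, regardless of the HTML page's viewport.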

This unique feature has let people play around for years. Tim Kadlec experimented with SVG formats and which ones respect the media queries most reliably. Sara Soueidan experimented with that a bunch more. Jake Archibald embedded a canvas inside and tested cross-browser compatibility that way. Estelle Weyl used that ability to do responsive images before responsive images.

Another thing that has really tripped people's triggers is using that local media query stuff to make responsive logos. Most famously Joe Harrison's site, but Tyler Sticka, Jeremy Frank, and Chris Austin all had a go as well.

Nils Binder has the latest take. Nils' take is especially clever in how it uses <symbol>s referencing other <symbol>s for extra efficiency, and min-aspect-ratio media queries rather than magic-number widths.

For the record, we still very much need container queries for HTML elements. I get that it's hard, but the difficulty of implementation and usefulness are different things. I much prefer interesting modern solutions over trying to be talked out of it.


The post How to create a logo that responds to its own aspect ratio appeared first on CSS-Tricks.

Categories: Web Technologies

Improving Page Load Performance: Pingdom, YSlow and GTmetrix - SitePoint PHP

Planet PHP - Thu, 06/28/2018 - 11:00

Optimizing websites for speed is a craft, and each craft requires tools. The most-used website optimization tools are GTmetrix, YSlow and Pingdom Tools.

GTmetrix is a rather advanced tool that offers a lot on its free tier, but it also offers premium tiers. If you sign up, you can compare multiple websites, multiple versions of the same website, tested under different conditions, and save tests for later viewing.

YSlow is still relevant, although its best days were those when Firebug ruled supreme among the browser inspectors. It offers a Chrome app and other implementations --- such as add-ons for Safari and Opera, a bookmarklet, an extension for PhantomJS, and so on.

For advanced users, PhantomJS integration means that one could, for example, automate the testing of many websites --- hundreds or thousands --- and export the results into a database.

YSlow's Ruleset Matrix has for a number of years been a measuring stick for website performance.

Pingdom Tools is a SaaS platform that offers monitoring and reporting of website performance, and it has strengthened its market position in recent years. Its free tier, which is comparable to GTmetrix and YSlow, also offers a DNS health check and website speed testing.

For the purposes of this article, we purchased a fitting domain name --- ttfb.review --- and installed Drupal with some demo content on it. We also installed WordPress on wp.ttfb.review, and demo installations of Yii and Symfony on their respective subdomains.

We used the default WordPress starting installation. For Drupal, we used the Devel and Realistic Dummy Content extensions to generate demo content. For Symfony we used the Symfony demo application, and for Yii we used the basic application template.

This way, we'll be able to compare these installations side-by-side, and point out the things that deserve attention.

Please be aware that these are development-level installations, used only for demonstration purposes. They aren't optimized for running in production, so our results are likely to be subpar.

The post Improving Page Load Performance: Pingdom, YSlow and GTmetrix appeared first on SitePoint.

Categories: Web Technologies

How NOT to Monitor Your Database

Planet MySQL - Thu, 06/28/2018 - 10:24

Do you have experience putting out backend database fires? What are some things you wish you had done differently? Proactive database monitoring is more cost-efficient, manageable, and sanity-saving than reactive monitoring. We reviewed some of the most common mistakes - too many log messages, metric “melting pots,” retroactive changes, incomplete visibility, undefined KPIs - and put together an action plan on how to prevent them. Based on our experience, we've listed the top five biggest (and preventable!) database monitoring pitfalls.

Log Levels

There never seem to be enough logging levels to accurately capture the desired granularity and relevance of a log message. Is it INFO, TRACE, or DEBUG? What if it’s DEBUG but it’s for a condition we should WARN about? Is there really a linear hierarchy here? If you’re like most people, you’ve seen at least one attempt to extend a widely available logging system with even more custom levels on top of the standard ones. There is a good argument that there should really be only two types of log messages: those useful for writing and debugging the code, and those useful for operating it. Dave Cheney has a good blog post about this differentiation.
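A minimal sketch of that two-audience split (the logger names and function are mine, purely for illustration): one logger for people changing the code, one for people running it, instead of a ladder of levels:

```python
import logging

# "dev" messages help write and debug the code; "ops" messages help operate it.
dev = logging.getLogger("myapp.dev")  # roughly DEBUG: for developers
ops = logging.getLogger("myapp.ops")  # roughly INFO: for operators

logging.basicConfig(level=logging.DEBUG)

def handle_request(payload):
    dev.debug("raw payload: %r", payload)   # developer detail, safe to drop in prod
    if not payload:
        ops.info("empty request rejected")  # operational fact, always kept
        return None
    return payload.upper()

print(handle_request("hello"))  # HELLO
```

The point is that each message has exactly one audience, so there is no agonizing over whether a condition is TRACE, DEBUG, or a custom level in between.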

Mixed Status and Configuration Variables

Many systems don’t distinguish between status variables, which signal the system’s state, and configuration variables, which are inputs to the system’s operation. For example, in both MySQL and Redis, the commands to get system status will return mixtures of configuration variables and status metrics. Such a metrics “melting pot” is a very common problem that usually requires custom code or exception lists (blacklist/whitelist) to identify which variables are what. 
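A sketch of the kind of exception-list code the paragraph describes (the variable names are illustrative and far from a complete list for either system):

```python
# Some systems return configuration and status variables in one undifferentiated
# list, so consumers maintain a manual whitelist of "this one is configuration".
CONFIG_WHITELIST = {"maxmemory", "appendonly", "innodb_buffer_pool_size"}

def split_metrics(variables):
    """Split a mixed dict of variables into (status_metrics, config_values)."""
    status, config = {}, {}
    for name, value in variables.items():
        (config if name in CONFIG_WHITELIST else status)[name] = value
    return status, config

mixed = {"maxmemory": "0", "used_memory": "1024", "uptime_in_seconds": "99"}
status, config = split_metrics(mixed)
```

Every new server version that adds variables forces an update to the whitelist, which is exactly the maintenance burden a clean status/config separation would avoid.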

Breaking Backwards Compatibility

If you change the meaning or dimensions of a metric, ideally you should leave the old behavior unchanged and introduce a replacement alongside it. Failure to do this causes a lot of work for other systems. For example, in MySQL, the SHOW STATUS command was changed to include connection-specific counters by default, with the old system-wide global counters accessible via a different query syntax. This change was just a bad decision, and it caused an enormous amount of grief. Likewise, the meaning of MySQL’s “Questions” status variable was changed at one point, and the old behavior was available in a new variable called “Queries.” Essentially, they renamed a variable and then introduced a new, different variable with the same name as the old one. This change also caused a lot of confusion. Don’t do this.
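For illustration (exact counter values and availability depend on server version), the scope split described above looks like this in MySQL:

```sql
-- Session-scoped counters (the default in affected versions):
SHOW SESSION STATUS LIKE 'Questions';
-- The old system-wide counters now need an explicit scope:
SHOW GLOBAL STATUS LIKE 'Questions';
-- And 'Queries' now counts what 'Questions' used to include:
SHOW GLOBAL STATUS LIKE 'Queries';
```

Any monitoring script written before the change silently started graphing per-connection numbers instead of server-wide ones, which is why such changes should ship as new names, not redefined old ones.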

Incomplete Visibility

Again, the easiest example of this is in MySQL, which has had a SHOW VARIABLES command for many years. Most, but not all, of the server’s command line options had identically named variables visible in the output of this command. But some were missing entirely, and others were present but under names that didn’t match.

Missing KPIs

The list of crucial metrics for finding and diagnosing performance issues isn’t that large. Metrics such as utilization, latency, queue length, and the like can be incredibly valuable, and can be computed from fundamental metrics, if those are available. For an example, see the Linux /proc/diskstats metrics, which include values that you can analyze with queueing theory, as illustrated on Baron’s personal blog. But you’d be surprised how many systems don’t have any way to inspect these key metrics, because people without much knowledge of good monitoring built the systems. For example, PostgreSQL has a standard performance counter for transactions, but not for statements, so if you want to know how many queries (statements) per second your server is handling, you have to resort to much more complex alternatives. This lack of a basic performance metric (throughput) is quite a serious oversight.
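For example, disk utilization can be derived from two samples of the cumulative io_ticks counter in /proc/diskstats (field 13, milliseconds spent doing I/O); the sample numbers below are made up:

```python
def utilization(io_ticks_ms_t0, io_ticks_ms_t1, interval_ms):
    """Fraction of the interval the device was busy, from cumulative counters."""
    return (io_ticks_ms_t1 - io_ticks_ms_t0) / interval_ms

# Two samples taken 1000 ms apart: the device accumulated 250 ms of busy time,
# so it was roughly 25% utilized over that second.
busy = utilization(181_000, 181_250, 1000)
print(f"{busy:.0%}")  # 25%
```

This is the pattern behind most derived KPIs: sample the fundamental counters, subtract, divide by the interval. When a system exposes no statement counter at all, as in the PostgreSQL example, there is nothing to subtract.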

These are just some don’ts for developing and monitoring database applications. Interested in learning some of the do’s? Download the full eBook, Best Practices for Architecting Highly Monitorable Applications.

Categories: Web Technologies

Detecting Incompatible Use of Spatial Functions before Upgrading to MySQL 8.0

MySQL Server Blog - Thu, 06/28/2018 - 10:01

There are many changes to spatial functions in MySQL 8.0:

The first two are failing cases.…

Categories: Web Technologies


101 Switching Protocols - Evert Pot

Planet PHP - Thu, 06/28/2018 - 08:00

101 Switching Protocols is a status code used by a server to indicate that the TCP connection is about to be used for a different protocol.

The best example of this is in the WebSocket protocol. WebSocket uses an HTTP handshake when creating the connection, mainly for security reasons.

When a WebSocket client starts the connection, the first few bytes will look like this:

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Origin: http://example.com
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13

If the server supports WebSocket, it will respond with 101 Switching Protocols and then switch to the WebSocket protocol:

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Protocol: chat
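The Sec-WebSocket-Accept value is not arbitrary: per RFC 6455, the server appends a fixed GUID to the client's Sec-WebSocket-Key, takes the SHA-1 digest, and base64-encodes it. A quick sketch:

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed magic value from RFC 6455

def websocket_accept(client_key: str) -> str:
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The key from the handshake request yields the accept value from the response:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This round trip proves to the client that the server actually speaks WebSocket rather than being a plain HTTP server echoing headers back.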

HTTP/2 also uses this mechanism to upgrade from an HTTP/1.1 connection to a non-TLS HTTP/2 connection. See RFC 7540.

Categories: Web Technologies

Empower Through Web Development

CSS-Tricks - Thu, 06/28/2018 - 07:56

As a person with a disability, I appreciate the web and modern-day computing for their many affordances. The web is a great place to work and share and connect. You can make a living, build your dream, and speak your mind.

It’s not easy, though. Beginners struggling with the box model often take to Google in search of guidance (and end up at this very website). More seasoned developers find themselves hopping from framework to framework trying to keep up. We experience plenty of late nights, console logs, and rage quits.

It’s in times like these that I like to remember why I’m doing this thing. And that’s because of the magic of creating. Making websites is empowering because it allows you to shape the world—in ways big and small, public and personal. It’s especially powerful for people with disabilities, who gain influence on the tech that they rely on. And, hey, you can do web stuff for fun and profit.

The magic of craft

I knew I wanted to be a person who makes websites the first time I ever wrote HTML, hit reload, and saw my changes. I felt the power coursing through my veins as I FTP-ed my site to my shiny new web server under my very own domain name. My mind jumped into the future, imagining my vast web empire.

I never got around to making an empire, but I had something most of my friends didn’t—a personal homepage. I don’t care how ugly or weird they might be, I think everyone should have a homepage they made for themselves. I love going to personal web homes—you know, just to check out what they’ve done with the place.

I love that the web can be this artisanal craft. My disability (spinal muscular atrophy) precludes me from practicing many crafts one might pick up. And yet as a child, I watched as my grandfather would take to his workshop and emerge with all sorts of wooden inventions—mostly toys for me, of course. I came to appreciate the focus and dedication he put into it. Regardless of what terrible things were going on in life, he could escape into this wonderful world of creation, systems, and problem-solving. I wanted that too. Years later, I found my craft. What with its quirky boxes, holy wars, and documentation.

But I loved it. I couldn’t get enough. And finally I realized… this can be my life’s work. I can get paid to make things on the internet.

But first, some not great facts about employment

Employment levels of people with disabilities are low, and those who are employed tend to be in low-paying occupations.

— U.S. Department of Labor

The U.S. Department of Labor (DOL) keeps statistics on employment levels for people with disabilities and, let me tell you, they kind of suck. The DOL goes on to say that only a third of working-age people with disabilities were employed, on average, in the 2010-2012 period. In contrast, over two-thirds of people without disabilities were employed in the same period.

This problem affected me personally as I struggled to get a job after college. Fortunately, the web is a great tool for breaking down barriers. With its vast learning resources, free and open source software, and plethora of ways to connect and share, the web makes possible all sorts of employment opportunities. I'm now a designer/developer at Mad Genius and loving it.

I urge anyone—disabled or not—who finds themselves with limited access to more conventional jobs to consider working on the web. Whether you draw, write, design, or code, there’s something here for you. This big web we’re building for everyone should be built by everyone.

We need that thing that you’re going to build

Last fall, I launched A Fine Start, a web service and browser extension for turning your new tab page into a minimal list of links. Some people saw it when Chris mentioned it in an article and it picked up quite a few new users. But what those users don’t know is that A Fine Start began life as an assistive technology.

Because of my extremely weak muscles, I type using a highly customized on-screen keyboard. It’s doable, but tedious, so I use the mouse method of getting things done wherever possible. I could open a new tab and start typing, but I would rather click. As a result, I made Start, a one-file tool that allowed me to save lists of links and get at them quickly when set as my new tab page—no keyboard necessary.

It was fantastic and I realized I needed Start on every computer I used, but I had no way of getting my bookmarks on another device without sticking them in a text file in Dropbox. So, last year, I wrote a backend service, polished the design, made a Chrome extension, and threw my baby out of the nest to see if it would fly. We’ll see.

The point is, there’s something the world needs, waiting to be built. And you are the only one who can build it. The recipe is mediocre ideas, showing up, and persistence. The web needs your perspective. Load up on your caffeinated beverage of choice, roll up your sleeves, and practice your craft. You’ve got the power. Now use it.

The post Empower Through Web Development appeared first on CSS-Tricks.

Categories: Web Technologies

Free E-book: ​Modernize for Mobile Apps

CSS-Tricks - Thu, 06/28/2018 - 07:54

(This is a sponsored post.)

No sign up required to read the free e-book.

Building modern apps (mobile, PWAs or Single Page Apps) that connect to legacy or enterprise systems is a pain. We put together an e-book that discusses the various options for how to make it all work. Here are some of the chapter contents:

  • The Challenges of Migrating to a Modern Mobile Architecture
  • Strategies for Migrating to Mobile
  • Strategies for Migrating to the Cloud
  • Data & Mobile Applications
  • Future-Proofing Your Modernization Efforts

Check out our Mobile Modernization: Architect Playbook


The post Free E-book: ​Modernize for Mobile Apps appeared first on CSS-Tricks.

Categories: Web Technologies

MySQL Swapping With Fsync

Planet MySQL - Thu, 06/28/2018 - 07:35

One problem that’s a lot less common these days is swapping. Most of the issues that cause swapping with MySQL have been nailed down to several key configuration points, either in the OS or MySQL, or issues like the swap insanity problem documented by Jeremy Cole back in 2010. As such, it’s usually pretty easy to resolve these issues and keep MySQL out of swap space. Recently, however, we ran into a case where MySQL was still swapping even after we had tried all of the usual tricks.

The server with the issue was a VM running with a single CPU socket (multiple cores), so we knew it wasn’t NUMA. Swappiness and MySQL were both configured correctly, and the output of free -m showed 4735M of memory available.

[sylvester@host~]$ free -m
              total        used        free      shared  buff/cache   available
Mem:          16046       10861         242          16        4941        4735
Swap:         15255         821       14434

The point that needs a bit more attention here is the amount of memory being used by the OS cache. As you can see, there is a total of 16046M of physical memory available to the host, with only 10861M in use and the majority of what’s left over being used by the OS cache. This typically isn’t a problem. When requests for more memory come in from threads running in the OS, it should evict pages from the OS cache in order to make memory available to the requesting process. In this case, this did not occur. Instead, we observed that it held onto that cache memory and forced MySQL to turn to swap. But why?

As it turns out, the system in question had recently been converted from MyISAM to InnoDB and hadn’t had any server configuration changed to accommodate for this. As such, it was still configured for innodb_flush_method at the default value, which in 5.7 is still fsync. Ivan Groenwold and I have both written blog posts in regards to flush methods, and it’s been generally accepted that O_DIRECT is a much better way to go in most use cases on Linux, including this one, so we wanted to get the system in question more aligned with best practices before investigating further. As it turns out, we didn’t have to look any further than this, as switching the system over to innodb_flush_method = O_DIRECT resolved the issue. It appears that fsync causes the kernel to want to hang onto its data pages, so when InnoDB attempted to expand its required amount of memory, it was unable to do so without accessing swap, even with swappiness set to 0 to test.
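The fix is a static setting, so it goes in the server configuration and takes effect after a restart (innodb_flush_method is not dynamic); a minimal my.cnf fragment:

```ini
[mysqld]
# Bypass the OS page cache for InnoDB data files, so the kernel's cache
# no longer competes with the InnoDB Buffer Pool for memory.
innodb_flush_method = O_DIRECT
```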

Ever since we did the change to O_DIRECT, the OS cache usage has dropped and there have been no problems with OS cache page eviction.


MySQL swapping can really ruin your day and it’s something you want to avoid if at all possible. We still run into issues with swapping every now and then and want to continue to provide the community with our findings as they become available. So if you have a server that is swapping, and the OS cache isn’t making room for MySQL, and if you’re still using fsync for InnoDB flush, consider switching to O_DIRECT.

Categories: Web Technologies

JFG Posted on the Percona Community Blog - A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps

Planet MySQL - Thu, 06/28/2018 - 06:48
I just posted an article on the Percona Community Blog. You can access it by following this link: A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps. I do not know if I will stop publishing posts on my personal blog or use both; I will see how things go... In the rest of this post, I will share why I published there and how things went in the process. So there is a Percona
Categories: Web Technologies

A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps

Planet MySQL - Thu, 06/28/2018 - 05:28

MariaDB 10.3 is now generally available (10.3.7 was released GA on 2018-05-25). The article What’s New in MariaDB Server 10.3 by the MariaDB Corporation lists three key improvements in 10.3: temporal data processing, Oracle compatibility features, and purpose-built storage engines. Even if I am excited about MyRocks and curious on Spider, I am also very interested in less flashy but still very important changes that make running the database in production easier. This post describes such improvement: no InnoDB Buffer Pool in core dumps.

Hidden in the Compression section of the page Changes & Improvements in MariaDB 10.3 from the Knowledge Base, we can read:

On Linux, shrink the core dumps by omitting the InnoDB buffer pool

This is it, no more details, only a link to MDEV-10814 (Feature request: Optionally exclude large buffers from core dumps). This Jira ticket was opened on 2016-09-15 by a well-known MariaDB Support Engineer: Hartmut Holzgraefe. I know Booking.com had been asking for this feature for a long time; this is even mentioned by Hartmut in a GitHub comment.

The ways this feature eases operations with MariaDB are well documented by Hartmut in the description of the Jira ticket:

  • it needs less available disk space to store core dumps,
  • it reduces the time required to write core dumps (and hence to restart MySQL after a crash),
  • it improves security by omitting substantial amount of user data from core dumps.

In addition to that, I would add that smaller core dumps are easier to share in tickets. I am often asked by support engineers to provide a core dump in relation to a crash, and my reply is “How do you want me to give you a 192 GB file?” (or even bigger files, as I have seen MySQL/MariaDB being used on servers with 384 GB of RAM). This often leads to a “Let me think about this and I will come back to you” answer. Avoiding the InnoDB Buffer Pool in core dumps makes this less of an issue for both DBAs and support providers.

Before continuing the discussion on this improvement, I need to give more details about what a core dump is.

What is a Core Dump and Why is it Useful?

By looking at the Linux manual page for core (and core dump file), we can read:

[A core dump is] a disk file containing an image of the process’s memory at the time of termination. This image can be used in a debugger to inspect the state of the program at the time that it terminated.

The Wikipedia article for core dump also tells us that:

  • the core dump includes key pieces of program state such as processor registers, memory management details, and other processor and operating system flags and information,
  • the name comes from magnetic core memory, the principal form of random access memory from the 1950s to the 1970s, and the name has remained even if magnetic core technology is obsolete.

So a core dump is a file that can be very useful for understanding the context of a crash. The exact details of how to use a core dump have already been discussed in many places and are beyond the scope of this post. The interested reader can learn more by following those links:

Now that we know more about core dumps, we can get back to the discussion of the new feature.

The no InnoDB Buffer Pool in Core Dump Feature from MariaDB 10.3

As already pointed out above, there are very few details in the release notes about how this feature works. By digging in MDEV-10814, following pointers to pull requests (#333, #364, #365, …), and reading the commit message, I was able to gather this:

  • An initial patch was written by Hartmut in 2015.
  • It uses the MADV_DONTDUMP flag of the madvise system call (available in Linux kernel 3.4 and higher).
  • Hartmut’s patch was rebased by Daniel Black, a well-known MariaDB Community Contributor (pull request #333).
  • The first work by Daniel had a configuration parameter to allow including/excluding the InnoDB Buffer Pool in/from core dumps, but after a discussion in pull request #333, it was decided that the RELEASE builds would not put the InnoDB Buffer Pool in core dumps and that DEBUG builds would include it (more about this below).
  • The function buf_madvise_do_dump is added but never invoked by the server; it is there to be called from a debugger to re-enable full core dumping if needed (from this commit message).
  • The InnoDB Redo Log buffer is also excluded from core dumps (from this comment).
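
The madvise mechanism from the second bullet can be sketched in a few lines (this is an illustration of the system call, not MariaDB's actual code; mmap.MADV_DONTDUMP requires Linux and Python 3.8+):

```python
import mmap

# Stand-in for a large buffer such as the InnoDB Buffer Pool.
buf = mmap.mmap(-1, 16 * 4096)  # anonymous private mapping

# Ask the kernel to exclude these pages from any core dump of this process.
if hasattr(mmap, "MADV_DONTDUMP"):  # only defined on Linux 3.4+ builds
    buf.madvise(mmap.MADV_DONTDUMP)

# A debugger could later undo this with MADV_DODUMP on the same range, which
# is essentially what MariaDB's buf_madvise_do_dump helper is there for.
```

Because the flag is per-mapping, the rest of the process memory (stacks, heap metadata, thread state) still lands in the core dump, which is what makes the crash analyzable while keeping the file small.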

I have doubts about the absence of a configuration parameter for controlling the feature. Even if the InnoDB Buffer Pool (as written above, the feature also concerns the InnoDB Redo Log buffer, but I will only mention the InnoDB Buffer Pool in the rest of this post for brevity) is not often required in core dumps, Marko Mäkelä, InnoDB Engineer at MariaDB.com, mentioned sometimes needing it to investigate deadlocks, corruption or race conditions. Moreover, I was recently asked, in a support ticket, to provide a core dump to understand a crash in MariaDB 10.2 (public bug report in MDEV-15608): it looks to me that the InnoDB Buffer Pool would be useful there. Bottom line: having the InnoDB Buffer Pool (and Redo Log buffer) in core dumps might not be regularly useful, but it is sometimes needed.

To include the InnoDB Buffer Pool in core dumps, DBAs can install DEBUG binaries, or they can use a debugger to call the buf_madvise_do_dump function (well thought, Daniel, for compensating for the absence of a configuration parameter, but there are caveats described below). Both solutions are suboptimal in my humble opinion. For #2, there are risks and drawbacks to using a debugger on a live production database (when it works … see below for a war story). For #1, and unless I am mistaken, DEBUG binaries are not available from the MariaDB download site. This means that they will have to be built by engineers of your favorite support provider, or that DBAs will have to compile them manually: this is a lot of work to expect from either party. I also think that the usage of DEBUG binaries in production should be minimized, not encouraged (DEBUG binaries are for developers, not DBAs), so I feel we are heading in the wrong direction. Bottom line: I would not be surprised (and I am not alone) if a parameter were added in a next release to ease investigations of InnoDB bugs.

Out of curiosity, I checked the core dump sizes for some versions of MySQL and MariaDB with dbdeployer (if you have not tried it yet, you should probably spend time learning how to use dbdeployer: it is very useful). Here are my naive first results with default configurations and freshly started instances:

  • 487 MB and 666 MB core dumps with MySQL 5.7.22 and 8.0.11 respectively,
  • 673 MB and 671 MB core dumps with MariaDB 10.2.15 and MariaDB 10.3.7 respectively.

I tried to understand where the inflation is coming from in MySQL 8.0.11, but I tripped on Bug#90561, which prevented my investigation. We will have to wait for 8.0.12 to know more…

Back to the feature, I was surprised to see no shrinking between MariaDB 10.2 and 10.3. To make sure something was not wrong, I tried to get the InnoDB Buffer Pool into the core dump by calling the buf_madvise_do_dump function. I used the slides from the gdb tips and tricks for MySQL DBAs talk by Valerii Kravchuk presented at FOSDEM 2015 (I hope a similar talk will be given soon at Percona Live as my gdb skills need a lot of improvement), but I got the following result:

$ gdb -p $(pidof mysqld) -ex "call buf_madvise_do_dump()" -batch
[...]
No symbol "buf_madvise_do_dump" in current context.

After investigating, I understood that the generic MariaDB Linux packages that I used with dbdeployer are compiled without the feature. A reason could be that there is no way to know that those packages will be used on a Linux 3.4+ kernel (without a recent enough kernel, the MADV_DONTDUMP argument does not exist for the madvise system call). To be able to test the feature, I would either have to build my own binaries or try packages for a specific distribution. I chose to avoid compilation, but this was more tedious than I thought…

By the way, maybe the buf_madvise_do_dump function should always be present in binaries and return a non-zero value when failing, with a detailed message in the error logs. This would have spared me spending time understanding why it did not work in my case. I opened MDEV-16605: Always include buf_madvise_do_dump in binaries for that.

Back to my tests, and to see the feature in action, I started a Ubuntu 16.04.4 LTS instance in AWS (it comes with a 4.4 kernel). But again, I could not call buf_madvise_do_dump. After more investigation, I understood that the Ubuntu and Debian packages are not compiled with symbols, so calling buf_madvise_do_dump cannot easily be done on those (I later learned that there are mariadb-server-10.3-dbgsym packages, but I did not test them). I ended up falling back to CentOS 7.5, which comes with a 3.10 kernel, and it worked! Below are the core dump sizes with and without calling buf_madvise_do_dump:

  • 527 MB core dump on MariaDB 10.3.7 (without calling buf_madvise_do_dump),
  • 674 MB core dump on MariaDB 10.3.7 (with calling buf_madvise_do_dump).

I was surprised by bigger core dumps in MariaDB 10.3 than in MySQL 5.7, so I spent some time looking into that. It would have been much easier with the Memory Instrumentation from Performance Schema, but this is not yet available in MariaDB. There is a Jira ticket opened for that (MDEV-16431); if you are also interested in this feature, I suggest you vote for it.

I guessed that the additional RAM used by MariaDB 10.3 (compared to MySQL 5.7) comes from the caches for the MyISAM and Aria storage engines. Those caches, whose sizes are controlled by the key_buffer_size and aria_pagecache_buffer_size parameters, are 128 MB by default in MariaDB 10.3 (more discussion about these sizes below). I tried shrinking both caches to 8 MB (the default value in MySQL since at least 5.5), but I got another surprise:

> SET GLOBAL key_buffer_size = 8388608;
Query OK, 0 rows affected (0.001 sec)
> SET GLOBAL aria_pagecache_buffer_size = 8388608;
ERROR 1238 (HY000): Variable 'aria_pagecache_buffer_size' is a read only variable

The aria_pagecache_buffer_size parameter is not dynamic! This is annoying as I like tuning parameters to be dynamic, so I opened MDEV-16606: Make aria_pagecache_buffer_size dynamic for that. I tested by only shrinking the MyISAM cache at runtime and by modifying the startup configuration for Aria. The resulting core dump sizes are the following:

  • 527 MB core dump for the default behavior,
  • 400 MB core dump by shrinking the MyISAM cache from 128 MB to 8 MB,
  • 268 MB core dump by also shrinking the Aria cache from 128 MB to 8 MB.

We are now at a core dump size smaller than MySQL 5.7.22: this is the result I was expecting.

I did some more tests with a larger InnoDB Buffer Pool and with a larger InnoDB Redo Log buffer while keeping the MyISAM and Aria cache sizes at 8 MB. Here are the sizes of the compact core dump (default behavior) vs the full core dump (using gdb):

  • 340 MB vs 1.4 GB core dumps when growing the InnoDB Buffer Pool from 128 MB to 1 GB,
  • 357 MB vs 1.7 GB core dumps when also growing the InnoDB Redo Log buffer from 16 MB to 128 MB.

I think the results above show the usefulness of the no InnoDB Buffer Pool in core dump feature.

Potential Improvements of the Shrinking Core Dump Feature

The end goal of excluding the InnoDB Buffer Pool from core dumps is to make generating and working with those files easier. As already mentioned above, the space and time taken to save core dumps are the main obstacles, and sharing them is also an issue (including leaking a lot of user data).

Ideally, I would like to always run MySQL/MariaDB with core dumps enabled on crashes (I see one exception: when using database-level encryption, to avoid leaking data). I even think this should be the default behavior, but this is another discussion that I will not start here. My main motivation is that if/when MySQL crashes, I want all the information needed to understand the crash (and eventually report a bug) without having to change parameters, restart the database, and generate the same crash again. Obviously, this configuration is unsuitable for servers with a lot of RAM and a large InnoDB Buffer Pool. MariaDB 10.3 makes a big step forward by excluding the InnoDB Buffer Pool (and Redo Log buffer) from core dumps, but what else could be done to achieve the goal of always running MySQL with core dumps enabled?
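As a sketch of that "always on" configuration (the file paths are illustrative, and the systemd drop-in applies only to distributions running MariaDB/MySQL under systemd):

```ini
# my.cnf: ask mysqld to write a core file on crash
[mysqld]
core-file

# systemd drop-in, e.g. /etc/systemd/system/mariadb.service.d/core.conf:
# [Service]
# LimitCORE=infinity
```

The OS side (core file size limit, kernel.core_pattern) also has to allow the dump; with a 192 GB buffer pool included, few operators would dare leave this enabled, which is exactly why the shrinking feature matters.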

There is a pull request to exclude the query cache from core dumps (also by Daniel Black, thanks for this work). When MariaDB is run with a large query cache (and I know this is unusual, but if you know of a valid real world use case, please add a comment below), excluding it from core dumps is good. But I am not sure this is a generally needed improvement:

It looks like there is a consensus that the query cache is a very niche feature and should otherwise be disabled, so this work might not be the one that profits most people. Still good to be done, though.

I would like similar work to be done on MyISAM, Aria, TokuDB and MyRocks. As we saw above, there is an opportunity, for default deployments, to remove 256 MB from core dumps by excluding MyISAM and Aria caches. I think this work is particularly important for those two storage engines as they are loaded by default in MariaDB. By the way, considering the relatively low usage of the MyISAM and Aria storage engines, maybe the default value for their caches should be lower: I opened MDEV-16607: Consider smaller defaults for MyISAM and Aria cache sizes for that.

I cannot think of any other large memory buffers that I would like to exclude from core dumps. If you think about one, please add a comment below.

Finally, I would like the shrinking core dump feature to also appear in Oracle MySQL and Percona Server, so I opened Bug#91455: Implement core dump size reduction for that. For the anecdote, I was recently working on a Percona Server crash in production, and we were reluctant to enable core dumps because of the additional minutes of downtime needed to write the file to disk. In this case, the no InnoDB Buffer Pool in core dump would have been very useful!

The post A Nice Feature in MariaDB 10.3: no InnoDB Buffer Pool in Core Dumps appeared first on Percona Community Blog.

Categories: Web Technologies

What To Do When MySQL Runs Out of Memory: Troubleshooting Guide

Planet MySQL - Thu, 06/28/2018 - 05:00

Troubleshooting crashes is never a fun task, especially if MySQL does not report the cause of the crash. For example, when MySQL runs out of memory. Peter Zaitsev wrote a blog post in 2012, Troubleshooting MySQL Memory Usage, with lots of useful tips. With the new versions of MySQL (5.7+) and performance_schema we have the ability to troubleshoot MySQL memory allocation much more easily.

In this blog post I will show you how to use it.

First of all, there are 3 major cases when MySQL will crash due to running out of memory:

  1. MySQL tries to allocate more memory than is available because we specifically told it to do so. For example: you did not set innodb_buffer_pool_size correctly. This is very easy to fix.
  2. There is some other process(es) on the server that allocates RAM. It can be the application (java, python, php), the web server or even the backup (i.e. mysqldump). When the source of the problem is identified, it is straightforward to fix.
  3. Memory leaks in MySQL. This is the worst-case scenario, and we need to troubleshoot it.
Where to start troubleshooting MySQL memory leaks

Here is what we can start with (assuming it is a Linux server):

Part 1: Linux OS and config check
  1. Identify the crash by checking the MySQL error log and the Linux log file (i.e. /var/log/messages or /var/log/syslog). You may see an entry saying that the OOM Killer killed MySQL. Whenever MySQL has been killed by OOM, “dmesg” also shows details about the circumstances surrounding it.
  2. Check the available RAM:
    • free -g
    • cat /proc/meminfo
  3. Check what applications are using RAM: “top” or “htop” (see the resident vs virtual memory)
  4. Check the MySQL configuration: check /etc/my.cnf or in general /etc/my* (including /etc/mysql/* and other files). MySQL may be running with a different my.cnf (run ps ax | grep mysql)
  5. Run vmstat 5 5 to see if the system is reading/writing via virtual memory and if it is swapping
  6. For non-production environments we can use other tools (like Valgrind, gdb, etc) to examine MySQL usage
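The OS-level checks in steps 1 and 2 can be scripted for repeated use. Here is a minimal sketch; the helper names and the 10% threshold are my own, not from the post:

```python
import re

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of values in kB (step 2)."""
    info = {}
    for line in text.splitlines():
        m = re.match(r"(\w+):\s+(\d+)", line)
        if m:
            info[m.group(1)] = int(m.group(2))
    return info

def find_oom_kills(log_text):
    """Return syslog/dmesg lines that look like OOM killer events (step 1)."""
    return [line for line in log_text.splitlines()
            if "Out of memory" in line or "oom-killer" in line]

# Example with a /proc/meminfo snapshot: flag when less than 10% is available.
sample = "MemTotal:       16331520 kB\nMemAvailable:     524288 kB\n"
mem = parse_meminfo(sample)
low_memory = mem["MemAvailable"] < 0.1 * mem["MemTotal"]
```

On a real server you would read /proc/meminfo and the output of dmesg instead of the sample strings.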
Part 2:  Checks inside MySQL

Now we can check things inside MySQL to look for potential MySQL memory leaks.

MySQL allocates memory in tons of places. Especially:

  • Table cache
  • Performance_schema (run: show engine performance_schema status and look at the last line). That may be the cause for systems with a small amount of RAM, i.e. 1G or less
  • InnoDB (run show engine innodb status  and check the buffer pool section, memory allocated for buffer_pool and related caches)
  • Temporary tables in RAM (find all in-memory tables by running: select * from information_schema.tables where engine='MEMORY' )
  • Prepared statements, when they are not deallocated (compare the number of prepare and deallocate commands by running: show global status like 'Com_prepare_sql'; show global status like 'Com_dealloc_sql')

The good news is: starting with MySQL 5.7, we have memory allocation instrumentation in performance_schema. Here is how we can use it:

  1. First, we need to enable collecting memory metrics. Run:
    UPDATE setup_instruments SET ENABLED = 'YES' WHERE NAME LIKE 'memory/%';
  2. Run the report from sys schema:
    select event_name, current_alloc, high_alloc from sys.memory_global_by_current_bytes where current_count > 0;
  3. Usually this will give you the place in code when memory is allocated. It is usually self-explanatory. In some cases we can search for bugs or we might need to check the MySQL source code.

For example, for the bug where memory was over-allocated in triggers (https://bugs.mysql.com/bug.php?id=86821) the select shows:

mysql> select event_name, current_alloc, high_alloc from memory_global_by_current_bytes where current_count > 0;
+--------------------------------------------------------------------------------+---------------+-------------+
| event_name                                                                     | current_alloc | high_alloc  |
+--------------------------------------------------------------------------------+---------------+-------------+
| memory/innodb/buf_buf_pool                                                     | 7.29 GiB      | 7.29 GiB    |
| memory/sql/sp_head::main_mem_root                                              | 3.21 GiB      | 3.62 GiB    |
...

The largest chunk of RAM is usually the buffer pool but ~3G in stored procedures seems to be too high.

According to the MySQL source code documentation, sp_head represents one instance of a stored program which might be of any type (stored procedure, function, trigger, event). In the above case we have a potential memory leak.

In addition, we can get a total report for each higher level event if we want a bird's eye view of what is eating memory:

mysql> select substring_index(
    ->   substring_index(event_name, '/', 2),
    ->   '/',
    ->   -1
    -> ) as event_type,
    -> round(sum(CURRENT_NUMBER_OF_BYTES_USED)/1024/1024, 2) as MB_CURRENTLY_USED
    -> from performance_schema.memory_summary_global_by_event_name
    -> group by event_type
    -> having MB_CURRENTLY_USED>0;
+--------------------+-------------------+
| event_type        | MB_CURRENTLY_USED |
+--------------------+-------------------+
| innodb             | 0.61              |
| memory             | 0.21              |
| performance_schema | 106.26            |
| sql                | 0.79              |
+--------------------+-------------------+
4 rows in set (0.00 sec)

I hope those simple steps can help troubleshoot MySQL crashes due to running out of memory.

Links to more resources that might be of interest

The post What To Do When MySQL Runs Out of Memory: Troubleshooting Guide appeared first on Percona Database Performance Blog.

Categories: Web Technologies

How to Improve Performance of Galera Cluster for MySQL or MariaDB

Planet MySQL - Thu, 06/28/2018 - 02:26

Galera Cluster comes with many notable features that are not available in standard MySQL replication (or Group Replication): automatic node provisioning, true multi-master with conflict resolution and automatic failover. There are also a number of limitations that could potentially impact cluster performance. Luckily, there are workarounds, and if you do it right, you can minimize the impact of these limitations and improve overall performance.

We have previously covered many tips and tricks related to Galera Cluster, including running Galera on AWS Cloud. This blog post distinctly dives into the performance aspects, with examples on how to get the most out of Galera.

Replication Payload

A bit of introduction - Galera replicates writesets during the commit stage, transferring writesets from the originator node to the receiver nodes synchronously through the wsrep replication plugin. This plugin will also certify writesets on the receiver nodes. If the certification process passes, OK is returned to the client on the originator node and the writeset will be applied on the receiver nodes at a later time, asynchronously. Otherwise, the transaction is rolled back on the originator node (returning an error to the client) and the writesets that have been transferred to the receiver nodes are discarded.

A writeset consists of write operations inside a transaction that change the database state. In Galera Cluster, autocommit defaults to 1 (enabled). Literally, any SQL statement executed in Galera Cluster will be enclosed as a transaction, unless you explicitly start with BEGIN, START TRANSACTION or SET autocommit=0. The following diagram illustrates the encapsulation of a single DML statement into a writeset:

For DML (INSERT, UPDATE, DELETE..), the writeset payload consists of the binary log events for a particular transaction, while for DDLs (ALTER, GRANT, CREATE..), the writeset payload is the DDL statement itself. For DMLs, the writeset will have to be certified against conflicts on the receiver node, while for DDLs (depending on wsrep_osu_method, which defaults to TOI), the cluster runs the DDL statement on all nodes in the same total order sequence, blocking other transactions from committing while the DDL is in progress (see also RSU). In simple words, Galera Cluster handles DDL and DML replication differently.

Round Trip Time

Generally, the following factors determine how fast Galera can replicate a writeset from an originator node to all receiver nodes:

  • Round trip time (RTT) to the farthest node in the cluster from the originator node.
  • The size of a writeset to be transferred and certified for conflict on the receiver node.

For example, if we have a three-node Galera Cluster and one of the nodes is located 10 milliseconds away (0.01 second), it's very unlikely that you will be able to write to the same row more than 100 times per second without conflicting. There is a popular quote from Mark Callaghan which describes this behaviour pretty well:

"[In a Galera cluster] a given row can’t be modified more than once per RTT"

To measure RTT value, simply perform ping on the originator node to the farthest node in the cluster:

$ ping # the farthest node

Wait for a couple of seconds (or minutes) and terminate the command. The last line of the ping statistics section is what we are looking for:

--- ping statistics ---
65 packets transmitted, 65 received, 0% packet loss, time 64019ms
rtt min/avg/max/mdev = 0.111/0.431/1.340/0.240 ms

The max value is 1.340 ms (0.00134s), and we should take this value when estimating the minimum transactions per second (tps) for this cluster. The average value is 0.431 ms (0.000431s), which we can use to estimate the average tps, while the min value of 0.111 ms (0.000111s) can be used to estimate the maximum tps. The mdev value indicates how the RTT samples were distributed around the average; a lower value means a more stable RTT.

Hence, transactions per second can be estimated by dividing 1 second by the RTT (in seconds):


  • Minimum tps: 1 / 0.00134 (max RTT) = 746.26 ~ 746 tps
  • Average tps: 1 / 0.000431 (avg RTT) = 2320.19 ~ 2320 tps
  • Maximum tps: 1 / 0.000111 (min RTT) = 9009.01 ~ 9009 tps
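The estimates above are a straight division; here is a small sketch of the arithmetic, using the RTT values from the ping output:

```python
def estimated_tps(rtt_seconds):
    """A given row can be modified roughly once per round trip, so tps ≈ 1 / RTT."""
    return 1.0 / rtt_seconds

# RTT values from the ping statistics above, converted from ms to seconds.
min_tps = round(estimated_tps(1.340e-3))   # worst-case RTT -> minimum tps
avg_tps = round(estimated_tps(0.431e-3))   # average RTT    -> average tps
max_tps = round(estimated_tps(0.111e-3))   # best-case RTT  -> maximum tps
```

Plugging in the measured values reproduces the 746 / 2320 / 9009 tps figures listed above.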

Note that this is just an estimation to anticipate replication performance. There is not much we can do to improve this on the database side once we have everything deployed and running, short of moving or migrating the database servers closer to each other to improve the RTT between nodes, or upgrading the network peripherals or infrastructure. This would require a maintenance window and proper planning.

Chunk Up Big Transactions

Another factor is the transaction size. After the writeset is transferred, there will be a certification process. Certification is a process to determine whether or not the node can apply the writeset. Galera generates MD5 checksum pseudo keys from every full row. The cost of certification depends on the size of the writeset, which translates into a number of unique key lookups into the certification index (a hash table). If you update 500,000 rows in a single transaction, for example:

# a 500,000 rows table
mysql> UPDATE mydb.settings SET success = 1;

The above will generate a single writeset with 500,000 binary log events in it. This huge writeset does not exceed wsrep_max_ws_size (default to 2GB) so it will be transferred over by the Galera replication plugin to all nodes in the cluster, certifying these 500,000 rows on the receiver nodes against any conflicting transactions that are still in the slave queue. Finally, the certification status is returned to the group replication plugin. The bigger the transaction size, the higher the risk that it will conflict with other transactions coming from another master. Conflicting transactions waste server resources, plus cause a huge rollback to the originator node. Note that a rollback operation in MySQL is way slower and less optimized than a commit operation.

The above SQL statement can be re-written into a more Galera-friendly statement with the help of simple loop, like the example below:

(bash)$ for i in {1..500}; do \
  mysql -uuser -ppassword -e "UPDATE mydb.settings SET success = 1 WHERE success != 1 LIMIT 1000"; \
  sleep 2; \
done

The above shell command would update 1000 rows per transaction for 500 times and wait for 2 seconds between executions. You could also use a stored procedure or other means to achieve a similar result. If rewriting the SQL query is not an option, simply instruct the application to execute the big transaction during a maintenance window to reduce the risk of conflicts.

For huge deletes, consider using pt-archiver from the Percona Toolkit - a low-impact, forward-only job to nibble old data out of the table without impacting OLTP queries much.

Parallel Slave Threads

In Galera, applying is a multithreaded process. An applier is a thread running within Galera that applies incoming writesets from another node. This means it is possible for all receivers to execute multiple DML operations coming from the originator (master) node simultaneously. Galera parallel replication is only applied to transactions when it is safe to do so. It improves the probability of the node syncing up with the originator node. However, the replication speed is still limited by RTT and writeset size.

To get the best out of this, we need to know two things:

  • The number of cores the server has.
  • The value of wsrep_cert_deps_distance status.

The status wsrep_cert_deps_distance tells us the potential degree of parallelization. It is the average distance between the highest and lowest seqno values that can possibly be applied in parallel. You can use the wsrep_cert_deps_distance status variable to determine the maximum number of slave threads possible. Take note that this is an average value across time. Hence, in order to get a good value, you have to hit the cluster with write operations through a test workload or benchmark until you see a stable value coming out.

To get the number of cores, you can simply use the following command:

$ grep -c processor /proc/cpuinfo
4

Ideally, 2, 3 or 4 threads of slave applier per CPU core is a good start. Thus, the minimum value for the slave threads should be 4 x the number of CPU cores, and it must not exceed the wsrep_cert_deps_distance value:

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cert_deps_distance';
+--------------------------+----------+
| Variable_name            | Value    |
+--------------------------+----------+
| wsrep_cert_deps_distance | 48.16667 |
+--------------------------+----------+

You can control the number of slave applier threads using the wsrep_slave_threads variable. Even though this is a dynamic variable, only increasing the number has an immediate effect. If you reduce the value dynamically, it will take some time until the applier thread exits after finishing its work. A recommended value is anywhere between 16 and 48:

mysql> SET GLOBAL wsrep_slave_threads = 48;
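The sizing heuristic above (threads per core, capped by wsrep_cert_deps_distance and kept in the 16-48 band) can be sketched as follows; the function name and the exact clamping order are my own interpretation, not from the post:

```python
def suggested_slave_threads(cpu_cores, cert_deps_distance,
                            threads_per_core=4, floor=16, ceiling=48):
    """Start from threads-per-core, cap by wsrep_cert_deps_distance,
    then keep the result inside the recommended 16-48 band."""
    value = min(threads_per_core * cpu_cores, int(cert_deps_distance))
    return max(floor, min(value, ceiling))

# 4 cores and wsrep_cert_deps_distance = 48.16667, as in the outputs above:
threads = suggested_slave_threads(4, 48.16667)
```

With 4 cores and a distance of ~48, this suggests 16 applier threads, the low end of the recommended range.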

Take note that in order for parallel slave threads to work, the following must be set (which is usually pre-configured for Galera Cluster):

innodb_autoinc_lock_mode=2

Galera Cache (gcache)

Galera uses a preallocated file with a specific size called gcache, where a Galera node keeps a copy of writesets in circular buffer style. By default, its size is 128MB, which is rather small. Incremental State Transfer (IST) is a method to prepare a joiner by sending only the missing writesets available in the donor’s gcache. IST is faster than state snapshot transfer (SST), it is non-blocking and has no significant performance impact on the donor. It should be the preferred option whenever possible.

IST can only be achieved if all changes missed by the joiner are still in the gcache file of the donor. The recommended setting for this is to be as big as the whole MySQL dataset. If disk space is limited or costly, determining the right size of the gcache size is crucial, as it can influence the data synchronization performance between Galera nodes.

The below statement will give us an idea of the amount of data replicated by Galera. Run the following statement on one of the Galera nodes during peak hours (tested on MariaDB >10.0 and PXC >5.6, galera >3.x):

mysql> SET @start := (SELECT SUM(VARIABLE_VALUE/1024/1024) FROM information_schema.global_status WHERE VARIABLE_NAME LIKE 'WSREP%bytes');
do sleep(60);
SET @end := (SELECT SUM(VARIABLE_VALUE/1024/1024) FROM information_schema.global_status WHERE VARIABLE_NAME LIKE 'WSREP%bytes');
SET @gcache := (SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(@@GLOBAL.wsrep_provider_options,'gcache.size = ',-1), 'M', 1));
SELECT ROUND((@end - @start),2) AS `MB/min`, ROUND((@end - @start),2) * 60 as `MB/hour`, @gcache as `gcache Size(MB)`, ROUND(@gcache/round((@end - @start),2),2) as `Time to full(minutes)`;
+--------+---------+-----------------+-----------------------+
| MB/min | MB/hour | gcache Size(MB) | Time to full(minutes) |
+--------+---------+-----------------+-----------------------+
|   7.95 |  477.00 | 128             |                 16.10 |
+--------+---------+-----------------+-----------------------+

We can estimate that the Galera node can have approximately 16 minutes of downtime without requiring SST to join (unless Galera cannot determine the joiner state). If this is too short a time and you have enough disk space on your nodes, you can change wsrep_provider_options="gcache.size=<value>" to a more appropriate value. In this example workload, setting gcache.size=1G allows us to have 2 hours of node downtime with a high probability of IST when the node rejoins.
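The sizing arithmetic behind that estimate can be captured in two tiny helpers; a sketch under the assumption of a steady write rate (function names are mine):

```python
def gcache_runway_minutes(gcache_mb, write_rate_mb_per_min):
    """How long a node can stay down and still rejoin via IST."""
    return gcache_mb / write_rate_mb_per_min

def gcache_size_for_downtime(target_minutes, write_rate_mb_per_min):
    """gcache.size (in MB) needed to keep IST possible for the target downtime."""
    return target_minutes * write_rate_mb_per_min

rate = 7.95  # MB/min, as measured by the query above
runway = round(gcache_runway_minutes(128, rate), 2)   # default 128 MB gcache
needed = round(gcache_size_for_downtime(120, rate))   # 2 hours of downtime
```

With the measured 7.95 MB/min, the default 128 MB gcache gives ~16 minutes of runway, and a 2-hour target needs roughly 954 MB, which is why gcache.size=1G works for this example workload.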

It's also recommended to use gcache.recover=yes in wsrep_provider_options (Galera >3.19), where Galera will attempt to recover the gcache file to a usable state on startup rather than delete it, thus preserving the ability to have IST and avoiding SST as much as possible. Codership and Percona have covered this in detail in their blogs. IST is always the best method to sync up after a node rejoins the cluster. It is 50% faster than xtrabackup or mariabackup and 5x faster than mysqldump.

Asynchronous Slave

Galera nodes are tightly coupled, so replication performance is only as fast as the slowest node. Galera uses a flow control mechanism to control the replication flow among members and eliminate slave lag. Replication can be all fast or all slow on every node, and is adjusted automatically by Galera. If you want to learn about flow control, read this blog post by Jay Janssen from Percona.

In most cases, heavy operations like long running analytics (read-intensive) and backups (read-intensive, locking) are often inevitable, which could potentially degrade the cluster performance. The best way to execute this type of queries is by sending them to a loosely-coupled replica server, for instance, an asynchronous slave.

An asynchronous slave replicates from a Galera node using the standard MySQL asynchronous replication protocol. There is no limit on the number of slaves that can be connected to one Galera node, and chaining it out with an intermediate master is also possible. MySQL operations that execute on this server won't impact the cluster performance, apart from the initial syncing phase where a full backup must be taken on the Galera node to stage the slave before establishing the replication link (although ClusterControl allows you to build the async slave from an existing backup first, before connecting it to the cluster).

GTID (Global Transaction Identifier) provides a better transaction mapping across nodes, and is supported in MySQL 5.6 and MariaDB 10.0. With GTID, the failover operation on a slave to another master (another Galera node) is simplified, without the need to figure out the exact log file and position. Galera also comes with its own GTID implementation, but these two are independent of each other.

Scaling out an asynchronous slave is one-click away if you are using ClusterControl -> Add Replication Slave feature:

Take note that binary logs must be enabled on the master (the chosen Galera node) before we can proceed with this setup. We have also covered the manual way in this previous post.

The following screenshot from ClusterControl shows the cluster topology; it illustrates our Galera Cluster architecture with an asynchronous slave:

ClusterControl automatically discovers the topology and generates a super cool diagram like the one above. You can also perform administration tasks directly from this page by clicking on the top-right gear icon of each box.

SQL-aware Reverse Proxy

ProxySQL and MariaDB MaxScale are intelligent reverse-proxies that understand the MySQL protocol and are capable of acting as a gateway, router, load balancer and firewall in front of your Galera nodes. With the help of a Virtual IP Address provider like LVS or Keepalived, combined with Galera's multi-master replication technology, we can have a highly available database service, eliminating all possible single-points-of-failure (SPOF) from the application point-of-view. This will surely improve the availability and reliability of the architecture as a whole.

Another advantage of this approach is the ability to monitor, rewrite or re-route incoming SQL queries based on a set of rules before they hit the actual database server, minimizing changes on the application or client side and routing queries to a more suitable node for optimal performance. Queries that are risky for Galera, like LOCK TABLES and FLUSH TABLES WITH READ LOCK, can be blocked before they cause havoc to the system, while impacting "hotspot" queries (a row that different queries want to access at the same time) can be rewritten or redirected to a single Galera node to reduce the risk of transaction conflicts. For heavy read-only queries like OLAP or backups, you can route them over to an asynchronous slave if you have one.

Reverse proxy also monitors the database state, queries and variables to understand the topology changes and produce an accurate routing decision to the backend servers. Indirectly, it centralizes the nodes monitoring and cluster overview without the need to check on each and every single Galera node regularly. The following screenshot shows the ProxySQL monitoring dashboard in ClusterControl:

There are also many other benefits that a load balancer can bring to improve Galera Cluster significantly, as covered in details in this blog post, Become a ClusterControl DBA: Making your DB components HA via Load Balancers.

Final Thoughts

With a good understanding of how Galera Cluster works internally, we can work around some of its limitations and improve the database service. Happy clustering!

Tags:  MySQL MariaDB galera pxc performance
Categories: Web Technologies

Scale-with-Maxscale-part5 (Multi-Master)

Planet MySQL - Wed, 06/27/2018 - 19:30

This is the 5th blog in our series on Maxscale. Below is the list of our previous blogs, which provide deep insight into Maxscale and its use cases for different architectures.

Here we are going to discuss using Maxscale for a Multi-Master environment (M-M), in both Active-Passive and Active-Active modes.

Test Environment:

Below are the details of the environment used for testing:

OS                            : Debian 8 (Jessie)
MySQL Version     : 5.7.21-20-log Percona Server (GPL)
Maxscale version : maxscale-1.4.5-1.debian.jessie.x86_64 ( GPL )
Master 1                 :
Master 2                 :


Master-Master Replication


Setting up master-master replication is beyond the scope of this exercise; I will jump directly to the configuration of Maxscale with the Multi-Master setup.

Monitor Module for M-M

Maxscale comes with a special monitor module named “MMMON”. It monitors the health of the servers and sets the status flags based on which the router module (Read-Write splitter) routes connections.


Below is the basic configuration for Multi-Master using Maxscale.

[Splitter Service]
type=service
router=readwritesplit
servers=master1,master2
user=maxscale
passwd=B87D86D1025669B26FA53505F848EC9B

[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=3306
socket=/tmp/ClusterMaster

[Replication Monitor]
type=monitor
module=mmmon
servers=master1,master2
user=maxscale
passwd=B87D86D1025669B26FA53505F848EC9B
monitor_interval=200
detect_stale_master=1
detect_replication_lag=true

[master1]
type=server
address=
port=3306
protocol=MySQLBackend

[master2]
type=server
address=
port=3306
protocol=MySQLBackend

Active-Active Setup:

Active-Active setup is where reads and writes are completely balanced between the servers. With this setup, I would strongly recommend setting auto_increment_increment & auto_increment_offset to avoid conflicting writes.
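To illustrate why interleaved auto-increment settings avoid conflicts, here is a quick sketch; the offsets and counts are illustrative, not taken from this setup:

```python
def auto_increment_ids(offset, increment, count):
    """IDs a node would generate with the given
    auto_increment_offset / auto_increment_increment settings."""
    return [offset + increment * i for i in range(count)]

# Two masters with auto_increment_increment=2:
# node 1 uses offset 1 (odd IDs), node 2 uses offset 2 (even IDs).
node1 = auto_increment_ids(1, 2, 5)
node2 = auto_increment_ids(2, 2, 5)
no_conflicts = not set(node1) & set(node2)
```

Because each node draws from a disjoint arithmetic sequence, concurrent inserts on both masters can never collide on an auto-generated primary key.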

Active-Active Setup with Maxscale

Below is how it looks from Maxscale

MaxScale> list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
master1            |                 |  3306 |           0 | Master, Running
master2            |                 |  3306 |           0 | Master, Running
-------------------+-----------------+-------+-------------+--------------------

Active Passive Setup:

Active-Passive setup is where writes happen on one of the nodes and reads are distributed among the servers. To achieve this, just enable read_only=1 on one of the nodes. Maxscale identifies this flag and starts routing only read connections to that node.

The next question which arises immediately is: what happens when there is a writer (Active) node failure?

The answer is pretty simple: just disable read_only on the passive node. You can do this manually by logging in to the node, or automate it with Maxscale by integrating it with the fail-over script, which will be called during unplanned or planned maintenance.


Active-Passive setup With Maxscale


I have just enabled read_only on Master2; you can see the status changed to ‘Slave’, as below.

MaxScale> list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
master1            |                 |  3306 |           0 | Master, Running
master2            |                 |  3306 |           0 | Slave, Running
-------------------+-----------------+-------+-------------+--------------------
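The fail-over decision described above can be automated by first parsing the list servers output; a hedged sketch that assumes the 1.4.x table format shown in this post (the function names are mine):

```python
def parse_list_servers(output):
    """Parse a 'MaxScale> list servers' table into {server: status}."""
    servers = {}
    for line in output.splitlines():
        parts = [p.strip() for p in line.split("|")]
        # Data rows have 5 pipe-separated columns; skip the header row.
        if len(parts) == 5 and parts[0] not in ("", "Server"):
            servers[parts[0]] = parts[4]
    return servers

def promotion_candidate(servers):
    """Pick a running 'Slave' node on which read_only would be disabled."""
    for name, status in servers.items():
        if "Slave" in status and "Running" in status:
            return name
    return None

table = """master1            |     | 3306 | 0 | Master, Running
master2            |     | 3306 | 0 | Slave, Running"""
candidate = promotion_candidate(parse_list_servers(table))
```

A fail-over script would then run SET GLOBAL read_only=0 against the chosen candidate.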

By setting up the above Multi-Master configuration, we have ensured that we have one more DB node for fail-over. This leaves Maxscale as a single point of failure for the application. We can build an HA setup for Maxscale using Keepalived, with an IP switching between the nodes; or, if you are using AWS, you can go with an ELB (Network) on TCP ports, balancing connections between the Maxscale nodes.

Image Courtesy : Photo by Vincent van Zalinge on Unsplash


Categories: Web Technologies

Percona Monitoring and Management 1.12.0 Is Now Available

Planet MySQL - Wed, 06/27/2018 - 12:36

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

In release 1.12, we invested our efforts in the following areas:

  • Visual Explain in Query Analytics – Gain insight into MySQL’s query optimizer for your queries
  • New Dashboard – InnoDB Compression Metrics – Evaluate effectiveness of InnoDB Compression
  • New Dashboard – MySQL Command/Handler Compare – Contrast MySQL instances side by side
  • Updated Grafana to 5.1 – Fixed scrolling issues

We addressed 10 new features and improvements, and fixed 13 bugs.

Visual Explain in Query Analytics

We’re working on substantial changes to Query Analytics and the first part to roll out is something that users of Percona Toolkit may recognize – we’ve introduced a new element called Visual Explain based on pt-visual-explain.  This functionality transforms MySQL EXPLAIN output into a left-deep tree representation of a query plan, in order to mimic how the plan is represented inside MySQL.  This is of primary benefit when investigating tables that are joined in some logical way so that you can understand in what order the loops are executed by the MySQL query optimizer. In this example we are demonstrating the output of a single table lookup vs. a two-table join:

Single Table Lookup:

SELECT DISTINCT c
FROM sbtest13
WHERE ... AND 49907;

Two Tables via INNER JOIN:

SELECT sbtest3.c
FROM sbtest1
INNER JOIN sbtest3
ON sbtest1.id = sbtest3.id
WHERE sbtest3.c = 'long-string';

InnoDB Compression Metrics Dashboard

A great feature of MySQL’s InnoDB storage engine includes compression of data that is transparently handled by the database, saving you space on disk, while reducing the amount of I/O to disk as fewer disk blocks are required to store the same amount of data, thus allowing you to reduce your storage costs.  We’ve deployed a new dashboard that helps you understand the most important characteristics of InnoDB’s Compression.  Here’s a sample of visualizing Compression and Decompression attempts, alongside the overall Compression Success Ratio graph:


MySQL Command/Handler Compare Dashboard

We have introduced a new dashboard that lets you do side-by-side comparison of Command (Com_*) and Handler statistics.  A common use case would be to compare servers that share a similar workload, for example across MySQL instances in a pool of replicated slaves.  In this example I am comparing two servers under identical sysbench load, but exhibiting slightly different performance characteristics:

The number of servers you can select for comparison is unbounded, but depending on the screen resolution you might want to limit it to 3 at a time on a 1080p screen.

New Features & Improvements
  • PMM-2519: Display Visual Explain in Query Analytics
  • PMM-2019: Add new Dashboard InnoDB Compression metrics
  • PMM-2154: Add new Dashboard Compare Commands and Handler statistics
  • PMM-2530: Add timeout flags to mongodb_exporter (thank you unguiculus for your contribution!)
  • PMM-2569: Update the MySQL Golang driver for MySQL 8 compatibility
  • PMM-2561: Update to Grafana 5.1.3
  • PMM-2465: Improve pmm-admin debug output
  • PMM-2520: Explain Missing Charts from MySQL Dashboards
  • PMM-2119: Improve Query Analytics messaging when Host = All is passed
  • PMM-1956: Implement connection checking in mongodb_exporter
Bug Fixes
  • PMM-1704: Unable to connect to AtlasDB MongoDB
  • PMM-1950: pmm-admin (mongodb:metrics) doesn’t work well with SSL secured mongodb server
  • PMM-2134: rds_exporter exports memory in Kb with node_exporter labels which are in bytes
  • PMM-2157: Cannot connect to MongoDB using URI style
  • PMM-2175: Grafana singlestat doesn’t use consistent colour when unit is of type Time
  • PMM-2474: Data resolution on Dashboards became 15sec interval instead of 1sec
  • PMM-2581: Improve Travis CI tests by addressing pmm-admin check-network Time Drift
  • PMM-2582: Unable to scroll on “_PMM Add Instance” page when many RDS instances exist in an AWS account
  • PMM-2596: Set fixed height for panel content in PMM Add Instances
  • PMM-2600: InnoDB Checkpoint Age does not show data for MySQL
  • PMM-2620: Fix balancerIsEnabled & balancerChunksBalanced values
  • PMM-2634: pmm-admin cannot create user for MySQL 8
  • PMM-2635: Improve error message while adding metrics beyond “exit status 1”
Known Issues
  • PMM-2639: mysql:metrics does not work on Ubuntu 18.04 – We will address this in a subsequent release
How to get PMM Server

PMM is available for installation using three methods:

The post Percona Monitoring and Management 1.12.0 Is Now Available appeared first on Percona Database Performance Blog.

Categories: Web Technologies