
Capturing Per-Process Metrics with Percona Monitoring and Management (PMM)

Planet MySQL - Mon, 05/21/2018 - 08:36

In this blog post, I will show you how to use Percona Monitoring and Management (PMM) to capture per-process metrics in five minutes or less.

While Percona Monitoring and Management (PMM) captures a lot of host metrics, it currently falls short of providing per-process information, such as which particular process uses a lot of CPU, causes disk I/O, or consumes a lot of memory.

In our database performance optimization and troubleshooting practice, this information has proven quite useful in many cases: batch jobs taking far more resources than developers would estimate, and misconfigured Percona XtraBackup or Percona Toolkit runs, are among the most common offenders.

Per-process metrics information can also be very helpful when troubleshooting database software memory leaks or memory fragmentation.

You don’t know at the outset which processes are causing you problems, so it is important to capture information about all of the processes (or specifically exclude the processes you do not want to capture information about) rather than about a select few.

While capturing such helpful information is not available in PMM out of the box (yet), you can easily achieve it using PMM’s External Exporter support and the excellent Prometheus Process Exporter by Nick Cabatoff.

These instructions are for Debian/Ubuntu Linux distributions, but they should work with RedHat/CentOS-based versions as well – just use the RPM package instead of the DEB.

1: Download the process exporter packages from GitHub:

wget https://github.com/ncabatoff/process-exporter/releases/download/v0.2.11/process-exporter_0.2.11_linux_amd64.deb

2: Install the package

(Note: the file will be different depending on the platform and current release.)

dpkg -i process-exporter_0.2.11_linux_amd64.deb
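Optionally, you can tell process-exporter exactly which processes to capture via a small YAML config. Here is a minimal sketch that captures every process, grouped by command name (the /tmp path is purely illustrative; point the service at wherever your package expects its config file):

```shell
# Write a minimal process-exporter config that matches every process
# and groups metrics by command name ({{.Comm}}).
# The /tmp path below is illustrative only.
cat > /tmp/process-exporter.yml <<'EOF'
process_names:
  - name: "{{.Comm}}"
    cmdline:
    - '.+'
EOF
cat /tmp/process-exporter.yml
```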

3: Run the Exporter

service process-exporter start

4: Register Exporter with Percona Monitoring and Management

Assuming the current node is already monitored by PMM you just need one command:

pmm-admin add external:service processes-my-host --service-port=9256 --interval=10s

This captures process metrics every 10 seconds (adjust interval if desired).

Important note: due to some internal limitations, you need to use a different service name (“processes-my-host”) for each host. I suggest simply appending the hostname to the descriptive name “processes”.
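Since every host needs a unique service name, one way to script this is to derive the name from the short hostname; the pmm-admin call is commented out below because it only works on a node already monitored by PMM:

```shell
# Derive a per-host service name by appending the short hostname
# to the descriptive prefix "processes".
SERVICE_NAME="processes-$(hostname -s)"
echo "$SERVICE_NAME"
# On a PMM-monitored node you would then register it with:
# pmm-admin add external:service "$SERVICE_NAME" --service-port=9256 --interval=10s
```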

5: Get Matching Dashboard from Grafana.com

While you can browse the data captured through the Advanced Data Exploration Dashboard, that is not much fun. I created a PMM-style dashboard, based on Nick’s original dashboard, and published it on Grafana.com.

To add this dashboard to your PMM Server, click Dashboard Search on your PMM Server.

From there, click on “Import Dashboard”. Use 6033 as the Grafana.com Dashboard ID.

6: You’re done!

You should have data flowing, and you should be able to see the data on the graphs.

In this example, I have pt-query-digest (shown as Perl) parsing the log file and pushing the MySQL server out of memory.

Note, as you likely have many processes on the system, the graphs are designed to show only the top processes. All running processes, however, are available in the drop-down if you want to access the history for a specific process.

Let us know what you think. We are looking at how to integrate this functionality directly into Percona Monitoring and Management!

The post Capturing Per-Process Metrics with Percona Monitoring and Management (PMM) appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Learning Gutenberg: What is Gutenberg, Anyway?

CSS-Tricks - Mon, 05/21/2018 - 06:47

Gutenberg is the new React-driven SPA editing experience in WordPress. Oh wait, a string of buzzwords doesn't count for a viable explanation of software? We’re going to unpack that string of buzzwords as we explain what Gutenberg is.

Article Series:
  1. Series Introduction
  2. What is Gutenberg, Anyway? (This Post)
  3. A Primer with create-guten-block (Coming Soon!)
  4. Modern JavaScript Syntax (Coming Soon!)
  5. React 101 (Coming Soon!)
  6. Setting up a Custom webpack (Coming Soon!)
  7. A Custom "Card" Block (Coming Soon!)

First, a before-and-after screenshot might drive home the idea for you:

On the left, the editor as it exists pre-Gutenberg. On the right, with Gutenberg enabled via plugin.

Buzzword #1: Editing Experience

Gutenberg is a redesign of the WordPress WYSIWYG editor.

The editor in WordPress has traditionally been that single WYSIWYG field (the blob of content) that saves the entire content of the post to the post_content column in the wp_posts database table. Gutenberg doesn’t change this: it saves all the post content to the post_content column, to be retrieved by calling the_content() in our PHP templates.

So yeah, Gutenberg is just a redesign of the editor... but it's a travesty to call Gutenberg just a redesign of the editor! It is so much more than that!

Gutenberg introduces an entirely new way of thinking about content in WordPress. It not only gives developers a native way to handle content in chunks (we’ll actually be referring to them as blocks, which is their official name), it enables end-users to create rich, dynamic page layouts with WordPress out of the box. Without Gutenberg, this would likely require a horde of third-party plugins (read: shortcode vomit and server strain), as is currently the case with what will be known as the WordPress “Classic” editor.

For the purposes of this article and our learning, know this: Gutenberg does not change how WordPress functions at its very core. It is 99% a change to the editor's user interface. Hitting "Publish" still saves content to post_content—there's just a lot more opportunity to craft the user experience of writing and editing content.

Buzzword #2: SPA

Translation: Gutenberg is a Single Page Application within WordPress.

In a Single Page Application (SPA), the application loads on a single page load, and subsequent interactions are driven 100% by JavaScript and Ajax requests.

Note the "within WordPress' part of the above statement—Gutenberg does not (currently) impact any part of WordPress beyond where one would normally see the editor. In essence, Gutenberg replaces the WYSIWYG, TinyMCE editor with an SPA.

This means that writing content in Gutenberg is fast and satisfying, and I wish I could say my (limited) experience with developing blocks has been the same. For our journey, this SPA business means slow page loads during development (we are loading a lot of JavaScript), obscure and frustrating console errors, and npm modules for days.

Of course, that’s a glass-half-empty look at the situation. Glass half-full? It feels really good when something works. The wins are few and far between at first, but stick with it!

Buzzword #3: React-driven

Translation: Yes, Gutenberg is built in React. That’s probably not going to change anytime soon, if ever.

There was some #hotdrama back in September–October of 2017 about choosing a framework for WordPress after Facebook added a patent clause to React’s license. But after major backlash from open source communities, including WordPress (which, ahem, powers ~30% of websites), Facebook relicensed React under the MIT license.

As of January this year (2018) there were still whispers that a framework decision for core is pending, but until we get official word from the powers that be, let’s look at the facts.

  • Gutenberg is in active development.
  • Themes and plugins are in active development preparing for Gutenberg.
  • All of that is happening in React.

I’m putting my money on React, and if that changes, great! I’ll have React on my resume and get on to learning whatever its replacement may be.

Important Resources
  1. The GitHub repo — This is mainly for searching issues when they come up during development to see if they have already been filed.
  2. The Gutenberg Handbook — This is where the official Gutenberg documentation lives.
Beware!

While the Gutenberg project is far enough along that there will not be any major infrastructural changes, we must remember that Gutenberg is brand new software in active development and anything could happen. Why not be on the front lines? This is exciting stuff.

The WordPress community has already begun to take up the task of creating tools, tutorials, case studies, courses, and community-contributed resources.

That being said, you may search a question that hasn’t been asked before. At some point, you will probably find yourself reading the Gutenberg source code for documentation, and you may find the existing documentation to be out of date. You may test out an example from a two-week-old tutorial only to find it uses a deprecated API method.

If you do come across something you feel is not as it should be, research and report the issue on GitHub, ask about it in the #core-editor channel of the WordPress Slack, or alert the author of the aforementioned out-of-date blog post. And if it’s documentation that’s a problem, you can always fix it yourself!

Setting Up

Now, I’d like for you to set up a development environment so that we can continue this discussion with more context. Do these things:

  1. Set up a local WordPress install, however you choose to do that—this can be an existing project, or a fresh install. Just something that can be very broken for demo purposes.
  2. Activate a relatively simple theme. Twenty Seventeen will work just fine. The only thing your theme needs to have is a call to the_content(); in its post and page templates, and most out-of-the-box themes do.
  3. Install Gutenberg. You can find it in the plugin repository. This version is quite far along and updated regularly, so we don’t need to worry about working from a development build.
  4. Activate the Gutenberg plugin and create a new post.

If you haven’t run a WordPress site locally before, check out this guide. We strongly recommend that you download something like MAMP, XAMPP, or similar if this is your first time.

Let's Explore

You should have something a lot like this:

A mostly blank post in Gutenberg. Typing / reveals the block selector.

As in the above image, typing a / reveals a list of blocks. Erase that / and on the right you should see a + that, when clicked, will reveal an additional listing of blocks, organized by category.

A different view of blocks, this time organized by category.

Notice the panel on the right with tabs for both "Document" and "Block." The "Block" tab is known as the block inspector, and gives us a nice area for block-specific options, like so for the paragraph block:

The block inspector reveals customization options for a specific block.

I recommend playing around with your post for a few more minutes and testing out the different types of blocks. Keep an eye on that inspector sidebar to see what customization options each block offers. All of the blocks you see now are included in a library of core blocks and can serve as a reference point for creating custom blocks (which we will do in the next part of this series!).

After you’ve created a post populated with five-ish blocks of varying kinds, save and publish the post, then take a look at it from the front end. Nice!

Now, let’s do something crazy. Deactivate Gutenberg from the Plugins screen. Go back to the edit screen for that post, and you should see something like this, in the "Classic" editor:

Blocks, as they are saved in the database.

What are all these comments? Those are blocks! They map one to one with the blocks you chose when creating the post.

Under the hood, blocks are chunks of HTML identifiable by their surrounding HTML comments. In the example above, you’ll notice a few of the block names, i.e. wp:block-name are accompanied by JSON, such as the second paragraph block. When I specified some customization options in the block inspector, these were stored along with the block identifier so that now, when I reactivate Gutenberg, my settings won’t be lost; they are saved right alongside the block definition itself.
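To make that concrete, here is a sketch of what a customized paragraph block can look like in post_content (the exact attribute names and classes can vary between Gutenberg versions, so treat this as illustrative rather than canonical):

```html
<!-- wp:paragraph {"dropCap":true} -->
<p class="has-drop-cap">A paragraph block, with its inspector settings serialized as JSON in the opening comment.</p>
<!-- /wp:paragraph -->
```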

Before you reactivate the plugin, however, view the post again from the front end. Did you lose some styling? I did.

Go ahead and reactivate the Gutenberg plugin, and double check the editor to make sure your blocks are still intact. Because of those HTML comments, they should be!

Now, let’s poke around and see where these styles are coming from. When I inspect my paragraph block from the front end, I see a few inline styles—a result of options selected in the block inspector—as well as some structural styles from what appears to be a style.css file enqueued by the Gutenberg plugin (after 5.0 is released, remember, this will be just from WordPress, not a plugin).

Inspecting the styles from the front end, we see there are styles coming from Gutenberg itself

Now, try inspecting that paragraph block from the editor view. You should see the same set of inline styles and the same p.has-paragraph selector applied to the block from the editor view. Interesting!

This introduces the fact that blocks can have shared styles between the theme’s front-end and the editor. Pre-Gutenberg, we had theme, or front-end, styles, and we could separately enqueue an editor-style.css to add CSS to the WordPress admin area. Now, however, our blocks share styles between the front-end and the editor view pretty much by default.

Glass half-full perspective: this allows us to craft a content view for publishers that is much closer to what they will see on the front end. They will no longer need to hit the Preview button a dozen or more times to view small content changes before publishing.

Glass half-empty perspective: This could create more work for us—as designers and developers, we now have an editor experience to create in addition to the front-facing website! And we have to figure out which styles are shared between the two. However, I would argue that with a well thought-out design and front end strategy, this won’t be as much of an issue as you’d think.

Umm... Where's the JavaScript?

We’re going to need JavaScript to create a block in Gutenberg. So let’s make a block already! That's the focus of the next post in this series.

Article Series:
  1. Series Introduction
  2. What is Gutenberg, Anyway? (This Post)
  3. A Primer with create-guten-block (Coming Soon!)
  4. Modern JavaScript Syntax (Coming Soon!)
  5. React 101 (Coming Soon!)
  6. Setting up a Custom webpack (Coming Soon!)
  7. A Custom "Card" Block (Coming Soon!)

The post Learning Gutenberg: What is Gutenberg, Anyway? appeared first on CSS-Tricks.


Learning Gutenberg: Series Introduction

CSS-Tricks - Mon, 05/21/2018 - 06:00

Hey CSS-Tricksters! 👋 We have a special long-form series we’re kicking off here totally dedicated to Gutenberg, a major change to the WordPress editor. I’ve invited a dynamic duo of authors to bring you this series, which will bring you up to speed on what Gutenberg is, what it can do for your site, and how you can actually develop for it.

Who this is for

This series is more for developers who are curious about this new world and wanna get started working with it. This series isn’t necessarily for site owners who want to know how it’s going to affect their site or who are worried about it for any reason.

It’s clear there is a lot of possibility with Gutenberg. Yes, it’s aiming to be a better editing experience, but it’s also likely to change how people think about what’s possible with WordPress. With the custom “blocks” that content will be built from (don’t worry, we’ll get into all this), it’s almost as if the WordPress editor becomes a drag-and-drop site builder.

I’d recommend listening to this episode of ShopTalk to hear more about all this potential.

How this series came to be

Funny story really. It just so happened that two authors I work with regularly were both independently creating long form series about this exact topic at the exact same time. Shame on me, because I didn’t put 2 and 2 together until both of them had made significant progress. It would have been too weird to release both series independently, so instead, we all got together and worked out a way to combine and rework the series and make a single series that’s the best of both.

Here we are with that!

Prerequisites

You’ll probably do best with this series if you have these skills:

  • WordPress development concepts such as actions, filters and template tags
  • Foundational (not deep) JavaScript knowledge e.g. understanding the difference between an object and an array and what callback functions are.
  • Using the command line to navigate directories and run build tasks

If you've written your own npm modules, feel totally comfortable writing commit messages in Vim, or you're an old-hat at React, then this series might move a little slowly for you.

Meet your teachers

From here on out, I’m passing this series over to the authors and your Gutenberg teachers: Lara Schenck and Andy Bell.

A bit from Lara

Hi! Back in 2015, Matt Mullenweg, the co-creator of WordPress, instructed a room full of over a thousand WordPress developers, business owners, end users, designers, and who knows how many more live-stream and after-the-fact viewers to "learn JavaScript deeply."

I was in that room, and I remember thinking:

I can do that! I'm not sure why, jQuery has suited me just fine and I really like CSS the best, but okay, Matt, I'll put that on my list of things to do...
—Me, in 2015

The thing is, it's entirely possible to build pretty awesome and robust WordPress themes without touching a byte of JavaScript. Needless to say, I didn't get around to learning JavaScript deeply for a good year and a half, so now I'm playing catch up... and I'm sure I'm not the only WordPress developer in that boat.

Back to the present. Not only has JavaScript continued to take over the web in a very big way, it has—in all its bundled, destructured, spread-operator-ed glory—made its way into the inner workings of WordPress by way of the editor revamp named Gutenberg.

If you're reading this, I'm assuming you've at least heard of Gutenberg, but if not, here's a rundown by Chris from a few months back that should help orient you. I'm also assuming you feel some mixture of ignorance, intrigue, excitement, and panic when thinking about Gutenberg and what it means for WordPress, ergo ~30% of websites. I'm here to tell you to scrap the panic and keep the excitement and intrigue because this is very exciting for both us developers and WordPress itself.

I predict that, in the coming years, the adoption of Gutenberg will cause WordPress to outgrow its unfortunate reputation as a legacy, insecure, developer-unfriendly blogging engine. Case in point:

I think I’m gonna make my personal website a WordPress site again. Coming full circle after 6 years 😊

— Rach Smith 🌈 (@rachsmithtweets) February 20, 2018

I’ll jump in here too and say that Gutenberg has pulled me back into WordPress because the stack is more friendly to me, a front-end developer. Before, I would fumble through customizations, whereas now I’ll very happily roll out custom blocks. To me, Gutenberg is more than an editor—it’s a movement and that makes folks like me really excited about the future of WordPress.

Little ol' WordPress is catching up, and it's bringing with it all the JavaScript goodness you could possibly imagine: Gutenberg is a React-driven SPA editing experience poised to be released in WordPress Core later this year, in version 5.0. It will flip the WordPress ecosystem upside down and, hopefully, make way for a new generation of themes and plugins powered by blocks—a phenomenon other content management systems have embraced for some time.

This means two things for the developers like myself who neglected to “learn JavaScript deeply” back in 2015:

  1. There's still time.
  2. We now have a very specific, real-world context for our learning.

This is great news! I’ll speak for myself, but I find it much easier to learn a technology when I have a specific application for it. With that, I invite you to join me on a journey to "learn Gutenberg deeply" and, in lieu of that, a solid chunk of JavaScript and React.

The topic of this series is not original. The WordPress community has already risen to the task of creating excellent resources for Gutenberg development, and I recommend you read and watch as many as you can! While this series covers some of the same information, the goal here is to structure the content in a way where you, reader, have to work a little bit to get the answers—sort of like a class or guidebook with questions and homework.

Yes, we will create a block, but along the way we'll stray from the block-building and learn about the environment setup, about APIs, and other development concepts and terminology that took me a good while to understand. At the end of the day, I hope to help you feel more confident experimenting with code when there isn't airtight documentation or loads of Stack Overflow posts to fall back on.

A bit from Andy

The new, upcoming WordPress editor is bringing a wealth of opportunity to content producers and developers alike, but with that, it brings a whole new JavaScript powered stack to learn. For some, that’s causing some worry, so I thought I’d get involved and try and help. We’re going to dive into some core JavaScript concepts and build out a custom block that will power a classic design pattern—a card. This design pattern gives us plenty to think about, such as variable content and media.

Before we dive into that, we’re going to look at the new JavaScript stack and the tools that it gives us. We’re also going to take a look at a React component to give us a primer on reactive, state-driven JavaScript and JSX.

With all of that knowledge and those tools, by the end of this tutorial series we’ll have a solid custom card block. It can also serve as a baseline for many other types of blocks for maintaining website content.

Before we dig in, let’s get our machines set up with the right tools to do the job. Because we’re using the modern JavaScript stack, there are some browsers that don’t yet support its language features, so we’ll be using some Node.js-based tools to compile this code into a more cross-compatible form.

Get Node.js running

Our setups vary wildly, so I’m going to point you to the official Node.js website for this, where you’ll find handy installers. There’s a really useful page that explains how you can use popular package managers too, here.

Get your terminal running

We’re going to be using our terminals to run some commands later in the series, so get yours setup. I like iTerm, but that’s only for Mac so here’s some resources for both Mac and Windows users:

  • Mac: You can use the default Terminal app, which is located at Applications > Utilities > Terminal
  • Windows: You can either get the Windows Subsystem for Linux running, use Command Prompt, or get software like Hyper
Get a local WordPress instance running

Because we’re using the modern JavaScript stack, it’s important to get an instance of WordPress running locally on your machine. If you haven’t done that before, check out this guide. I strongly recommend that you download something like MAMP, XAMPP, or similar if you’re new to this.

Once you have a local instance running, have a theme ready to play with as we’ll be diving into a little bit of theme code later on.

Modern JavaScript can be daunting if you’re not working with it day-to-day, so together, we’re going to dig into some core elements of the modern version of JavaScript, commonly known as ES6.

We’re then going to take that knowledge and use it to build a React component. React is an incredibly popular framework, but again, it’s quite daunting, so we’re going to dig in together to try and reduce that. The Gutenberg JavaScript setup very much resembles React, so it’s also an exercise in getting familiar with component based, reactive JavaScript frameworks in general.

Once we’ve covered that, I’m hoping you’re going to be feeling pretty awesome, so we’re going to take that momentum and dive into the sometimes dreaded webpack, which is a tool we’re going to use to process our JavaScript and smoosh it together. We’re also going to get Babel running, which will magically turn our ES6 code into better-supported ES5 code.

At this point, I think you’ll be full of confidence in this stack, so we’re going to get stuck into the main course—we’ll build out our custom card block.

Sound good? Awesome. Let’s dive in!

Article Series:
  1. Series Introduction (This Post)
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block (Coming Soon!)
  4. Modern JavaScript Syntax (Coming Soon!)
  5. React 101 (Coming Soon!)
  6. Setting up a Custom webpack (Coming Soon!)
  7. A Custom "Card" Block (Coming Soon!)

The post Learning Gutenberg: Series Introduction appeared first on CSS-Tricks.


Experience, Not Conversion, is the Key to the Switching Economy

Planet MySQL - Mon, 05/21/2018 - 06:00

In a world increasingly defined by instant gratification, the demand for positive and direct shopping experiences has risen exponentially. Today’s always-on customers are drawn to the most convenient products and services available. As a result, we are witnessing higher customer switching rates, with consumers focusing more on convenience than on branding, reputation, or even price.

In this switching economy – where information and services are always just a click away –  we tend to reach for what suits our needs in the shortest amount of time. This shift in decision making has made it harder than ever for businesses to build loyalty among their customers and to guarantee repeat purchases. According to recent research, only 1 in 5 consumers now consider it a hassle to switch between brands, while a third would rather shop for better deals than stay loyal to a single organization. 

What's Changed? 

The consumer mindset, for one. The switching tools available to customers have also changed. Customers now have the ability to research extensively before they purchase, with access to reviews and price comparison sites often meaning that consumers don’t even make it to your website before being captured by a competitor.

This poses a serious concern for those brands that have devoted their time – and marketing budgets – to building great customer experiences across their websites. 

Clearly this is not to say that on-site experiences aren’t important, but rather that they are only one part of the wider customer journey. In an environment as complex and fast moving as the switching economy, you must look to take a more omnichannel approach to experience, examining how your websites, mobile apps, customer service teams, external reviews and in-store experiences are all shaping the customers’ perceptions of your brand. 

What Still Needs to Change?

Only by getting to know your customers across all of these different channels can you future-proof your brand in the switching economy. To achieve this, you must establish a new set of metrics that go beyond website conversion. The days of conversion optimization being viewed as the secret sauce for competitive differentiation are over; now brands must recognize that high conversion rates are not necessarily synonymous with a great customer experience – or lifetime loyalty. 

Today, the real measure of success does not come from conversion, but from building a true understanding of your customers – across every touchpoint in the omnichannel journey. Through the rise of experience analytics, you finally have the tools and technologies needed to understand customers in this way, and to tailor all aspects of your brand to maximize convenience, encourage positive mindsets and pre-empt when your customers are planning to switch to a different brand. 

It is only through this additional layer of insight that businesses and brands will rebuild the notion of customer loyalty, and ultimately, overcome the challenges of the switching economy. 

Want to learn more about simplifying and improving the customer experience? Read Customer Experience Simplified: Deliver The Experience Your Customers Want to discover how to provide customer experiences that are managed as carefully as the product, the price, and the promotion of the marketing mix.


Understanding Deadlocks in MySQL & PostgreSQL

Planet MySQL - Mon, 05/21/2018 - 03:16

When working with databases, concurrency control is the concept that ensures that database transactions are performed concurrently without violating data integrity.

There is a lot of theory and different approaches around this concept and how to accomplish it, but we will briefly refer to the way that PostgreSQL and MySQL (when using InnoDB) handle it, and a common problem that can arise in highly concurrent systems: deadlocks.

These engines implement concurrency control by using a method called MVCC (Multiversion Concurrency Control). In this method, when an item is being updated, the changes will not overwrite the original data, but instead a new version of the item (with the changes) will be created. Thus we will have several versions of the item stored.

One of the main advantages of this model is that locks acquired for querying (reading) data do not conflict with locks acquired for writing data, and so reading never blocks writing and writing never blocks reading.

But, if several versions of the same item are stored, which version of it will a transaction see? To answer that question we need to review the concept of transaction isolation. Transactions specify an isolation level, which defines the degree to which one transaction must be isolated from resource or data modifications made by other transactions. This degree is directly related to the locking generated by a transaction, and so, since it can be specified at the transaction level, it can determine the impact that a running transaction has on other running transactions.
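For example, in PostgreSQL the isolation level can be raised for a single transaction like this (MySQL accepts similar syntax, though its SET TRANSACTION must be issued before the transaction starts); the country table is from the world sample database used later in this post:

```sql
-- PostgreSQL: set this transaction's isolation level before its first query.
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT region FROM country WHERE code = 'NLD';
COMMIT;
```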

This is a very interesting and long topic, although we will not go into too much detail in this blog. We’d recommend the PostgreSQL and MySQL official documentation for further reading on this topic.

So, why are we going into the above topics when dealing with deadlocks? Because SQL commands automatically acquire locks to ensure MVCC behaviour, and the lock type acquired depends on the transaction isolation level defined.

There are several types of locks (again, another long and interesting topic to review for PostgreSQL and MySQL), but the important thing about them is how they interact with each other (more exactly, how they conflict). Why is that? Because two transactions cannot hold locks of conflicting modes on the same object at the same time. And a not-so-minor detail: once acquired, a lock is normally held until the end of the transaction.

This is a PostgreSQL example of how locking types conflict with each other:

PostgreSQL Locking types conflict

And for MySQL:

MySQL Locking types conflict

X = exclusive lock    IX = intention exclusive lock
S = shared lock       IS = intention shared lock
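For these four InnoDB lock modes, the MySQL documentation summarizes the conflicts as follows:

```text
        X           IX          S           IS
X       conflict    conflict    conflict    conflict
IX      conflict    compatible  conflict    compatible
S       conflict    conflict    compatible  compatible
IS      conflict    compatible  compatible  compatible
```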

So what happens when I have two running transactions that want to hold conflicting locks on the same object at the same time? One of them will get the lock and the other will have to wait.

So now we are in a position to truly understand what is happening during a deadlock.

What is a deadlock, then? As you can imagine, there are several definitions for a database deadlock, but I like the following for its simplicity.

A database deadlock is a situation in which two or more transactions are waiting for one another to give up locks.

So for example, the following situation will lead us to a deadlock:

Deadlock example

Here, the application A gets a lock on table 1 row 1 in order to make an update.

At the same time application B gets a lock on table 2 row 2.

Now application A needs to get a lock on table 2 row 2, in order to continue the execution and finish the transaction, but it cannot get the lock because it is held by application B. Application A needs to wait for application B to release it.

But application B needs to get a lock on table 1 row 1, in order to continue the execution and finish the transaction, but it cannot get the lock because it is held by application A.

So here we are in a deadlock situation: application A is waiting for the resource held by application B in order to finish, and application B is waiting for the resource held by application A. How do we continue? The database engine will detect the deadlock and kill one of the transactions, unblocking the other one and raising a deadlock error in the killed one.
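Conceptually, what the engine does is walk the waits-for graph of transactions looking for a cycle. A toy Python sketch of that idea (not how any engine actually implements it):

```python
# Toy deadlock detector: transactions form a waits-for graph; a cycle means
# a deadlock, and one participant must be rolled back to break it.

def find_deadlock(waits_for):
    """waits_for maps each blocked transaction to the one it is waiting on.
    Returns the cycle as a list of transactions, or None if there is none."""
    for start in waits_for:
        seen = []
        tx = start
        while tx in waits_for:
            if tx in seen:
                return seen[seen.index(tx):]   # cycle found
            seen.append(tx)
            tx = waits_for[tx]
    return None

# Application A waits for B (row in table 2), B waits for A (row in table 1).
assert find_deadlock({"A": "B", "B": "A"}) == ["A", "B"]

# No cycle: A waits for B, but B is not waiting on anyone.
assert find_deadlock({"A": "B"}) is None
```

Once a cycle is found, the engine picks a victim (by cost heuristics in the real implementations) and rolls it back, which is exactly the error the killed session sees.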

Let's check some PostgreSQL and MySQL deadlock examples:

PostgreSQL

Suppose we have a test database with information from the countries of the world.

world=# SELECT code,region,population FROM country WHERE code IN ('NLD','AUS');
 code |          region           | population
------+---------------------------+------------
 NLD  | Western Europe            |   15864000
 AUS  | Australia and New Zealand |   18886000
(2 rows)

We have two sessions that want to make changes to the database.

The first session will modify the region field for the NLD code, and the population field for the AUS code.

The second session will modify the region field for the AUS code, and the population field for the NLD code.

Table data:

code: NLD
region: Western Europe
population: 15864000

code: AUS
region: Australia and New Zealand
population: 18886000

Session 1:

world=# BEGIN;
BEGIN
world=# UPDATE country SET region='Europe' WHERE code='NLD';
UPDATE 1

Session 2:

world=# BEGIN;
BEGIN
world=# UPDATE country SET region='Oceania' WHERE code='AUS';
UPDATE 1
world=# UPDATE country SET population=15864001 WHERE code='NLD';

Session 2 will hang waiting for Session 1 to finish.

Session 1:

world=# UPDATE country SET population=18886001 WHERE code='AUS';
ERROR:  deadlock detected
DETAIL:  Process 1181 waits for ShareLock on transaction 579; blocked by process 1148.
Process 1148 waits for ShareLock on transaction 578; blocked by process 1181.
HINT:  See server log for query details.
CONTEXT:  while updating tuple (0,15) in relation "country"

Here we have our deadlock. The system detected the deadlock and killed session 1.

Session 2:

world=# BEGIN;
BEGIN
world=# UPDATE country SET region='Oceania' WHERE code='AUS';
UPDATE 1
world=# UPDATE country SET population=15864001 WHERE code='NLD';
UPDATE 1

And we can check that the second session finished correctly after the deadlock was detected and the Session 1 was killed (thus, the lock was released).

To have more details we can see the log in our PostgreSQL server:

2018-05-16 12:56:38.520 -03 [1181] ERROR:  deadlock detected
2018-05-16 12:56:38.520 -03 [1181] DETAIL:  Process 1181 waits for ShareLock on transaction 579; blocked by process 1148.
        Process 1148 waits for ShareLock on transaction 578; blocked by process 1181.
        Process 1181: UPDATE country SET population=18886001 WHERE code='AUS';
        Process 1148: UPDATE country SET population=15864001 WHERE code='NLD';
2018-05-16 12:56:38.520 -03 [1181] HINT:  See server log for query details.
2018-05-16 12:56:38.520 -03 [1181] CONTEXT:  while updating tuple (0,15) in relation "country"
2018-05-16 12:56:38.520 -03 [1181] STATEMENT:  UPDATE country SET population=18886001 WHERE code='AUS';
2018-05-16 12:59:50.568 -03 [1181] ERROR:  current transaction is aborted, commands ignored until end of transaction block

Here we can see the actual commands that were involved in the deadlock.

MySQL

To simulate a deadlock in MySQL we can do the following.

As with PostgreSQL, suppose we have a test database with information on actors and movies among other things.

mysql> SELECT first_name,last_name FROM actor WHERE actor_id IN (1,7);
+------------+-----------+
| first_name | last_name |
+------------+-----------+
| PENELOPE   | GUINESS   |
| GRACE      | MOSTEL    |
+------------+-----------+
2 rows in set (0.00 sec)

We have two processes that want to make changes to the database.

The first process will modify the field first_name for actor_id 1, and the field last_name for actor_id 7.

The second process will modify the field first_name for actor_id 7, and the field last_name for actor_id 1.

Table data:

actor_id: 1
first_name: PENELOPE
last_name: GUINESS

actor_id: 7
first_name: GRACE
last_name: MOSTEL

Session 1:

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> UPDATE actor SET first_name='GUINESS' WHERE actor_id='1';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Session 2:

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> UPDATE actor SET first_name='MOSTEL' WHERE actor_id='7';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1';

Session 2 will hang waiting for Session 1 to finish.

Session 1:

mysql> UPDATE actor SET last_name='GRACE' WHERE actor_id='7';
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

Here we have our deadlock. The system detected the deadlock and killed session 1.

Session 2:

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> UPDATE actor SET first_name='MOSTEL' WHERE actor_id='7';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1';
Query OK, 1 row affected (8.52 sec)
Rows matched: 1  Changed: 1  Warnings: 0

As the error shows, and just as we saw for PostgreSQL, there was a deadlock between the two processes.

For more details we can use the command SHOW ENGINE INNODB STATUS\G:

mysql> SHOW ENGINE INNODB STATUS\G
------------------------
LATEST DETECTED DEADLOCK
------------------------
2018-05-16 18:55:46 0x7f4c34128700
*** (1) TRANSACTION:
TRANSACTION 1456, ACTIVE 33 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 54, OS thread handle 139965388506880, query id 15876 localhost root updating
UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1456 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 0000000005af; asc       ;;
 2: len 7; hex 2d000001690110; asc -   i  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afca8b3; asc Z   ;;
*** (2) TRANSACTION:
TRANSACTION 1455, ACTIVE 47 sec starting index read, thread declared inside InnoDB 5000
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 53, OS thread handle 139965267871488, query id 16013 localhost root updating
UPDATE actor SET last_name='GRACE' WHERE actor_id='7'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1455 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 0000000005af; asc       ;;
 2: len 7; hex 2d000001690110; asc -   i  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afca8b3; asc Z   ;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1455 lock_mode X locks rec but not gap waiting
Record lock, heap no 202 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0007; asc   ;;
 1: len 6; hex 0000000005b0; asc       ;;
 2: len 7; hex 2e0000016a0110; asc .   j  ;;
 3: len 6; hex 4d4f5354454c; asc MOSTEL;;
 4: len 6; hex 4d4f5354454c; asc MOSTEL;;
 5: len 4; hex 5afca8c1; asc Z   ;;
*** WE ROLL BACK TRANSACTION (2)

Under the title "LATEST DETECTED DEADLOCK", we can see details of our deadlock.

To see the details of the deadlock in the MySQL error log, we must enable the option innodb_print_all_deadlocks in our database.

mysql> set global innodb_print_all_deadlocks=1; Query OK, 0 rows affected (0.00 sec)

MySQL Log Error:

2018-05-17T18:36:58.341835Z 12 [Note] InnoDB: Transactions deadlock detected, dumping detailed information.
2018-05-17T18:36:58.341869Z 12 [Note] InnoDB: *** (1) TRANSACTION:
TRANSACTION 1812, ACTIVE 42 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 11, OS thread handle 140515492943616, query id 8467 localhost root updating
UPDATE actor SET last_name='PENELOPE' WHERE actor_id='1'
2018-05-17T18:36:58.341945Z 12 [Note] InnoDB: *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1812 lock_mode X locks rec but not gap waiting
Record lock, heap no 204 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 000000000713; asc       ;;
 2: len 7; hex 330000016b0110; asc 3   k  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afdcb89; asc Z   ;;
2018-05-17T18:36:58.342347Z 12 [Note] InnoDB: *** (2) TRANSACTION:
TRANSACTION 1811, ACTIVE 65 sec starting index read, thread declared inside InnoDB 5000
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 12, OS thread handle 140515492677376, query id 9075 localhost root updating
UPDATE actor SET last_name='GRACE' WHERE actor_id='7'
2018-05-17T18:36:58.342409Z 12 [Note] InnoDB: *** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1811 lock_mode X locks rec but not gap
Record lock, heap no 204 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0001; asc   ;;
 1: len 6; hex 000000000713; asc       ;;
 2: len 7; hex 330000016b0110; asc 3   k  ;;
 3: len 7; hex 4755494e455353; asc GUINESS;;
 4: len 7; hex 4755494e455353; asc GUINESS;;
 5: len 4; hex 5afdcb89; asc Z   ;;
2018-05-17T18:36:58.342793Z 12 [Note] InnoDB: *** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 23 page no 3 n bits 272 index PRIMARY of table `sakila`.`actor` trx id 1811 lock_mode X locks rec but not gap waiting
Record lock, heap no 205 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 2; hex 0007; asc   ;;
 1: len 6; hex 000000000714; asc       ;;
 2: len 7; hex 340000016c0110; asc 4   l  ;;
 3: len 6; hex 4d4f5354454c; asc MOSTEL;;
 4: len 6; hex 4d4f5354454c; asc MOSTEL;;
 5: len 4; hex 5afdcba0; asc Z   ;;
2018-05-17T18:36:58.343105Z 12 [Note] InnoDB: *** WE ROLL BACK TRANSACTION (2)

Taking into account what we have learned above about why deadlocks happen, you can see that there is not much we can do on the database side to avoid them. Still, as DBAs it is our duty to catch them, analyze them, and provide feedback to the developers.

The reality is that these errors are particular to each application, so you will need to check them one by one, and there is no guide that tells you how to troubleshoot them. With that in mind, there are some things you can look for.

Search for long-running transactions. As locks are usually held until the end of a transaction, the longer the transaction, the longer the locks are held over the resources. If possible, try to split long-running transactions into smaller/faster ones.

Sometimes it is not possible to actually split the transactions, so the work should focus on trying to execute those operations in a consistent order each time, so that transactions form well-defined queues and do not deadlock.
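If transactions always take their locks on shared resources in the same fixed order, they queue up instead of deadlocking. A minimal sketch of that idea with Python threads (a model of the locking pattern, not database code; the resource names are invented for illustration):

```python
import threading

# Two resources standing in for "table 1 row 1" and "table 2 row 2".
locks = {"t1.r1": threading.Lock(), "t2.r2": threading.Lock()}

def update(resources):
    # Always lock in sorted name order, regardless of the order the caller
    # would naturally touch the rows in -- this prevents the A/B deadlock.
    ordered = sorted(resources)
    for name in ordered:
        locks[name].acquire()
    try:
        pass  # the actual updates would happen here
    finally:
        for name in reversed(ordered):
            locks[name].release()

# Application A "wants" t1.r1 then t2.r2; application B the opposite order.
a = threading.Thread(target=update, args=(["t1.r1", "t2.r2"],))
b = threading.Thread(target=update, args=(["t2.r2", "t1.r1"],))
a.start(); b.start()
a.join(timeout=5); b.join(timeout=5)
assert not a.is_alive() and not b.is_alive()   # both finished: no deadlock
```

Without the sorting step, the two workers could each grab one lock and then wait forever for the other, which is exactly the deadlock scenario described above.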

One workaround you can also propose is to add retry logic to the application (of course, try to solve the underlying issue first), so that if a deadlock happens, the application will run the same commands again.
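Such retry logic can be sketched as follows. The `DeadlockError` class and the transaction callable here are hypothetical stand-ins for whatever your driver raises (for example, MySQL error 1213 or PostgreSQL's deadlock-detected error) and for your application code:

```python
import time

class DeadlockError(Exception):
    """Stand-in for the driver's deadlock exception (e.g. MySQL error 1213)."""

def run_with_retry(txn, max_attempts=3, backoff=0.01):
    """Run a transaction callable, retrying if the engine kills it as a
    deadlock victim. Fix the root cause first; this is only a safety net."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            time.sleep(backoff * attempt)   # brief backoff before retrying

# Simulate a transaction that is chosen as the deadlock victim once.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError("Deadlock found when trying to get lock")
    return "committed"

assert run_with_retry(flaky_txn) == "committed"
assert calls["n"] == 2
```

The important detail is that the whole transaction is re-run from the beginning: the killed transaction was rolled back, so replaying only the last statement would lose the earlier work.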

Check the isolation levels used; sometimes it helps to change them. Look for commands like SELECT FOR UPDATE and SELECT FOR SHARE, as they generate explicit locks, and evaluate whether they are really needed or whether you can work with an older snapshot of the data. If you cannot remove these commands, one thing you can try is using a lower isolation level such as READ COMMITTED.

Of course, always add well-chosen indexes to your tables. Then your queries need to scan fewer index records and consequently set fewer locks.


On a higher level, as a DBA you can take some precautions to minimize locking in general. To name one example, in this case for PostgreSQL: avoid adding a column and its default value in the same command. Altering a table takes a really aggressive lock, and setting a default value for the new column will actually update the existing rows that have null values, making the operation take a really long time. If you split this operation into several commands (adding the column, adding the default, updating the null values), you will minimize the locking impact.

Of course, there are tons of tips like this that DBAs pick up with practice (creating indexes concurrently, creating the PK index separately before adding the PK, and so on), but the important thing is to learn and understand this way of thinking and always minimize the locking impact of the operations we are doing.

Tags:  deadlock locking PostgreSQL MySQL
Categories: Web Technologies

What is JavaScript? Creator Brendan Eich explains

InfoWorld JavaScript - Mon, 05/21/2018 - 03:00
Brendan Eich, creator of the JavaScript programming language, explains how the language is used, and why it's still a favorite among programmers for its ease of use.
Categories: Web Technologies

Recommended OTN seminars and events for engineers across Japan (June 2018)

Planet MySQL - Mon, 05/21/2018 - 00:16
[Tokyo] MySQL 8.0 New Features Seminar
Optimizer & Performance

June 1, 2018, 1:30 PM - 4:00 PM @ Oracle Japan Headquarters (Gaienmae, Tokyo)

This event introduces topics from the new features in MySQL 8.0 related to the optimizer and to performance/scalability improvements.

Benefits of attending this event:
- Learn about the optimizer improvements and new features in MySQL 8.0
- Learn about the performance and scalability improvements and new features in MySQL 8.0

[Osaka] Oracle Journey to Cloud Seminar
~ Keys to building the best database infrastructure in the cloud era ~

June 27, 2018, 2:00 PM - 4:30 PM @ Oracle Japan Kansai Office (Osaka)

We will cover the latest information on the newest Oracle Database, including the world's first autonomous database cloud concept announced in autumn 2017, and how to optimize database infrastructure for the cloud era. We will also introduce key points for choosing a platform to support this latest database infrastructure, along with customer case studies.

Categories: Web Technologies

How to Fix Magento Login Issues with Cookies and Sessions - SitePoint PHP

Planet PHP - Sun, 05/20/2018 - 23:00

This article was created in partnership with Ktree. Thank you for supporting the partners who make SitePoint possible.

In this article we are looking at how Magento cookies can create issues with the login functionality of both the customer-facing front-end and the admin back-end, why this occurs, and how it should be resolved.

This is also known as the looping issue, as the screen redirects back to the same screen, even though the username and password are correct.

A script is provided at the end of the article which can help detect a few of the issues. Feel free to use and modify as per your needs.

What is a Cookie?

A cookie is a piece of text that a web server can store on a user's hard drive and later retrieve. Magento uses cookies in the cart and backend admin functionality, and they may be the source of a few problems when you are unable to log in to Magento.

What is a Session?

A session is an array variable on the server side, which stores information to be used across multiple pages. For example, items added to the cart are typically saved in sessions, and when the user browses the checkout page they are read from the session.

Sessions are identified by a unique ID. Its name changes depending on the programming language; in PHP it is called a 'PHP Session ID'. As you might have guessed, the same PHP Session ID needs to be stored as a cookie in the client browser to relate the two.
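The relationship can be illustrated with a toy in-memory session store: the browser holds only the session ID (in a cookie), while the data lives server-side, keyed by that ID. This is a conceptual sketch in Python, not Magento's actual implementation:

```python
import secrets

# Server-side session storage: the data lives here, keyed by session ID.
sessions = {}

def start_session():
    """Create a session and return the ID the server would set as a cookie
    (e.g. Set-Cookie: PHPSESSID=<sid>)."""
    sid = secrets.token_hex(16)
    sessions[sid] = {}
    return sid

def handle_request(cookie_sid):
    """Look up the session the cookie refers to. A missing or unknown ID
    means the user effectively has no session (e.g. appears logged out)."""
    return sessions.get(cookie_sid)

sid = start_session()
sessions[sid]["cart"] = ["item-1"]   # stored server-side, never in the cookie

assert handle_request(sid) == {"cart": ["item-1"]}
assert handle_request("stale-or-wrong-id") is None   # looping-login territory
```

When the browser cannot send back the right session ID (wrong cookie domain, expired cookie, mismatched session storage), every request starts from an empty session, which is exactly what the login loop described below looks like.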

Magento's storage of Sessions

Magento can store sessions via multiple session providers, and this can be configured in the Magento config file at app/etc/local.xml. The following session providers can be chosen:

File

<session_save><![CDATA[files]]></session_save>
<session_save_path><![CDATA[/tmp/session]]></session_save_path>

Database

Allowing sessions to store themselves in the database is done in /app/etc/local.xml by adding <session_save><![CDATA[db]]></session_save>.

Magento then stores these sessions in the core_session table.

Redis

<session_save>db</session_save>
<redis_session>
  <host>127.0.0.1</host>
  <port>6379</port>
</redis_session>

MemCache

<session_save><![CDATA[memcache]]></session_save>
<session_save_path>
  <![CDATA[tcp://localhost:11211?persistent=1&weight=2&timeout=10&retry_interval=10]]>
</session_save_path>

Magento Usage

Magento uses two different cookies, named 'frontend' and 'adminhtml'. The first one is created when any page is browsed; the same cookie is also updated whenever the customer logs in. The second one is created when a backend user logs in. You can check whether the cookies have been created via Inspect Element > Application, as in the picture below (from Chrome):

Cookies are configured in Magento via the admin configuration menu: System > Configuration > General > Web.

Problem: Login Fails & Redirects to Login Page

If you haven't experienced this problem, then you haven't worked with Magento long enough!

This is how it typically happens: when you log in by entering your username and password, you are redirected back to the same login page and URL, with a nonce id appended in your browser. This happens for both the customer front-end and the Magento back-end login.

Let's look at a few reasons why this happens, and how we should resolve those issues.

Reason #1: Cookie domain does not match server domain

Let's say your Magento site is example.com and the cookie domain in Magento is configured as xyz.com.

In this scenario both Magento cookies will set Domain Value as xyz.com, but for validating the session Magento will consider the domain through which the site was accessed — in this case example.com. Since it won't be able to find an active ses

Truncated by Planet PHP, read more at the original (another 3840 bytes)

Categories: Web Technologies

How to set up MySQL InnoDB Cluster? Part One

Planet MySQL - Sat, 05/19/2018 - 11:09

This post is about setting up MySQL InnoDB Cluster with 5 nodes on a sandbox deployment. Here we focus on the implementation; the core concepts will be explained in separate posts.


Prerequisites:
  • MySQL Engine
  • MySQL Shell
  • MySQL Router
Deploying MySQL InnoDB Cluster involves the following steps:
  • Deploying MySQL Engine (Sandbox Instance)
  • Creating an InnoDB Cluster
  • Adding nodes to InnoDB Cluster
  • Configuring MySQL Router for High Availability.
  • Testing High Availability.

Deploying MySQL Engine:

If the MySQL engines are already installed on all the nodes, you can skip this step and directly move into creating an InnoDB Cluster part.


I am deploying 5 sandbox instances (a feature built into the MySQL Shell application) on the same machine. On a production system, there will be a separate node for each MySQL engine. Let's begin with the deployments:


To open MySQL Shell     : Start -> cmd -> Type mysqlsh (OR) Start -> MySQL Shell


To change script mode  : \JS – JavaScript Mode | \PY – Python Mode | \SQL – SQL Mode


MySQL JS > dba.deploySandboxInstance(port)


The deploySandboxInstance() method deploys a new sandbox instance on the given port. Let's deploy the following 5 sandbox instances:


dba.deploySandboxInstance (3307)

dba.deploySandboxInstance (3308)

dba.deploySandboxInstance (3309)

dba.deploySandboxInstance (3310)

dba.deploySandboxInstance (3311)


Sample Output:


MySQL JS > dba.deploySandboxInstance (3307)

A new MySQL sandbox instance will be created on this host in

C:\Users\rathish.kumar\MySQL\mysql-sandboxes\3307

Warning: Sandbox instances are only suitable for deploying and running on your local machine for testing purposes and are not accessible from external networks.

Please enter a MySQL root password for the new instance: ***

Deploying new MySQL instance...

Instance localhost:3307 successfully deployed and started.

Use shell.connect('root@localhost:3307'); to connect to the instance.

MySQL JS >


To connect the deployed sandbox instance:


MySQL JS > \connect user@host:port and enter the password when prompted. (OR)

MySQL JS > shell.connect(‘user@host:port’)


Sample Output:


MySQL localhost: 3307 ssl JS > \connect root@localhost:3307

Creating a session to 'root@localhost:3307'

Enter password: ***

Fetching schema names for auto completion... Press ^C to stop.

Closing old connection...

Your MySQL connection id is 16

Server version: 8.0.11 MySQL Community Server - GPL

No default schema selected; type \use to set one.

MySQL localhost:3307 ssl JS > \sql

Switching to SQL mode... Commands end with ;

MySQL localhost: 3307 ssl SQL > select @@port;

+--------+

| @@port |

+--------+

|   3307 |

+--------+

1 row in set (0.0006 sec)

MySQL localhost: 3307 ssl SQL >


Creating InnoDB Cluster:


To create an InnoDB cluster, connect to the seed (primary) server, which contains the original data, using the method above, and run:
var cluster = dba.createCluster('ClusterName')
Sample Output:
MySQL localhost:3307 ssl JS > var cluster = dba.createCluster('DBCluster')
A new InnoDB cluster will be created on instance 'root@localhost:3307'.
Validating instance at localhost:3307...
Instance detected as a sandbox.

Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as L-IS-RATHISH

Instance configuration is suitable.

Creating InnoDB cluster 'DBCluster' on 'root@localhost:3307'...

Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.

At least 3 instances are needed for the cluster to be able to withstand up to one server failure.
Adding nodes to InnoDB Cluster:
The secondary replica nodes are added to the cluster using the addInstance() method.


mysql-js> cluster.addInstance('user@host:port')


Let us add the nodes, one by one:


cluster.addInstance('root@localhost:3308');

cluster.addInstance('root@localhost:3309');

cluster.addInstance('root@localhost:3310');

cluster.addInstance('root@localhost:3311');
Sample Output:
MySQL  localhost:3307 ssl  JS > cluster.addInstance('root@localhost:3311');

A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3311': ***

Adding instance to the cluster ...

Validating instance at localhost:3311...

Instance detected as a sandbox.

Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as L-IS-RATHISH

Instance configuration is suitable.

The instance 'root@localhost:3311' was successfully added to the cluster.
Configuring MySQL Router for High Availability:
MySQL Router routes client connections to servers in the cluster and it provides separate ports for Read and Read/Write operations.


MySQL Router takes its configuration from InnoDB Cluster's metadata and configures itself using the --bootstrap option. It is recommended to install MySQL Router on a separate server, or it can be installed on the application server.


The MySQL Router command is given below, this should be run on the server with Read/Write (R/W) role.


shell> mysqlrouter --bootstrap user@host:port


The server roles can be checked by using the status() method. Let us check the status of our cluster:


MySQL  localhost:3307 ssl  JS > cluster.status()

{

    "clusterName": "DBCluster",

    "defaultReplicaSet": {

        "name": "default",

        "primary": "localhost:3307",

        "ssl": "REQUIRED",

        "status": "OK",

        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.",

        "topology": {

            "localhost:3307": {

                "address": "localhost:3307",

                "mode": "R/W",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3308": {

                "address": "localhost:3308",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3309": {

                "address": "localhost:3309",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3310": {

                "address": "localhost:3310",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            },

            "localhost:3311": {

                "address": "localhost:3311",

                "mode": "R/O",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            }

        }

    },

    "groupInformationSourceMember": "mysql://root@localhost:3307"

}

 MySQL  localhost:3307 ssl  JS >
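When scripting against InnoDB Cluster, the current R/W instance can be picked out of a cluster.status()-style document programmatically. A hedged Python sketch, assuming the JSON shape shown in the sample output above:

```python
def find_primary(status):
    """Return the address of the ONLINE instance with the R/W mode from a
    cluster.status()-style document, or None if there is no such instance."""
    topology = status["defaultReplicaSet"]["topology"]
    for address, info in topology.items():
        if info["mode"] == "R/W" and info["status"] == "ONLINE":
            return address
    return None

# Trimmed-down version of the status document shown above.
status = {
    "clusterName": "DBCluster",
    "defaultReplicaSet": {
        "primary": "localhost:3307",
        "topology": {
            "localhost:3307": {"mode": "R/W", "status": "ONLINE"},
            "localhost:3308": {"mode": "R/O", "status": "ONLINE"},
        },
    },
}
assert find_primary(status) == "localhost:3307"
```

In practice you could feed this the output of `mysqlsh --execute "print(dba.getCluster().status())"` or similar; the point is that the primary's address is machine-readable, not something you need to eyeball.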


The server root@localhost:3307 is currently assigned with R/W role. Configure MySQL Router on this server:


C:\Windows\system32>mysqlrouter --bootstrap root@localhost:3307

Please enter MySQL password for root:

Reconfiguring system MySQL Router instance...

WARNING: router_id 1 not found in metadata

MySQL Router has now been configured for the InnoDB cluster 'DBCluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'DBCluster':

- Read/Write Connections: localhost:6446

- Read/Only Connections: localhost:6447

X protocol connections to cluster 'DBCluster':

- Read/Write Connections: localhost:64460

- Read/Only Connections: localhost:64470

Existing configurations backed up to 'C:/Program Files/MySQL/MySQL Router 8.0/mysqlrouter.conf.bak'


Connecting InnoDB Cluster:


From the MySQL Router configuration we get the connection information: by default, port 6446 is used for Read/Write connections and port 6447 for Read/Only connections. MySQL Router allows you to configure custom port numbers for R/W and R/O client connections.


Let us first connect to the Read/Write port, and then to the Read/Only port for testing.


Read/Write Instance:


C:\Users\rathish.kumar>mysql -u root -h localhost -P6446 -p

Enter password: *

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 176

Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select @@port;

+--------+

| @@port |

+--------+

|   3307 |

+--------+

1 row in set (0.00 sec)

mysql> create database ClustDB;

Query OK, 1 row affected (0.09 sec)

mysql> use ClustDB;

Database changed

mysql> create table t1 (id int auto_increment primary key);

Query OK, 0 rows affected (0.18 sec)

mysql> insert into t1 (id) values(1);

Query OK, 1 row affected (0.06 sec)


Read/Only Instance:


C:\Users\rathish.kumar>mysql -u root -h localhost -P6447 -p

Enter password: *

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 47

Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select @@port;

+--------+

| @@port |

+--------+

|   3308 |

+--------+

1 row in set (0.00 sec)

mysql> select * from ClustDB.t1;

+----+

| id |

+----+

|  1 |

+----+

1 row in set (0.00 sec)

mysql> insert into ClustDB.t1 (id) values (2);

ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

mysql>


Testing High Availability:


We have connected to the R/W and R/O instances, and they are working as expected. Now let's test high availability by killing the primary seed node (3307) and the Read/Only instance (3308).


dba.killSandboxInstance(3307)

dba.killSandboxInstance(3308)


Sample output:


MySQL  localhost:3307 ssl  JS > dba.killSandboxInstance(3307);

The MySQL sandbox instance on this host in

C:\Users\rathish.kumar\MySQL\mysql-sandboxes\3307 will be killed

Killing MySQL instance...

Instance localhost:3307 successfully killed.


Now re-run the query on the existing Read/Write and Read/Only connections and check the port:


Read/Only Instance:


mysql> select @@port;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    38

Current database: *** NONE ***

+--------+

| @@port |

+--------+

|   3310 |

+--------+

1 row in set (1.30 sec)

mysql>


This error is due to connection rerouting while we were still connected to the killed server. It will not occur on new connections. Let us try with the Read/Write connection:


Read/Write Instance:


C:\Users\rathish.kumar>mysql -u root -h localhost -P6446 -p

Enter password: *

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 32

Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select @@port;

+--------+

| @@port |

+--------+

|   3311 |

+--------+

1 row in set (0.00 sec)

mysql>


No changes are required on the application side: InnoDB Cluster identifies the failure and automatically reconfigures itself, and high availability is achieved with the help of MySQL Router.


I suggest you test InnoDB Cluster in a lab environment and share your findings in the comment section for other readers. I will follow up with other articles on working with InnoDB Cluster and troubleshooting InnoDB Cluster. If you need any assistance with InnoDB Cluster, please ask in the comment section.


Categories: Web Technologies

Presentation : MySQL Timeout Variables Explained

Planet MySQL - Fri, 05/18/2018 - 23:56

MySQL has multiple timeout variables. These slides give a brief overview of the different timeout variables and their purposes.

Categories: Web Technologies
