
My oldest still-open MySQL bug

Planet MySQL - Thu, 08/30/2018 - 19:53

This morning I received an update on a MySQL bug, someone added a comment on an issue I filed in November 2003, originally for MySQL 4.0.12 and 4.1.0. It’s MySQL bug#1956 (note the very low number!), “Parser allows naming of PRIMARY KEY”:

[25 Nov 2003 16:23] Arjen Lentz

Description:
When specifying a PRIMARY KEY, the parser still accepts a name for the index even though the MySQL server will always name it PRIMARY. So, the parser should NOT accept this, otherwise a user can get into a confusing situation as his input is ignored/changed.

How to repeat:
CREATE TABLE pk (i INT NOT NULL);
ALTER TABLE pk ADD PRIMARY KEY bla (i);

'bla' after PRIMARY KEY should not be accepted.

Suggested fix:
Fix grammar in parser.

Most likely we found it during a MySQL training session; training days have always been a great source of bugs, as students would be encouraged to try lots of quirky things and explore.

It’s not a critical issue, but one from the “era of sloppiness”, as I think we may now call it with the benefit of hindsight. At the time, it was just regarded as lenient: the parser would silently ignore things it couldn’t process in a particular context. Creating a foreign key on a table using an engine that didn’t support foreign keys, for instance, would see the foreign key silently disappear rather than produce an error. Many, if not most, of those quirks have been cleaned up over the years. Some are very difficult to get rid of, as people use them in code, so we’re essentially stuck with old bad behaviour as fixing it would break too many applications. One neat example of that is auto-casting:

SELECT "123 apples" + 1;
+------------------+
| "123 apples" + 1 |
+------------------+
|              124 |
+------------------+
1 row in set, 1 warning (0.000 sec)

SHOW WARNINGS;
+---------+------+------------------------------------------------+
| Level   | Code | Message                                        |
+---------+------+------------------------------------------------+
| Warning | 1292 | Truncated incorrect DOUBLE value: '123 apples' |
+---------+------+------------------------------------------------+

At least that one chucks a warning now, which is good as it allows developers to catch mistakes and prevent trouble. A lenient parser (or grammar) can be convenient, but it also enables application developers to be sloppy, which is not really a good thing. Something that creates a warning may be harmless, or it may indicate a nasty mistake: the parser can’t tell. So it’s best to ensure that code is clean, even free of warnings.

Going back to the named PRIMARY KEY issue… effectively, a PRIMARY KEY has the name ‘PRIMARY’ internally. So it can’t really have another name, and the parser should not accept an attempt to name it. The name silently disappears, so when you check back with SHOW CREATE TABLE or SHOW INDEXES, you won’t ever see it.
CREATE TABLE pkx (i INT PRIMARY KEY x) reports a syntax error. So far so good.
But, CREATE TABLE pkx (i INT, PRIMARY KEY x (i)) is accepted, as is ALTER TABLE pkx ADD PRIMARY KEY x (i).

MySQL 8.0 still allows these constructs, as does MariaDB 10.3.9!
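
For illustration, here is a minimal sketch of the silent rename (pkx2 is a throwaway table name for this demo; the SHOW CREATE TABLE output is abridged and varies by version):

CREATE TABLE pkx2 (i INT NOT NULL);
ALTER TABLE pkx2 ADD PRIMARY KEY bla (i);  -- accepted, no warning
SHOW CREATE TABLE pkx2;
-- ... PRIMARY KEY (`i`) ...               -- no trace of the name 'bla'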

The bug is somewhere in the parser, so that’s sql/sql_yacc.yy.  I’ve tweaked and added things in there before, but it’s been a while and particularly in parser grammar you have to be very careful that changes don’t have other side-effects.  I do think it’s worthwhile fixing even these minor issues.


CSS Shape Editors

CSS-Tricks - Thu, 08/30/2018 - 19:39

Firefox 62 is shipping out of beta on September 5th. The big notable thing for CSS developers is that it will now support the shape-outside property with polygon(), circle(), and ellipse(), joining Chrome and Safari.

What will be nice about the Firefox release (well, it's kinda already nice if you use something like Firefox Developer Edition, which is already on 62) is that it has a shape editor built right into DevTools.

Chrome supports shape-outside as well, but there is no native DevTools helper for working with them. Thankfully, Razvan Caliman has a Chrome Plugin that does a great job. (Razvan contributed to the Firefox version as well, I hear.)

I enjoy using shape-outside as it can add real visual interest to a page, and it isn't anywhere near overdone or trendy just yet. Plus, in a lot of cases, it doesn't matter whether it's supported, because the float simply behaves as a rectangle where it isn't. If it causes major issues, you can wrap things in an @supports block and do something different.

@supports (shape-outside: circle(50%)) {
  img {
    /* Only round the image if we can float around it too, otherwise leave it square. */
    shape-outside: circle(50%);
    border-radius: 50%;
  }
}

I do have a few gripes with both the Firefox DevTools and the Chrome plugin though...

  • I wish it was easier to add a new shape-outside to an existing element. You can do it, but you have to manually add something like shape-outside: polygon(0 0, 10px 0, 20px 0); to the element to kick off the tool, then start using it.
  • I wish they worked with % by default instead of px units.

That second one particularly. We size things so flexibly these days that hard pixel values are often useless and difficult to convert to flexible percentages.

You're probably better off starting with Bennett Feely's Clippy (it's technically for clip-path, but both properties take polygon() values, so it works for either). It works with percentages, so great, moving on from there.

"The frustrations of using CSS Shapes and CSS Exclusions"

That's what Ben Frain recently blogged, and he makes some good points about all this. One key point is that using shape-outside doesn't necessarily mean that you're clipping the background. Personally, I find that I'm cutting a shape on backgrounds that are transparent anyway, but I take the point.

To fix this you'll need a clip-path as well.

The other big one is that there's no shape-inside property (yet), so if you're hoping to put some text inside a shape rather than have it wrap around the outside of a shape, no luck for now.



The Ecological Impact of Browser Diversity

CSS-Tricks - Thu, 08/30/2018 - 07:01

Early in my career when I worked at agencies and later at Microsoft on Edge, I heard the same lament over and over: "Argh, why doesn’t Edge just run on Blink? Then I would have access to ALL THE APIs I want to use and would only have to test in one browser!"

Let me be clear: an Internet that runs only on Chrome’s engine, Blink, and its offspring, is not the paradise we like to imagine it to be.

As a Google Developer Expert who has worked on Microsoft Edge, with Firefox, and with the W3C as an Invited Expert, I have some opinions (and a number of facts) to drop on this topic. Let’s get to it.

What is a browser, even?

Let’s clear up some terminology.

Popular browsers you know today include Google Chrome, Apple Safari, Mozilla Firefox, and Microsoft Edge, but in the past we’ve also had such greats as NCSA Mosaic and Netscape Navigator. Whichever browser you use daily (I use Firefox, thanks for asking) is only an interface layer wrapped around a browser engine. All your bookmarks, the forward and backward arrows, that URL bar thingy—those aren’t the browser. Those are the browser’s interface. Often the people who build the browser’s engine never even touch the interface!

Browser engines are the things that actually read all the HTML and CSS and JavaScript that come down across the Internet, interpret them, and display a pretty picture of a web page for you. They have their own names. Chrome’s engine is Blink. Safari runs on WebKit. Firefox uses Gecko. Edge sits on top of EdgeHTML. (I like this naming convention. Good job, Edge.)

Except for Edge, all of these engines are open source, meaning anybody could grab one, wrap it in a new interface, and boom, release their own browser—maybe with a different (perhaps better) user experience—and that’s just what some browsers are! Oculus Browser, Brave, Vivaldi, Samsung Internet, Amazon’s Silk, and Opera all run on Blink. We call them “Chromium-based browsers”—Chromium is Google’s open source project from which Chrome and its engine emerged.

But what’s inside a browser engine? MOAR ENGINES! Every browser engine is composed of several other engines:

  • A layout and rendering engine (often so tightly coupled that there’s no distinction) that calculates how the page should look and handles any paints, renders, and even animations.
  • A JavaScript engine, which is its own thing and can even run independently of the browser altogether. For instance, you can use Chrome’s V8 engine or Microsoft Edge’s Chakra to run Node on a server.

I like to compare browser engines to biological cells. Where a cell contains many organelles to perform different functions, so too do browsers. One can think of the nucleus as the rendering engine, containing the blueprints for how the page should display, and the mitochondria as the JavaScript engine, powering our everyday interactions. (Fun fact: mitochondria used to be standalone cells at one point, too, and they even carry their own DNA!)

And you know what? Another way browsers are like living things is that they evolve.

Browser evolution

Back when the first browsers came out, it was a simpler time. CSS was considered the hot new thing when it first appeared in Microsoft Internet Explorer 3 in 1996! There were far fewer JavaScript APIs and CSS specifications to implement than there are today. Over the years, browser codebases have grown to support the number of new features users and developers require to build modern web experiences. It is a delicate evolution between user needs, browser engineering effort, and specification standardization processes.

We have three major lineages of engines right now:

  • WebKit and Blink (Blink originally being a fork of WebKit) running Safari, Chrome, and Opera
  • Gecko running Firefox
  • EdgeHTML (a fork of Trident, aka MSHTML) running Microsoft Edge

Each is very different and has different strengths and weaknesses. Each could pull the Web in a different direction alone: Firefox’s engine has multithreaded processing for rendering blazing fast graphics. Edge’s engine has the least abstraction from the operating system, giving it more direct access to system resources—but making it a Windows-only browser engine as a result. And Chrome’s Blink has the most web developers testing for it. (I'll get back to why this is a "feature" in a little bit.)

Remember all those Chromium-based browsers we talked about? Well, none of these browsers have to build their rendering engine or JavaScript engines from scratch: they just piggy-back off of Blink. And if there are new features they need? They can develop those features and keep them to themselves, or they can share those features back “upstream” to become a part of the core engine for other browsers to use. (This process is often fraught with politics and logistics—"contributing back" is easier said than done!)

It is hard to imagine any one entity justifying the hours and expense it would take to spin up a browser engine from scratch today. Even the current three engine families are evolutions of engines that were there from the very start of the Internet. They’ve evolved piecemeal alongside us, growing to meet our needs.

Right now, the vast majority of traffic on the Web is happening on Chrome, iOS Safari, or some other permutation of Blink or WebKit.

Branches, renovations, and gut jobs

Some developers would say that WebKit and Blink were forked so long ago that the two are practically completely different browser engines now, no longer able to share contributions. That may be true to a point. But while a common chimney swift and a ruby-throated hummingbird are completely different animals when compared to each other, next to the other animal families they are still very much birds. Neither's immediate offspring are likely to manifest teeth, hands, or tails in the near future. And neither WebKit nor Blink has the processing features that Gecko and EdgeHTML have been building up for years.

Other developers might point out that Microsoft Edge is supposedly a "complete rewrite" of Internet Explorer. But the difference between a "complete gut job" and a mere "renovation" can be a matter of perspective. EdgeHTML is a fork of Internet Explorer's Trident engine, and it still carries much of Trident's backlog with it.

Browser extinction

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

Imagine a planet populated only by hummingbirds, dolphins, and horses. Say all the dolphins died out. In the far, far future, hummingbirds or horses could evolve into something that could swim in the ocean like a dolphin. Indeed, ichthyosaurs in the era of dinosaurs looked much like dolphins. But that creature would be very different from a true dolphin: even ichthyosaurs never developed echolocation. We would wait a very long time (possibly forever) for a bloodline to evolve the traits we already have present in other bloodlines today. So, why is it ok to stand by or even encourage the extinction of one of these valuable, unique lineages?

We have already lost one.

We used to have four major rendering engines, but Opera halted development of its own rendering engine Presto before adopting Blink.

Three left. Spend them wisely.

By our powers combined...

Some folks feel that if a browser like Microsoft Edge ran on Blink, then Microsoft's engineers could help build a better Blink, contributing new features upstream for all other Chromium browsers to benefit from. This sounds sensible, right?

But remember Blink was forked from WebKit. Now there are WebKit contributors and Blink contributors, and their contributions don't port one to one. It's not unlikely that a company like Microsoft would, much like Samsung and Oculus, want to do things with the engine differently from Google. And if those differences are not aligned, the company will work on a parallel codebase and not contribute that work upstream. We end up with a WebKit and a Blink—but without the deep differentiation that comes from having a codebase that has been growing with the Internet for decades.

It’s a nice idea in theory. But in practice, we end up in a similar pickle—and with less "genetic variety" in our ecosystem of browser engines.

Competition is about growing, not "winning"

I'm a fan of competition. Competing with other cartoonists to make better comics and reach a larger audience got me to where I am today. (True story: I got my start building websites as a cartoonist creating her own community site, newsletter, and shopping cart.) I like to remind people that competition isn't about annihilating your competitors. If you did, you'd stagnate and lose your audience: see also Internet Explorer 6.

Internet Explorer 6 was an amazing browser when it came out: performant enough to really deliver on features previous versions of Internet Explorer had introduced, like the DOM, data binding, and asynchronous JavaScript. Its rival, Netscape Navigator, couldn't compete and crumbled into dust (only to have its engine, Gecko, rewritten from scratch—it was that small!—by the Mozilla Foundation to be reborn as Firefox later).

Thinking it had Won the Internet, Microsoft turned its attention to other fronts, and Internet Explorer didn't advance much. When the iPhone came along, Apple focused on its profitable apps marketplace and cut efforts to support Flash—the Web’s most app-like interaction platform. Apps gave content creators a way to monetize their efforts with something other than an advertising model. Advertising being Google's bread and butter, the Big G grew concerned as the threat of a walled garden of apps only using the Internet for data plumbing loomed. Microsoft, meanwhile, was preoccupied with building its own mobile OS. So Google did two things: Android and Chrome.

Chrome promised a better, faster browsing experience. It was barebones, but Google even went all out and got famous (in my circles at least) cartoonist Scott McCloud to make a comic explaining the browser's mission to users. With Chrome's omnipresence on every operating system and Android phone, its dev tools modeled off Firefox's beloved Firebug extension, and increasing involvement in specs, Chrome not only shook Internet Explorer out of its slumber, it was damn near threatening to kill off every other browser engine on the planet!

Pruning the great family tree of browsers (or collection of bushes, as it were) down to a single branch smacks of fragile monoculture. Monocultures are easily disrupted by environmental and ecological challenges—by market and demographic changes. What happens when the next threat to the Web rears its head, but we don't have Firefox's multithreading? Or Microsoft Edge's system integration? Will we be able to iterate fast enough without them? Or will we look to the Chrome developers to do something, and pray that they have not grown stagnant as Google turns, like Microsoft did, to attend to other matters after "winning the Web"?

It is ironic that the browser Google built to keep the Web from losing to the apps model is itself monopolizing web development much the same way Internet Explorer 6 did.

It's good to be king (of the jungle)

I promised I'd get back to "user base size as a feature." Having the vast majority of the web development community building and testing for your platform is a major competitive advantage. First off, you're guaranteed that most sites are going to work perfectly in your browser—you won't have to spend so much time and effort tapping Fortune 500 sites on the shoulder about this One Weird Bug that is causing all their internal users to switch to your competitor browser and never come back. For the minority browsers, a downward spiral begins that is hard to shake: fewer users lead to less developer testing, which leads to fewer users still.

It also makes it much easier to propose new specifications that serve your parent company's goals (which may or may not serve the web community's goals) and have that large community of developers build to your implementation first without having to wait for other browsers to catch up. If a smaller browser proposes a spec that no one notices and you pick it up when you need it, people will remember it as being your effort, continuing to build your mindshare whether intentionally or not.

This creates downward pressure on the competition, which simply doesn’t have or cannot use the same resources the biggest browser team has at its disposal. It’s a brutally efficient method, whether by design or accident.

Is it virtuous? At the individual contributor level, yes. Is it a vicious cycle by which companies have driven their competition to extinction by forcing them to spend limited resources to catch up? Also yes. And having legions of people building for just your platform helps.

All the awesome, well-meaning, genuinely good-hearted people involved—from the Chrome team to web developers—can throw their hands up and legitimately say, “I was just trying to build the Web forward!” while contributing to further a corporate monopoly.

Chrome has the most resources and leads the pack in building the Web forward to the point that we can’t be sure if we’re building the Web we want... or the Web Google wants.

Speculative biology

There was a time when Microsoft bailed out Apple as it was about to sink. This was not because Bill Gates and Steve Jobs were friends—no, Microsoft needed Apple to succeed so there would still be operating system competition. (No business wants to be seen as a monopoly!)

But consider, for a moment, that Apple had died. What would personal computers be like today if we’d only had Linux and Windows left standing? What would mobile computing look like if Apple hadn't been around to work on the iPhone?

Yes, it's easier to develop and test in only one browser. I'm sure IT professionals would have loved to only support one kind of machine. But variety creates opportunity for us as developers in the long run. Microsoft saving Apple led to apps, which challenged the Web, which gave us Chrome and the myriad of APIs Google is charging ahead with. If at any point in this chain of events someone had said, "Meh, it's so much easier if we all use the same thing," we wouldn't have the careers—or the world—that we have now.

Develop in more than one browser. Test in more than one browser. Use more than one browser.

You are both consumer and producer. You have a say in how the future plays out.



The Ultimate Guide to Headless CMS

CSS-Tricks - Thu, 08/30/2018 - 06:58

(This is a sponsored post.)

Struggling to engage your customers with seamless omnichannel digital experiences?

Then headless CMS is the technology you’ve been waiting for. But with all the buzz around this new technology, you might be feeling a bit lost.

Download our free headless CMS guide and get all the information you need to understand headless CMS architecture and multichannel content management, learn how to future-proof your content against any upcoming technology, and see the benefits of being programming-language agnostic.

Grab your complimentary copy of The Ultimate Guide to Headless CMS eBook.




Is It a Read Intensive or a Write Intensive Workload?

Planet MySQL - Thu, 08/30/2018 - 04:19

One of the common ways to classify database workloads is whether it is  “read intensive” or “write intensive”. In other words, whether the workload is dominated by reads or writes.

Why should you care? Because recognizing whether the workload is read intensive or write intensive will impact your hardware choices and database configuration, as well as which techniques you can apply for performance optimization and scalability.

This question looks trivial on the surface, but as you go deeper, complexity emerges. There are different “levels” of reads and writes for you to consider. You can also choose to look at event counts or at the time it takes to perform operations. These can provide very different answers, especially as the cost difference between a single read and a single write can be an order of magnitude.

Let’s examine the TPC-C Benchmark from this point of view, or more specifically its implementation in Sysbench. The illustrations below are taken from Percona Monitoring and Management (PMM) while running this benchmark.

Analyzing Read/Write Workload by Counts


At the highest level, you can think about queries that are sent to the database. In this case, we see about 30K SELECT queries versus 20K UPDATE+INSERT queries, making this benchmark slightly more read intensive by this measure.


Another way to look at the load is through actual operations at the row level: a single query may touch just one row or may touch millions. In this benchmark, the SQL command standpoint and the row operation standpoint happen to yield the same result, but that will not always be the case.
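
If you are not running PMM, the same raw counts are exposed as standard MySQL status variables; a minimal sketch (you would sample these twice and diff them to get rates):

-- Statement level: how many read vs write statements were issued
SHOW GLOBAL STATUS WHERE Variable_name IN
('Com_select', 'Com_insert', 'Com_update', 'Com_delete');

-- Row level: how many rows those statements actually touched
SHOW GLOBAL STATUS WHERE Variable_name IN
('Innodb_rows_read', 'Innodb_rows_inserted', 'Innodb_rows_updated', 'Innodb_rows_deleted');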


Let’s now look at the operating system level. We can see the amount of data written to disk is about twice the amount read from it. By this measure, the workload is write intensive.

Yet another way to look at your workload is from the aspect of tables. This view shows how each table is being accessed, so you can see whether a given table gets more reads or writes. This is helpful, for example, if you are considering moving some tables to a different server and want to clearly understand how your workload will be impacted.
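
A rough per-table equivalent can be queried straight from performance_schema; a sketch using the standard table I/O summary, counting operations rather than time:

SELECT object_schema, object_name, count_read, count_write
FROM performance_schema.table_io_waits_summary_by_table
WHERE object_schema NOT IN ('mysql', 'performance_schema')
ORDER BY count_read + count_write DESC
LIMIT 10;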

Analyzing Read/Write Workload by Response Time

As I mentioned already, counts often do not reflect the time to respond, which is typically more representative of the real work being done. To look at timing information from the query point of view, we want to look at query analytics.


The “Load” column here is a measure of such combined response time, while “Count” reflects query counts. Looking at this list, we can see that three of the top five queries are SELECT queries. Looking at the numbers overall, we have a read intensive application from this perspective.
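
PMM's query analytics aside, the underlying idea of ranking queries by total time rather than by count can be sketched directly against performance_schema (timers are in picoseconds, hence the division):

SELECT digest_text,
       count_star            AS query_count,
       sum_timer_wait / 1e12 AS total_time_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;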

In terms of row level operations, there is currently no easy way to see if reads or writes are dominating overall, but you can get an idea from the table operations dashboard:


This shows the load on a per table basis. It labels reads “Fetch” and breaks down writes in more detail—“Update”, “Delete”, “Inserts”—which is helpful. Not all writes are equal either.

If we want a response time based view of reads vs writes at the operating system level, we can check out the Disk IO Load graph. In this case it happens to match the IO activity graph, with storage taking more time to serve write requests than read requests.
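
A response time based read/write split for MySQL's own file I/O can also be pulled from performance_schema; a sketch (again with picosecond timers, and note this covers what MySQL does, not everything the OS sees):

SELECT SUM(count_read)             AS reads,
       SUM(sum_timer_read)  / 1e12 AS read_seconds,
       SUM(count_write)            AS writes,
       SUM(sum_timer_write) / 1e12 AS write_seconds
FROM performance_schema.file_summary_by_event_name;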

Summary

As you can see, the question about whether a workload is read intensive or write intensive, while simple on the surface, can have many different answers. You might ask me “OK, so what should I use?” Well… it really depends.

Looking at query counts is a great way to understand the application’s demands on the database, which you can’t really do anything to change from the database side. However, by changing the database configuration and schema, you may drastically alter the impact of these queries, both in terms of the number of rows they crunch and the disk IO they require.

The response time based statistics, gathered from the impact your queries cause on the system or disk IO, provide a better representation of the load these queries currently generate.

Another thing to keep in mind—reads and writes are not created equal. My rule of thumb for InnoDB is that a single row write is about 10x more expensive than a single row read.
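
Loosely applying that rule of thumb to the statement counts from the start of this post flips the verdict; a back-of-the-envelope sketch where the 10x weight is the only assumption:

SELECT 30000 * 1  AS weighted_read_cost,   -- 30K reads at relative cost 1
       20000 * 10 AS weighted_write_cost;  -- 20K writes at relative cost 10
-- 30000 vs 200000: counted by estimated time, this workload is write dominated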

More resources that you might enjoy

If you found this post useful, you might also like to see some of Percona’s other resources.

For an introduction to PMM, our free and open source management and monitoring software, you might find value in my recorded webinar, MySQL Troubleshooting and Performance Optimization with PMM.

Our white paper, Performance at Scale, could also provide useful insight if you are at the planning or review stage.



Installation and configuration of Percona XtraDB Cluster on CentOS 7.3

Planet MySQL - Wed, 08/29/2018 - 23:11

This blog will show how to install Percona XtraDB Cluster on three CentOS 7.3 servers, using the packages from Percona repositories. This is a step-by-step installation and configuration blog. We recommend Percona XtraDB Cluster for maximum availability/reliability and optimal READ/WRITE scale-out. We are a private-label, independent and vendor-neutral provider of consulting, support, managed services and education solutions for MySQL, MariaDB, Percona Server and ClickHouse, with core expertise in performance, scalability, high availability and database reliability engineering. All our blog posts are purely focused on education and research across open source database systems infrastructure operations. To engage us for building and managing web-scale database infrastructure operations, please contact us at contact@minervadb.com.

This cluster will be assembled from three servers/nodes:

node #1: hostname PXC1, IP 138.197.70.35
node #2: hostname PXC2, IP 159.203.118.230
node #3: hostname PXC3, IP 138.197.8.226

Prerequisites

  • All three nodes have a CentOS 7.3 installation.
  • Firewall has been set up to allow connecting to ports 3306, 4444, 4567 and 4568
  • SELinux is disabled

Installing from Percona Repository on 138.197.70.35

  • Install the Percona repository package:

$ sudo yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm

  • You should see the following if successful:

Installed:
  percona-release.noarch 0:0.1-4

Complete!

  • Check that the packages are available:

$ sudo yum list | grep Percona-XtraDB-Cluster-57
Percona-XtraDB-Cluster-57.x86_64            5.7.14-26.17.1.el7   percona-release-x86_64
Percona-XtraDB-Cluster-57-debuginfo.x86_64  5.7.14-26.17.1.el7   percona-release-x86_64

  • Install the Percona XtraDB Cluster packages:

$ sudo yum install Percona-XtraDB-Cluster-57

  • Start the Percona XtraDB Cluster server:

$ sudo service mysql start

  • Copy the automatically generated temporary password for the superuser account:

$ sudo grep 'temporary password' /var/log/mysqld.log

  • Use this password to login as root:

$ mysql -u root -p

  • Change the password for the superuser account and log out. For example:

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye

  • Stop the mysql service:

$ sudo service mysql stop

Repeat the same Percona XtraDB Cluster installation process for 159.203.118.230 and 138.197.8.226.

Configuring nodes

We have to configure the nodes 138.197.70.35, 159.203.118.230 and 138.197.8.226 separately in order to successfully implement a fully operational Percona XtraDB Cluster ecosystem.

Configuring the node 138.197.70.35

Configuration file /etc/my.cnf for the first node should look like:

[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=138.197.70.35
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=pxc_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:sstuser"

The first node can be started with the following command:

# /etc/init.d/mysql bootstrap-pxc

However, since we are using CentOS 7.3, the systemd bootstrap service should be used instead:

# systemctl start mysql@bootstrap.service

This command will start the cluster with the initial wsrep_cluster_address set to gcomm://. This way the cluster will be bootstrapped, and if the node or MySQL has to be restarted later, there will be no need to change the configuration file.

After the first node has been started, cluster status can be checked by:

mysql> show status like 'wsrep%';
| Variable_name | Value |
| wsrep_local_state_uuid | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5 |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 8 |
| wsrep_replicated | 4 |
| wsrep_replicated_bytes | 906 |
| wsrep_repl_keys | 4 |
| wsrep_repl_keys_bytes | 124 |
| wsrep_repl_data_bytes | 526 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 9 |
| wsrep_received_bytes | 1181 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.111111 |
| wsrep_local_cached_downto | 3 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 28, 28 ] |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 2 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 3128 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_incoming_addresses | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | b79d90df-1077-11e7-9922-3a1b217f7371 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 2 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.20(r7e383f7) |
| wsrep_ready | ON |
60 rows in set (0.01 sec)

The output above shows that the cluster has been successfully bootstrapped.
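
Rather than scanning all 60 variables, a handful of them answer the usual health questions; a quick check using standard wsrep status variables (wsrep_cluster_size should match the number of nodes currently joined):

SHOW GLOBAL STATUS WHERE Variable_name IN
('wsrep_cluster_size', 'wsrep_cluster_status', 'wsrep_local_state_comment', 'wsrep_ready');
-- Expect: wsrep_cluster_status = Primary, wsrep_local_state_comment = Synced, wsrep_ready = ON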

In order to perform a successful State Snapshot Transfer (SST) using XtraBackup, a new user needs to be set up with proper privileges:

mysql@PXC1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'sstuser';
mysql@PXC1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql@PXC1> FLUSH PRIVILEGES;

Configuration file /etc/my.cnf on the second node (PXC2) should look like this:

[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #2 address
wsrep_node_address=159.203.118.230
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=pxc_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:sstuser"

The second node can be started with the following command:

# systemctl start mysql

Cluster status can now be checked on both nodes. This is the example from the second node (PXC2):

mysql> show status like 'wsrep%';
| Variable_name | Value |
| wsrep_local_state_uuid | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5 |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 8 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 10 |
| wsrep_received_bytes | 1238 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 1 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_local_cached_downto | 6 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 28, 28 ] |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 2 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 2300 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_incoming_addresses | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 248e2782-1078-11e7-a269-4a3ec033a606 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.20(r7e383f7) |
| wsrep_ready | ON |
60 rows in set (0.00 sec)

This output shows that the new node has been successfully added to the cluster.

MySQL configuration file /etc/my.cnf on the third node (PXC3) should look like this:

[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://138.197.70.35,159.203.118.230,138.197.8.226
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #3 address
wsrep_node_address=138.197.8.226
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=pxc_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:sstuser"

The third node can now be started with the following command:

# systemctl start mysql

Percona XtraDB Cluster status can now be checked from the third node (PXC3):

mysql> show status like 'wsrep%';
| Variable_name | Value |
| wsrep_local_state_uuid | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5 |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 8 |
| wsrep_replicated | 2 |
| wsrep_replicated_bytes | 396 |
| wsrep_repl_keys | 2 |
| wsrep_repl_keys_bytes | 62 |
| wsrep_repl_data_bytes | 206 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 4 |
| wsrep_received_bytes | 529 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 1 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_local_cached_downto | 6 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 28, 28 ] |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 2 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 2166 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_incoming_addresses | 159.203.118.230:3306,138.197.8.226:3306,138.197.70.35:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 3f51b20e-1078-11e7-8405-8e9b37a37cb1 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 5ea977b8-0fc0-11e7-8f73-26f60f083bd5 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.20(r7e383f7) |
| wsrep_ready | ON |
60 rows in set (0.03 sec)

This output confirms that the third node has joined the cluster.

Testing Replication

Creating the new database on the PXC1 node:

mysql> create database minervadb;
Query OK, 1 row affected (0.01 sec)

Creating the example table on the PXC2 node:

mysql> use minervadb;
Database changed
mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.01 sec)

Inserting records on the PXC3 node:

mysql> INSERT INTO minervadb.example VALUES (1, 'MinervaDB');
Query OK, 1 row affected (0.07 sec)

Retrieving all the rows from that table on the PXC1 node:

mysql> select * from minervadb.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | MinervaDB |
+---------+-----------+
1 row in set (0.00 sec)




TypeScript Quick Start Guide

Echo JS - Wed, 08/29/2018 - 18:43

The Butterfly.js Effect

Echo JS - Wed, 08/29/2018 - 18:43

So What’s New in Babel 7?

Echo JS - Wed, 08/29/2018 - 18:43

A quick guide to Angular scopes

Echo JS - Wed, 08/29/2018 - 18:43

React Native Tabbar Interaction

Echo JS - Wed, 08/29/2018 - 18:43
