emGee Software Solutions Custom Database Applications

Web Technologies

Five of My Favorite Features of Jetpack

CSS-Tricks - Tue, 05/15/2018 - 09:37

Jetpack is an official WordPress plugin directly from Automattic. It's an interesting plugin in that it doesn't just do *one thing* — it does a whole slew of things that enhance what your WordPress site can do. *Any* WordPress site, that is, and often with extremely little effort. Jesse Friedman calls these easy wins "light switch features," meaning you literally flip a switch in Jetpack's settings and start benefiting. I love that.

There are dozens of features in Jetpack, and I personally make use of most of them and see the benefit in all of them. Allow me to share with you five of my favorites and how they are actively used right here on this site. It's actually a bit hard to pick, so perhaps I'll do this again sometime!

1) Related Posts

This seems like such a simple little feature, but it's anything but. Something has to analyze all the content on your site and figure out what is most related to the current content. That kind of thing can be incredibly database intensive and bring even powerful hosting to its knees. WP Engine, by all accounts a powerful and WordPress-tuned host, bans many of them outright:

Almost all “Related Posts” plugins suffer from the same fundamental problems regarding MySQL, indexing, and search. These problems make the plugins themselves extremely database intensive.

Their top recommendation? Jetpack Related Posts.

Jetpack handles this by offloading the work from your servers onto theirs. Your site takes on no additional load for this super useful feature. I find it does a great job.

In a recent post about dropdown menus, here are the related posts it displays.

2) Markdown

I wrote once: You Should Probably Blog in Markdown.

I'm quite serious about that. I've been involved with far too many sites where old content was mired with old crufty HTML or otherwise formatted in a way that held them back, and cleaning up that content was an impractically large job. You avoid that situation entirely if you create your content in Markdown (and stick to Markdown syntax).

Things like <span style="font-weight: bold; font-family: Georgia; color: red;"> around seemingly random sentences. <div class="content-wrap-wrap"> wrapping every single blog post because it was just “what you did”. Images force-floated to the right because that made sense three designs ago. Headers with class names on them that don’t even exist anymore. Tables of data spit into content with all the whitespace stripped out and weird alignment attributes you barely recognize. An about page that was pasted in from Microsoft Word.

Content like this will not and cannot last.

With Jetpack, you flip on Markdown support and you're good!
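For contrast with the crufty HTML above, the same kind of content authored in Markdown stays clean and portable (an invented sample; the heading, link, and image paths are placeholders):

```markdown
## A heading with no stale class names

Some **bold** text, a [link](https://example.com), and an image:

![A plain description](/images/photo.jpg)
```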

Almost more importantly, it feels like the Markdown option you can trust. There are other plugins out there that add Markdown support, and that's great, but they leave open questions. How long is the plugin going to be supported? What variety of Markdown did they pick? Is that variety going to continue to be supported? Does it work in comments? These kinds of questions make me nervous.

Choosing Jetpack feels safe because it's the official Markdown choice of WordPress, in a plugin we can be sure will be updated forever.

It'll be interesting to see what Markdown + Gutenberg is like!

3) Social Sharing

This is a multi-pronged feature.

One, it can easily add sharing buttons to your posts. This isn't a particularly difficult thing to do for any developer worth their salt, but having it abstracted away in a plugin is nice.

Two, it can share your posts directly to social sites like Twitter and Facebook. That's a much harder thing to write from scratch, and something I paid a third-party service to do for years. It was a pleasant surprise when I discovered Jetpack's ability to do this.

The "official" buttons can be nice as they have the most integrated functionality with the services themselves, but I dig that the other options are like "Responsible Social Sharing Links" in that they have the least possible performance overhead.

It allows you to customize the message, and does smart stuff like including the Featured Image as the image attached to the social post.

4) Comment Improvements

I'm a little uncomfortable using a third-party comment system on my sites. I like control. Comments are content and I like making sure those are in my own database. Yet, I think users have a little higher expectations of comment thread UX than they used to. Having to manually type in your name and email and all that feels kinda old school in a not-so-great way.

Fortunately, Jetpack can replace your comment form with a modernized version:

Now users can log in with their social accounts, making leaving a comment a much easier flow. Not to mention that it's that much less work you have to do styling the comment form.

Notice the "Notify me of new comments via email" checkbox there too. Guess what powers that? Jetpack. That's a nice feature for people who are leaving question-style comments on your site. They very likely want to be notified about the conversation happening in the comments as it evolves.

5) Security

I've been deep into the world of web development for many years. The more I know, the more I can see that I don't know. We all have to focus on certain things to get done what we need to on our own journeys. I fully admit I know very little about server-side web security. I'd rather leave that expertise to others, or ideally, to software that I trust.

Here are some fun statistics from here on CSS-Tricks:

It's nice to know my site is being protected that way from malicious logins. Spam, an even more direct problem, is also taken care of by Akismet, the spam-blocking plugin that my Jetpack subscription covers.

Should anything happen to the site, I know it's backed up off my server by VaultPress, which is also part of my Jetpack subscription.

See how much it does!?

Again, this is just a handful of the features of Jetpack. There are so many niceties tucked into it I consider it a no-brainer plugin. Probably the first and most important plugin you'll install on any self-hosted WordPress site.

The post Five of My Favorite Features of Jetpack appeared first on CSS-Tricks.

Categories: Web Technologies

It All Started With Emoji: Color Typography on the Web

CSS-Tricks - Tue, 05/15/2018 - 07:02

“Typography on the web is in single color: characters are either black or red, never black and red …Then emoji hit the scene, became part of Unicode, and therefore could be expressed by characters — or “glyphs” in font terminology. The smiley, levitating businessman and the infamous pile of poo became true siblings to letters, numbers and punctuation marks.”

Roel Nieskens

Using emoji in code is easy. Head over to Emojipedia and copy and paste one in.


Or in CSS:
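The CSS example in the original post is an image; as a minimal sketch (the selector and emoji are invented), an emoji can go straight into a property value such as `content`:

```css
/* Hypothetical rule: prefix warnings with an emoji. */
.warning::before {
  content: '⚠️ ';
}
```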

And JavaScript, too:
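The JavaScript example is likewise an image in the original; a sketch with an invented string:

```javascript
// Emoji are just Unicode characters, so they can live in ordinary string literals.
const status = 'Deploy finished 🎉';
console.log(status.includes('🎉')); // true
```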

(Alternatively, you can specify emoji with a Unicode codepoint.)
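A sketch of the codepoint approach (the pig emoji, U+1F437, is an arbitrary choice):

```javascript
// The same emoji, written via its Unicode codepoint instead of pasted in literally.
const pig = String.fromCodePoint(0x1F437); // U+1F437 PIG FACE
console.log(pig === '🐷'); // true

// Or with an escape sequence inside a string literal:
const alsoPig = '\u{1F437}';
console.log(alsoPig === pig); // true
```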

However, you might run into a problem...

Lost in translation: Emoji’s consistency problem

The diversity of emoji across platforms might not sound like a major problem. However, these sometimes radical inconsistencies leave room for drastic miscommunication. Infamously, the “grinning face with smiling eyes” emoji ends up as a pained grimace on older Apple systems.

This was such a big deal that even The Washington Post covered it.

A harmless and playful watergun emoji might show up as a deadly firearm.

Courtesy of Emojipedia.

And who knows how many romances were dashed by Google’s utterly bizarre hairy heart emoji?

This has since been rectified by Google.

Unicode standardizes what each emoji should represent with a terse description but the visual design is down to the various platforms.

Color fonts to the rescue!

The solution? Use an emoji font. Adobe has released a font called EmojiOne and Twitter open-sourced Twemoji. More are likely to follow.

@font-face {
  font-family: 'emoji';
  src: url('emojione-svg.woff2') format('woff2');
}

If a user types an emoji into an HTML input or textarea, they will see your fancy custom emoji. ❤️
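To get that behavior, the custom font just needs to come first in the font stack for those elements (reusing the 'emoji' family name from the @font-face rule above):

```css
input,
textarea {
  font-family: 'emoji', sans-serif;
}
```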

An input in Firefox.

Emoji fonts also have the benefit of avoiding the pixelation seen in scaled-up raster emoji. If you happen to want really large emoji, an SVG-in-OpenType font is clearly the superior choice.

On the left, a standard Apple dog face emoji looking pixelated. On the right, smooth SVG-in-OpenType emoji characters from EmojiOne and Twemoji, respectively. Unicode clearly doesn't specify a breed!

Browser support

Confusingly, color fonts aren't one standard but rather four 🙄. OpenType is the font format used on the web. When emoji were added to Unicode, the big players realized that multi-color support needed to be added to OpenType in some way. Different companies came up with different solutions. The fonts are still .ttf, .woff or .woff2 files, but internally they're a bit different. I pieced together this support table using a tool called Chromacheck:

                  Chrome  Safari  Edge  Firefox
SVG-in-OpenType     ❌      ❌     ✅     ✅
COLR/CPAL           ✅      ✅     ✅     ✅
SBIX                ✅      ✅     ✅     ❌
CBDT/CBLC           ✅      ❌     ✅     ❌

We’ve learned why color fonts were invented. But it’s not all about emoji...

Multicoloured alphabets

Gilbert font

Color fonts are a new technology, so you won’t find that many typefaces to choose from as of yet. If you want to try one out that’s free and open source, Bungee by David Jonathan Ross is a great choice.

While some fonts provide full emoji support and others offer a multicolor alphabet, Ten Mincho — a commercial font from Adobe — takes a different tack. In the words of its marketing material, the font holds “a little surprise tucked away in the glyphs.” Of the 2,666 emoji in the Unicode Standard, Ten Mincho offers a very limited range in a distinctive Japanese style.

The adorable custom emoji set of Ten Mincho

Emoji have become a predominant mode of human communication. Over 60 million emoji are used on Facebook every single day. On Messenger, the number is even more astonishing, at five billion per day. If you’re building any sort of messaging app, getting emoji right really matters.

The post It All Started With Emoji: Color Typography on the Web appeared first on CSS-Tricks.

Categories: Web Technologies

Free Introduction to Web Development Workshop

CSS-Tricks - Tue, 05/15/2018 - 06:54

Brian Holt and the Frontend Masters team are putting on a free workshop today and tomorrow that is live-streamed for anyone who's interested. This is super cool because, despite the fact that there is a mountain of articles about web development out there, there are only a few that start with the basics in a manner that's easy for beginners to follow. All of the materials are open source and available here as well.

I've been a fan of Brian's work for ages now, which is part of the reason why I advocated for him to join my team, and now I have the honor of working with him. I find his style of teaching really calming, which is encouraging when the subject ventures into complex concepts. The livestream is free today (5/15) and tomorrow (5/16), but will also be available afterwards if you happen to miss it.

Direct Link to ArticlePermalink

The post Free Introduction to Web Development Workshop appeared first on CSS-Tricks.

Categories: Web Technologies

MySQL User Camp, Bangalore – 27th April, 2018

Planet MySQL - Tue, 05/15/2018 - 05:32
MySQL User Camp is a forum where MySQL Engineers and community users come together to connect, collaborate, and share knowledge. This year’s first MySQL User Camp was held on 27th April 2018, at Oracle India Pvt Ltd, Kalyani Magnum Infotech Park, Bangalore with an excellent turnout of 60 attendees. The event began with a welcome […]
Categories: Web Technologies

MySQL 8.0: InnoDB Introduces LOB Index For Faster Updates

MySQL Server Blog - Tue, 05/15/2018 - 04:49

To support the new feature Partial Update of JSON documents, InnoDB changed the way it stored the large objects (LOBs) in MySQL 8.0. This is because InnoDB does not have a separate JSON data type and stores JSON documents as large objects.…

Categories: Web Technologies


PHP Versions Stats - 2018.1 Edition - Jordi Boggiano

Planet PHP - Tue, 05/15/2018 - 01:00

It's stats o'clock! See 2014, 2015, 2016.1, 2016.2, 2017.1 and 2017.2 for previous similar posts.

A quick note on methodology, because all these stats are imperfect as they just sample some subset of the PHP user base. I look in the packagist.org logs of the last month for Composer installs done by someone. Composer sends the PHP version it is running with in its User-Agent header, so I can use that to see which PHP versions people are using Composer with.
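As a sketch of that methodology, grouping patch releases under their minor version might look like this (the User-Agent format is an assumption modeled on what Composer sends, and the log lines are invented):

```python
import re
from collections import Counter

# Hypothetical log lines; the exact User-Agent format is an assumption.
agents = [
    "Composer/1.6.3 (Linux; 4.4.0; PHP 7.1.16)",
    "Composer/1.6.4 (Darwin; 17.5.0; PHP 7.2.4)",
    "Composer/1.5.2 (Linux; 3.10.0; PHP 7.1.14)",
]

grouped = Counter()
for ua in agents:
    m = re.search(r"PHP (\d+\.\d+)\.\d+", ua)
    if m:
        grouped[m.group(1)] += 1  # group patch releases under the minor version

print(grouped)  # Counter({'7.1': 2, '7.2': 1})
```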

PHP usage statistics May 2018 (+/- diff from November 2017)

All versions:
PHP 7.2.4   7.54%
PHP 7.1.16  7.41%
PHP 7.0.28  5.54%
PHP 7.1.15  4.11%
PHP 7.2.3   3.85%
PHP 7.1.14  3.79%

Grouped:
PHP 7.1  35.02% (-1.61)
PHP 7.0  23.02% (-7.74)
PHP 7.2  20.18% (+20.18)
PHP 5.6  16.48% (-6.8)
PHP 5.5   3.50% (-2.61)
PHP 5.4   1.04% (-0.47)

A few observations: PHP 7.1 is still on top, but 7.2 is closing in real quick, with a fifth of users having already upgraded. That's the biggest growth rate for a newly released version since I started collecting these stats. Ubuntu 18.04 LTS ships with 7.2, so this number will likely grow even more in the coming months. 78% of people used PHP 7+, and almost 95% were using a PHP version that is still maintained; it sounds almost too good to be true. PHP 5.6 and 7.0 will reach end of life by late 2018 though, so that's 40% of users who are in need of an upgrade if we want to keep these numbers up!

Here is the aggregate chart covering all my blog posts and the last five years.

PHP requirements in Packages

The second dataset is which versions are required by the PHP packages present on packagist. I only check the require statement in their current master version to see what the latest requirement is, and the dataset only includes packages that had commits in the last year to exclude all EOL'd projects as they don't update their requirements.

PHP Requirements - Recent Master - May 2018 (+/- diff from Recent Master November 2017)

PHP 5.2   1.16% (-0.12)
PHP 5.3  15.9%  (-2.85)
PHP 5.4  16.59% (-3.7)
PHP 5.5  15.52% (-3.55)
PHP 5.6  19.57% (-0.83)
PHP 7.0  19.47% (+4.62)
PHP 7.1  11.15% (+5.83)
PHP 7.2   0.64% (+0.61)

This is, as usual, lagging behind a little, but PHP 7 is finally seeing some real adoption in the OSS world, which is nice.

Categories: Web Technologies

Book review: Fifty quick ideas to improve your tests - Part 1 - Matthias Noback

Planet PHP - Tue, 05/15/2018 - 00:11

After reading "Discovery - Explore behaviour using examples" by Gáspár Nagy and Seb Rose, I picked up another book, which I bought a long time ago: "Fifty Quick Ideas to Improve Your Tests" by Gojko Adzic, David Evans, Tom Roden and Nikola Korac. Like with so many books, I find there's often a "right" time for them. When I tried to read this book for the first time, I was totally not interested and soon stopped trying to read it. But ever since Julien Janvier asked me if I knew any good resources on how to write good acceptance test scenarios, I kept looking around for more valuable pointers, and so I revisited this book too. After all, one of the authors of this book - Gojko Adzic - also wrote "Bridging the communication gap - Specification by example and agile acceptance testing", which made a lasting impression on me. If I remember correctly, the latter doesn't have too much practical advice on writing good tests (or scenarios), and it was my hope that "Fifty quick ideas" would.

First, a few comments on the book, before I'll highlight some parts. I thought it was quite an interesting book, covering several underrepresented areas of testing (including finding out what to test, and how to write good scenarios). The book has relevant suggestions for acceptance testing which are equally applicable to unit testing. I find this quite surprising, since testing books in general offer only suggestions for a specific type of test, leaving a developer/reader (including myself) with the idea that acceptance testing is very different from unit testing, and that it requires both a different approach and a different testing tool. This isn't true at all, and I like how the authors make a point of not making a distinction in this book.

The need to describe why

As a test author you may often feel like you're in a hurry. You've written some code, and now you need to quickly verify that the code you've written actually works. So you create a test file which exercises the production code. Green light proves that everything is okay. Most likely you've tested some methods, verified return values or calls to collaborating objects. Writing tests like that verifies that your code is executable, that it does what you expect from it, but it doesn't describe why the outcomes are the way they are.

In line with our habit of testing things by verifying outputs based on inputs, the PhpStorm IDE offers to generate test methods for selected methods of the Subject-Under-Test (SUT).

I always cry a bit when I see this, because it implies three things:

  1. I've determined the API (and probably wrote most of the code already) before worrying about testing.
  2. I'm supposed to test my code method-by-method.
  3. A generated name following the template test{NameOfMethodOnSUT}() is fine.

At the risk of shaming you into adopting a test-first approach, please consider how you'll test the code before writing it. Also, aim for the lowest possible number of methods on a class. Make sure the remaining methods are conceptually related. The SOLID principles will give you plenty of advice on this topic.

Anyway, if we're talking about writing tests using some XUnit framework (e.g. JUnit, PHPUnit), don't auto-generate test methods. Instead:

Name the test class {NameOfConcept}Test, and add public methods which, combined, completely describe or specify the concept or thing. I find it very helpful to start these method names with "it", to refer to the thing I'm testing. But this is by no means a strict rule. What's important is that you'll end up with methods you can read out loud to anyone interested (not just other developers, but product owners, domain experts, etc.). You can easily check if you've been successful at it by running PHPUnit with the --testdox flag, which produces this human-readable description of your unit.
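As a sketch of that convention (the Invoice class and test names are invented for illustration):

```php
<?php
use PHPUnit\Framework\TestCase;

// A tiny invented class, just so the example is self-contained.
class Invoice
{
    private int $total = 0;

    public function addLine(int $amount): void { $this->total += $amount; }

    public function total(): int { return $this->total; }
}

// The test class describes the Invoice *concept*, not individual methods.
class InvoiceTest extends TestCase
{
    /** @test */
    public function it_starts_with_a_zero_total(): void
    {
        $this->assertSame(0, (new Invoice())->total());
    }

    /** @test */
    public function it_sums_line_amounts_into_the_total(): void
    {
        $invoice = new Invoice();
        $invoice->addLine(100);
        $invoice->addLine(250);
        $this->assertSame(350, $invoice->total());
    }
}
```

Running `phpunit --testdox` over a class like this prints "Invoice" followed by readable sentences such as "It starts with a zero total".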

Rules and examples

There's much more to it than following this simple recipe though. As the book proposes, every scenario (or unit test case) should first describe some rule, or acceptance criterion. The steps of the test itself should then provide a clear example of the consequences of this rule.

Rules are generic, abstract. The examples are specific, concrete. For instance, a rule could be: "You can't schedule

Truncated by Planet PHP, read more at the original (another 6784 bytes)

Categories: Web Technologies

Updated: Become a ClusterControl DBA: Safeguarding your Data

Planet MySQL - Mon, 05/14/2018 - 22:33

In the past four posts of the blog series, we covered deployment of clustering/replication (MySQL/Galera, MySQL Replication, MongoDB & PostgreSQL), management & monitoring of your existing databases and clusters, performance monitoring and health and in the last post, how to make your setup highly available through HAProxy and ProxySQL.

So now that you have your databases up and running and highly available, how do you ensure that you have backups of your data?

You can use backups for multiple things: disaster recovery, to provide production data to test against development or even to provision a slave node. This last case is already covered by ClusterControl. When you add a new (replica) node to your replication setup, ClusterControl will make a backup/snapshot of the master node and use it to build the replica. It can also use an existing backup to stage the replica, in case you want to avoid that extra load on the master. After the backup has been extracted, prepared and the database is up and running, ClusterControl will automatically set up replication.

Creating an Instant Backup

In essence, creating a backup is the same for Galera, MySQL replication, PostgreSQL and MongoDB. You can find the backup section under ClusterControl > Backup, and by default you would see a list of created backups for the cluster (if any). Otherwise, you would see a placeholder to create a backup:

From here you can click on the "Create Backup" button to make an instant backup or schedule a new backup:

All created backups can also be uploaded to the cloud by toggling "Upload Backup to the Cloud", provided you supply working cloud credentials. By default, all backups older than 31 days will be deleted (configurable via Backup Retention settings), or you can choose to keep them forever or define a custom period.

"Create Backup" and "Schedule Backup" share similar options except the scheduling part and incremental backup options for the latter. Therefore, we are going to look into Create Backup feature (a.k.a instant backup) in more depth.

As all these various databases have different backup tools, there is obviously some difference in the options you can choose. For instance with MySQL you get to choose between mysqldump and xtrabackup (full and incremental). For MongoDB, ClusterControl supports mongodump and mongodb-consistent-backup (beta) while PostgreSQL, pg_dump and pg_basebackup are supported. If in doubt which one to choose for MySQL, check out this blog about the differences and use cases for mysqldump and xtrabackup.
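Under the hood these are the standard command-line tools; a rough sketch of each (paths are invented, and a running server with valid credentials is assumed):

```shell
# Logical backup with mysqldump: a consistent snapshot for InnoDB tables.
mysqldump --single-transaction --triggers --routines \
    --all-databases > /backups/full-$(date +%F).sql

# Physical backup with Percona XtraBackup (full backup).
xtrabackup --backup --target-dir=/backups/base

# Incremental backup, based on the previous full one.
xtrabackup --backup --target-dir=/backups/inc1 \
    --incremental-basedir=/backups/base
```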

Backing up MySQL and Galera

As mentioned in the previous paragraph, you can make MySQL backups using either mysqldump or xtrabackup (full or incremental). In the "Create Backup" wizard, you can choose which host you want to run the backup on, the location where you want to store the backup files, and its directory and specific schemas (xtrabackup) or schemas and tables (mysqldump).

If the node you are backing up is receiving (production) traffic, and you are afraid the extra disk writes will become intrusive, it is advised to send the backups to the ClusterControl host by choosing "Store on Controller" option. This will cause the backup to stream the files over the network to the ClusterControl host and you have to make sure there is enough space available on this node and the streaming port is opened on the ClusterControl host.

There are also several other options, such as whether you want to use compression and at what compression level. The higher the compression level, the smaller the backup size will be. However, it requires higher CPU usage for the compression and decompression process.

If you choose xtrabackup as the backup method, extra options open up: desync, backup locks, compression and xtrabackup parallel threads/gzip. The desync option is only applicable for desyncing a node from a Galera cluster. The backup locks option uses a new MDL lock type to block updates to non-transactional tables and DDL statements for all tables, which is more efficient for InnoDB-specific workloads. If you are running a Galera Cluster, enabling this option is recommended.

After scheduling an instant backup you can keep track of the progress of the backup job in the Activity > Jobs:

After it has finished, you should be able to see a new entry under the backup list.

Backing up PostgreSQL

Similar to the instant backups of MySQL, you can run a backup on your Postgres database. With Postgres backups there are two backup methods supported - pg_dumpall or pg_basebackup. Take note that ClusterControl will always perform a full backup regardless of the chosen backup method.

We have covered this in detail in Become a PostgreSQL DBA - Logical & Physical PostgreSQL Backups.

Backing up MongoDB

For MongoDB, ClusterControl supports the standard mongodump and mongodb-consistent-backup developed by Percona. The latter is still in beta version which provides cluster-consistent point-in-time backups of MongoDB suitable for sharded cluster setups. As the sharded MongoDB cluster consists of multiple replica sets, a config replica set and shard servers, it is very difficult to make a consistent backup using only mongodump.

Note that in the wizard, you don't have to pick a database node to be backed up. ClusterControl will automatically pick the healthiest secondary replica as the backup node. Otherwise, the primary will be selected. When the backup is running, the selected backup node will be locked until the backup process completes.

Scheduling Backups

Now that we have played around with creating instant backups, we can extend that by scheduling the backups.

The scheduling is very easy to do: you can select on which days the backup has to be made and at what time it needs to run.

For xtrabackup there is an additional feature: incremental backups. An incremental backup will only back up the data that changed since the last backup. Of course, incremental backups are useless without a full backup as a starting point. Between two full backups, you can have as many incremental backups as you like, but restoring them will take longer.

Once scheduled the job(s) should become visible under the "Scheduled Backup" tab and you can edit them by clicking on the "Edit" button. Like with the instant backups, these jobs will schedule the creation of a backup and you can keep track of the progress via the Activity tab.

Backup List

You can find the Backup List under ClusterControl > Backup and this will give you a cluster level overview of all backups made. Clicking on each entry will expand the row and expose more information about the backup:

Each backup is accompanied by a backup log from when ClusterControl executed the job, which is available under the "More Actions" button.

Offsite Backup in Cloud

Since we now have a lot of backups stored on either the database hosts or the ClusterControl host, we also want to ensure they don’t get lost in case we face a total infrastructure outage (e.g. a data center fire or flood). Therefore, ClusterControl allows you to store or copy your backups offsite in the cloud. The supported cloud platforms are Amazon S3, Google Cloud Storage and Azure Cloud Storage.

The upload process happens right after the backup is successfully created (if you toggle "Upload Backup to the Cloud") or you can manually click on the cloud icon button of the backup list:

Choose the cloud credential and specify the backup location accordingly:

Restore and/or Verify Backup

From the Backup List interface, you can directly restore a backup to a host in the cluster by clicking on the "Restore" button for the particular backup or click on the "Restore Backup" button:

One nice feature is that it is able to restore a node or cluster using the full and incremental backups as it will keep track of the last full backup made and start the incremental backup from there. Then it will group a full backup together with all incremental backups till the next full backup. This allows you to restore starting from the full backup and applying the incremental backups on top of it.

ClusterControl supports restore on an existing database node or restore and verify on a new standalone host:

These two options are pretty similar, except the verify one has extra options for the new host information. If you follow the restoration wizard, you will need to specify a new host. If "Install Database Software" is enabled, ClusterControl will remove any existing MySQL installation on the target host and reinstall the database software with the same version as the existing MySQL server.

Once the backup is restored and verified, you will receive a notification on the restoration status and the node will be shut down automatically.

Point-in-Time Recovery

For MySQL, both xtrabackup and mysqldump can be used to perform point-in-time recovery and also to provision a new replication slave for master-slave replication or Galera Cluster. A mysqldump PITR-compatible backup contains one single dump file, with GTID info, binlog file and position. Thus, only the database node that produces binary log will have the "PITR compatible" option available:

When PITR compatible option is toggled, the database and table fields are greyed out since ClusterControl will always perform a full backup against all databases, events, triggers and routines of the target MySQL server.

Now restoring the backup. If the backup is compatible with PITR, an option will be presented to perform a Point-In-Time Recovery. You will have two options for that - “Time Based” and “Position Based”. For “Time Based”, you can just pass the day and time. For “Position Based”, you can pass the exact position to where you want to restore. It is a more precise way to restore, although you might need to get the binlog position using the mysqlbinlog utility. More details about point in time recovery can be found in this blog.
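Outside of ClusterControl, the position-based variant boils down to replaying the binary log with the mysqlbinlog utility; a sketch (file name, stop position, and credentials are invented):

```shell
# Inspect events around the incident to find the stop position.
mysqlbinlog --verbose binlog.000023 | less

# Replay the binary log up to (but not including) the bad event.
mysqlbinlog --stop-position=4092 binlog.000023 | mysql -u root -p
```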

Backup Encryption

Universally, ClusterControl supports backup encryption for MySQL, MongoDB and PostgreSQL. Backups are encrypted at rest using AES-256 CBC algorithm. An auto generated key will be stored in the cluster's configuration file under /etc/cmon.d/cmon_X.cnf (where X is the cluster ID):

$ sudo grep backup_encryption_key /etc/cmon.d/cmon_1.cnf
backup_encryption_key='JevKc23MUIsiWLf2gJWq/IQ1BssGSM9wdVLb+gRGUv0='

If the backup destination is not local, the backup files are transferred in encrypted format. This feature complements the offsite backup on cloud, where we do not have full access to the underlying storage system.

Final Thoughts

We showed you how to get your data backed up and how to store it safely off site. Recovery is always a different thing. ClusterControl can automatically recover your databases from backups made in the past that are stored on premises or copied back from the cloud.

Obviously there is more to securing your data, especially on the side of securing your connections. We will cover this in the next blog post!

Tags:  backup clustercontrol MariaDB MongoDB MySQL postgres PostgreSQL xtrabackup
Categories: Web Technologies

MySQL Performance : 8.0 GA and TPCC Workloads

Planet MySQL - Mon, 05/14/2018 - 17:51

Generally the TPC-C benchmark workload is considered one of the #1 references for database OLTP performance. At the same time, for MySQL users it's often not something seen as "the most compelling" for performance evaluations.. -- well, when you're still fighting to scale with your own very simple queries, any good result on something more complex may only look "fake" ;-)) So, since a long time Sysbench workloads have remained (and will remain) the main #1 "entry ticket" for MySQL evaluation -- the most simple to install, to use, and to point out some sensible issues (if any). Especially since the new Sysbench version 1.0, a lot of improvements were made in the Sysbench code itself -- it really scales now, has the lowest ever overhead, and also allows you to add your own test scenarios via extended LUA scripts (again, with the lowest ever overhead) -- so, anyone can easily add whatever kind of different test scenarios and share them with others! (while I'd say "the most compelling test workload" for any given user should be the workload which most closely reproduces his production load -- and you can probably do it now with the new Sysbench, just try it!).

However, from the MySQL Dev side, every given benchmark workload is mostly seen as yet another problem (or several) to resolve. Some of the problems are common to many workloads, some are completely different ones, but generally it's never about "something cool" -- and we're just progressing along this long road by fixing one problem after another (to hit yet another one again). So, the TPC-C workload for MySQL is just yet another problem to resolve ;-))

Historically the most popular TPC-C implementations for MySQL were :
  • DBT-2 : an open source version of TPC-C
  • TPCC-mysql : another open source version of TPC-C developed by Percona

Both versions were implemented completely differently, but at least both were very "compelling" to MySQL users, as they run the TPC-C workload via SQL queries (and not via stored procedures, which are more popular with other DB vendors).. So it was up to anyone's preference which of the two test cases to use (though personally I'd say TPCC-mysql was always simpler to install and use). However, now that the new Sysbench is here and Percona has ported their TPCC-mysql to Sysbench, for me there's no doubt everyone interested in TPC-C testing should move to Sysbench-TPCC ! (and kudos to Percona for making this happen !! ;-))

So, what is good about the new Sysbench-TPCC :
  • first of all, it's fully integrated with Sysbench, so if you already have Sysbench installed on your server, the TPCC workload will just work, as will all your old sysbench scripts for collecting test results and so on ;-))
  • it also goes further than the original TPC-C test -- it allows you to run several TPC-C data sets in parallel (I used to do the same in the past by running several TPCC-mysql or DBT-2 processes at the same time, which lets you see "what is the next problem") -- but now you have the same out-of-the-box!

From past testing I've already observed that the most "representative" data set size for the TPCC workload is around 1000W (1K "warehouses") -- it's not too small, nor so big that it takes a long time to generate the data and allocates too much disk space (generally it uses around 100GB in InnoDB once loaded).. Probably over time I will also test a x10 bigger volume (or more), but for the moment the 1000W volume is already big enough to investigate MySQL scalability limits on this workload..

So far, my test scenario will be the following :
  • data set :
    • 1000W (single dataset as in original TPC-C workload)
    • 10x100W (10 datasets of 100W executed in parallel)
  • concurrent users : 1, 2, 4, .. 1024
  • InnoDB Buffer Pool (BP) :
    • 128GB : data set is fully cached in BP (no I/O reads)
    • 32GB : not more than 1/3 of data set can be cached in BP (I/O reads and yet more writes to expect)
  • the same HW and my.conf are used as in the previous article about 8.0 OLTP_RW performance.
    • and I'm also curious how well MySQL 8.0 scales on this workload when 1 CPU socket (1S) is used compared to 2 CPU sockets (2S)
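In my.conf terms, the two cache configurations above boil down to a single setting (values as listed above; all other settings are the same as in the previous 8.0 OLTP_RW article):

```ini
# cached case : the 1000W data set (~100GB) fully fits in BP, no I/O reads
innodb_buffer_pool_size = 128G

# IO-bound case : not more than 1/3 of the data set can be cached
# innodb_buffer_pool_size = 32G
```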

Sysbench-TPCC 1000W
Here is the result with MySQL 8.0 :
Comments :
  • the above graph represents the test executed on 1S (left side) and then on 2S (right side)
  • the load starts with 1 user session, then is progressively increased to 2, 4, 8, .. 1024 users
  • as you can see, there is not much difference between the 1S and 2S results..

We're scaling on this workload only up to 32 concurrent users, so having more CPU cores cannot bring any help here.. And what is the bottleneck ? -- we're hitting index RW-lock contention hard here :

with such hot contention, the difference between MySQL 8.0 and older versions cannot be big ;-))

Sysbench-TPCC 1000W, BP=128GB

Comments :
  • interesting that MySQL 8.0 is still winning here anyway !
  • and even on low load, 8.0 mostly matches the TPS of 5.6, which is also very positive
  • (and we may expect even better TPS once the index lock contention is lowered, but this is probably not a simple fix)..
  • no idea why MariaDB is not even matching the TPS level of 5.7 (while using InnoDB 5.7)

Sysbench-TPCC 1000W, BP=32GB
Comments :
  • the same 1000W workload, but with BP=32GB, so more IO-bound activity is expected..
  • however, the TPS results between the BP=128GB and BP=32GB configs are not that different, right ?
  • this is simply because the TPCC workload itself is not as IO-intensive as many might imagine ;-))
  • (well, yes, it still involves a lot of I/O writes and reads, but they are often grouped around the same data, so already-cached pages can be re-used)
  • (this is the complete opposite of, for ex., the Sysbench OLTP_RW workload, which with a similar amount of data and BP=32GB becomes extremely aggressive on I/O and sees TPS decrease by several times)
  • again, no idea about MariaDB..

On the other hand, I don't recall hitting the same index lock contention while testing the 1000W dataset with DBT-2, so I was very curious to compare it with Sysbench-TPCC 1000W on 2S and with BP=128GB :

DBT-2 1000W -vs- Sysbench-TPCC 1000W (2S, BP=128GB)
Comments :
  • DBT2 workload is on the left side, and Sysbench-TPCC on the right
  • as you can see, the peak TPS level reached on DBT2 is nearly twice as high as on Sysbench-TPCC
  • why ? -- this is a good question ;-))
  • initially I supposed this was due to more indexes used in the Sysbench-TPCC schema, but removing them did not help..
  • in fact, everything looks similar, but there is still something that gives Sysbench-TPCC a different processing "signature", resulting in this intensive index lock contention..
  • it would be great if Percona engineers could find what makes this difference, and then turn it into an additional test scenario ! -- after which we could definitively forget about DBT2 and use Sysbench-TPCC exclusively ;-))

So far, let's have a look at how things change when the 10x100W dataset is used :

Sysbench-TPCC 10x100W
Here is the result with MySQL 8.0 :
Comments :
  • as in the previous case, the 1S result is on the left, and the 2S result on the right side of the graph
  • and you can see here that peak TPS on 2S is more than 50% higher !
  • (but not yet near x2 ;-))
  • peak TPS on 1S is also higher than the 1000W result on 1S

This is because by using 10 datasets in parallel we multiplied the number of all tables by 10, which also divides the initially observed "index lock" contention by 10 (!) for the same number of concurrent users -- moving the internal bottleneck to another place; now we're hitting the lock management part (lock_sys mutex contention) :

Comments :
  • as you can see, the index lock contention is still present, but it's divided by 10 now
  • and the presence of lock_sys contention blocks us from going further..
  • work in progress, and I'm impatient to see this bottleneck gone ;-))

Ok, and now -- how does MySQL 8.0 compare to other versions ?

Sysbench-TPCC 10x100W, BP=128GB
Comments :
  • MySQL 8.0 is still showing the best TPS result on this workload as well !
  • TPS is lower -vs- 5.7 at 512 and 1024 users load due to higher lock_sys contention in 8.0
  • (by fixing REDO locking in 8.0 we also made other locks hotter, and this is as expected)
  • NOTE : I could "hide" this TPS decrease by limiting thread concurrency, but I intentionally did not do it here, to see the impact of all the other locks..
  • and yes, the index lock itself makes huge trouble when present -- as we see here x2 better TPS -vs- 1000W
  • note as well that MySQL 8.0 matches 5.6 TPS on low load (which, sadly, was no longer the case for 5.7)
  • no idea about MariaDB..

and now the same with 32GB BP :

Sysbench-TPCC 10x100W, BP=32GB
Comments :
  • MySQL 8.0 still does better than the others here too !
  • I'm a bit surprised to see 5.6 do slightly better on low load (but I hope it'll be improved in 8.0 soon)
  • again, TPS is not that different compared to the BP=128GB config, so the workload is not as IO-bound as one might expect.. -- definitely not something to use as a test case if your target is Storage Performance evaluation..

And I was ready to finish my post here, when Percona published their benchmark results comparing InnoDB -vs- MyRocks on the Sysbench-TPCC 10x100W workload ! I was happy to see that MyRocks is progressing and doing well, but my main attention was caught by the InnoDB results.. As you can see from all the results I've presented above, there is not much difference when going from the 128GB BP config to the 32GB BP config, while Percona's results show exactly the opposite: not far from x3 lower TPS between the 128GB and 32GB BP configs ! How is that possible then ?..

Unfortunately the article does not try to explain what is going on behind the scenes, it just shows you the numbers.. -- so, let's try to investigate this a little bit ;-))

First of all, Percona's tests used a 2S 28cores-HT server, so I'll limit my HW setup to 1S and use 24cores-HT only (for sure, it's not the same CPU chips, but at least the number of truly concurrent tasks executed in parallel will be similar)..

Then, comparing the configuration settings, the most notable differences are :
  • checksums = ON
  • doublewrite = ON
  • binlog = ON
  • adaptive hash index = ON (AHI)
  • and lower values for : io capacity / io max / lru depth / BP instances / cleaner threads / etc..

From the follow-up Percona results you can see that this TPS drop between 128GB and 32GB BP is NOT related to binlog, so I have at least one option less to investigate ;-))

So, first of all I wanted to re-check the "base line" results with BP=128GB.

The following graph represents MySQL 8.0 under the Sysbench-TPCC 10x100W workload with different config settings -- I'll try to be short and present all the test cases together rather than one by one, so you can see all 4 tests here :
Comments :
  • test #1 is executed with my config, as presented in all the results above
  • test #2 is the same as #1, but with doublewrite=ON and AHI=ON, and you can see a significant TPS drop..
  • however, this TPS drop is exactly because of AHI ! -- and as I mentioned to PeterZ during his "InnoDB Tutorial" @PerconaLIVE -- as soon as you have data changes in your workload, in the current implementation AHI becomes a bottleneck by itself.. -- so, the only AHI option you should retain in your mind in this case is to switch AHI=OFF ! ;-))
  • so, test #3 is the same as #2, but with AHI=OFF -- and as you can see, we got our lost TPS back ! ;-))
  • another observation you can make here is that doublewrite=ON does not impact the TPS result at all in this workload.. -- even though it's still not fixed yet in MySQL 8.0
  • (Sunny, please, push the new doublewrite code asap to show people the real power of MySQL 8.0 !)
  • and test #4 is with : doublewrite=ON, AHI=OFF, checksums=ON (crc32), io capacity=2K, io capacity max=4K, etc. -- mostly the same as Percona's config, and you can see TPS at the same level again ;-))
  • NOTE : while such low IO capacity settings do not result in TPS drops here, they lower the resistance of the MySQL Server instance to activity bursts -- Checkpoint Age hits its max limit, and sync flushing waits are already present during the test (aka "furious flushing").. -- so, I would not suggest this as the right tuning.
  • I don't test the impact of checksums here, as there will simply be none in this workload (all data are in BP; checksums are only involved on page writes, which happen in the background, so zero impact on overall processing)..

Now, let's see the same workload, but with BP=32GB :
Comments :
  • the first test is, again, with my initial config settings, and TPS is not much lower than with BP=128GB..
  • test #2 is like test #1, but with doublewrite=ON and AHI=ON, and indeed, not far from a x2 TPS drop..
  • let's switch AHI=OFF now, as in the previous case..
  • test #3 is like test #2, but with AHI=OFF, and as expected, we see increased TPS here ;-))
  • now, what is the impact of checksums ?
  • test #4 is the same as #3, but with checksums=ON (crc32) -- mostly zero impact on TPS
  • and test #5 mostly reuses Percona's config, except with AHI=off -- which slightly lowers TPS..

So far :
  • the biggest impact here comes from doublewrite=ON
  • and the impact is NOT because we're writing the data twice.. -- but because there is lock contention in the doublewrite code ! -- historically doublewrite was implemented as a small write zone, and as soon as you have many writes going in parallel -- you have a locking fight..
  • the new doublewrite code was implemented by Sunny without any of these limitations, and as long as your storage is able to follow (e.g. to write the same data twice) -- your TPS will remain the same ! ;-))
  • e.g. in my case I should obtain the same over-10K TPS as you can see in test #1
  • but Percona claims to have it fixed, so that's why this x3 TPS drop in their results between the 128GB and 32GB BP configs surprises me.. -- is it AHI impacting their tests so much ?.. -- no idea
  • then, why is doublewrite more impactful in the 32GB BP config compared to 128GB BP ?
    • with 32GB BP we are doing much more I/O :
    • first of all, only 1/3 of the data may remain cached in BP, so we'll often Read from storage
    • but before being able to Read, we first need to find a free page in BP to re-use
    • and if most of the pages in BP are "dirty" with changes, we need to Write those changes first before declaring a given page "free" and ready to re-use
    • which results in many more Writes -vs- the 128GB BP config (where you don't have any Reads at all)
  • another point : you should also keep in mind to look at TPS results as "one whole"
  • e.g. if you look at the 32 users load, you'll see 7.5K TPS, but if you look at 128 users only -- you'll see 5K TPS (or even less, depending on config ;-))
  • and if you're aiming to reach the max possible TPS, your main load level is then around the peak TPS
  • once the peak TPS is reached, your main worry is only how not to lose it under higher load..
  • there are many solutions available around (and the most optimal IMHO is a ProxySQL pool) -- and you also have the good old "thread concurrency" tuning ;-))

So, let's add test #6, which is the same as test #4 (doublewrite=ON, AHI=OFF, checksums=ON) but with innodb_thread_concurrency=32 :
Comments :
  • as you can see, even on higher load TPS is "improved" now as well ;-))
  • (I'd rather say it's "saved" from contention, as we're not improving anything here, just limiting the concurrency)
  • one day we will have no more TPS drops on high load at all (even with thread concurrency=0), but this day is not yet today (nor tomorrow ;-))

Ok, we're able to "hide" the doublewrite contention, fine ! -- but could we reduce the overall Writes impact here ? (with reduced Writes we'd stress the doublewrite buffer much less, meaning its lock contention would be lower, and probably overall TPS would be higher then ?.. -- and YES, in this workload it's possible ! ;-))

How ?.. -- remember that this is TPCC, e.g. a pure OLTP workload, and, as I mentioned before, the data access is "grouped" (so, some data are re-used from the BP cache before others are Read). These workload conditions perfectly match the story I explained in the 1M IO-bound QPS article with MySQL 8.0 -- so let's try the same test #6, but with InnoDB configured with page size = 4KB, which will be test #7 on the next graph :

Comments :
  • as you can see, with the 4KB page size the TPS level is even higher than in test #1 !! ;-))
  • (note : we still have doublewrite=ON and checksums=ON)
  • and with the new doublewrite code it should be just the same across all test results here (just mind to switch AHI=OFF ;-))
  • also, as you can see, even with x4 less RAM for BP (32GB -vs- 128GB) and doublewrite=ON and checksums=ON, we're still NOT x3 worse on TPS, but rather near the same result as with 128GB BP !!
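For reference, a sketch of the settings behind test #7 (using the standard InnoDB variable names; note that innodb_page_size can only be set when the data directory is first initialized, so switching to 4KB pages requires re-creating and re-loading the instance):

```ini
innodb_page_size = 4k              # test #7 : smaller pages => less I/O per access
innodb_doublewrite = 1             # doublewrite=ON
innodb_checksum_algorithm = crc32  # checksums=ON (crc32)
innodb_adaptive_hash_index = 0     # AHI=OFF (see the AHI bottleneck above)
```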

Summary :
  • Sysbench-TPCC itself still has some surprises (comparing the 1000W case with DBT2)
  • MySQL 8.0 is doing better than any other/older version here !
  • (but we're yet far from a good scalability -- work in progress, stay tuned ;-))
  • believe me, you've not yet finished being surprised by InnoDB ;-))
  • Sunny, please, push to 8.0 the new doublewrite code ASAP !! ;-))

Thank you for using MySQL ! -- stay tuned ;-))



Installing MySQL 8.0 on Ubuntu 16.04 LTS in Five Minutes

Planet MySQL - Mon, 05/14/2018 - 10:27

Do you want to install MySQL 8.0 on Ubuntu 16.04 LTS? In this quick tutorial, I show you exactly how to do it in five minutes or less.

This tutorial assumes you don’t have MySQL or MariaDB installed. If you do, it’s necessary to uninstall them or follow a slightly more complicated upgrade process (not covered here).

Step 1: Install MySQL APT Repository

Ubuntu 16.04 LTS, also known as Xenial, comes with a choice of MySQL 5.7 and MariaDB 10.0.

If you want to use MySQL 8.0, you need to install the MySQL/Oracle Apt repository first:

wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
dpkg -i mysql-apt-config_0.8.10-1_all.deb

The MySQL APT repository installation package allows you to pick which MySQL version you want to install, as well as whether you want access to Preview Versions. Let's leave them all at their defaults:

Step 2: Update repository configuration and install MySQL Server

apt-get update
apt-get install mysql-server

Note: Do not forget to run "apt-get update"; otherwise you may get an old version of MySQL installed from the Ubuntu repository.

The installation process asks you to set a password for the root user:

I recommend you set a root password for increased security. If you do not set a password for the root account, “auth_socket” authentication is enabled. This ensures only the operating system’s “root” user can connect to MySQL Server without a password.

Next, the installation script asks you whether to use Strong Password Encryption or Legacy Authentication:

While using strong passwords is recommended for security purposes, not all applications and drivers support this new authentication method. Going with Legacy Authentication is a safer choice.

All Done

You should have MySQL 8.0 Server running. You can test it by connecting to it with a command line client:

As you can see, it takes just a few simple steps to install MySQL 8.0 on Ubuntu 16.04 LTS.

Installing MySQL 8.0 on Ubuntu 16.04 LTS is easy. Go ahead give it a try!

The post Installing MySQL 8.0 on Ubuntu 16.04 LTS in Five Minutes appeared first on Percona Database Performance Blog.


Page Transitions for Everyone

CSS-Tricks - Mon, 05/14/2018 - 06:55

As Sarah mentioned in her previous post about page transitions using Vue.js, there is plenty of motivation for designers and developers to be building page transitions. Let's consider mobile applications. As mobile applications evolve, more and more attention is given to the animation experience, while the web pretty much stays the same. Why is that?

Maybe it’s because native app developers spend more time working on those animations. Maybe it’s because users say that’s what they want. Maybe it’s because they know more about the environment in which the app is going to run. All of that helps to improve the experience over time. Overall, mobile app developers somehow seem to know or care more about user experience.

If we take a look at how mobile apps are designed today, there is very often some sort of animated transition between states. Even ready-to-use native components have some kind of simple animation between states. Developers and designers realized that this little animation helps a user grasp what is happening in the app. It makes the navigation through the app easier and tells the user where they are going within the app.

For example, when you’re navigating to a subcategory, it usually slides in from the right side of the screen, and we all somehow know what that means.

Animation matters. It can be used to improve the user experience.

When you’re developing a website, it’s easy to spend hours making sure the user sees the whole story by way of top-notch transitions, like the movement between gallery pictures or fancy hover effects...but once a user clicks a link, all of that experience falls apart and you’re starting from the beginning. That’s because the page reloads and we don’t have an easy/obvious/native way to handle animations/transitions in that situation.

On the web, most of the effort used to improve the experience is in structure, visual design, or even the performance of the site. There are some elements you can swipe around here and there, but that’s about it. A boring remnant of the time when the web was simply used to navigate through a bunch of text pages later upgraded with some sliding text.

Yes, we actually used to do this.

There are some very fancy websites that are filled with animation or incredible WebGL hieroglyphs in the background. Unfortunately, they are often hard to navigate and your laptop battery is drained in about 15 minutes. But they are certainly nice to look at. Those sites are full of animation, but most of it is used to impress you, not to help you navigate around, make the experience faster, or make the site more accessible to browse.

The Source of the Problem

All of this raises a big question. We have all the technology to do this page transitions stuff on the web. Why don’t we do it? Is it because we don’t want to? Is it because we don’t think that’s part of our job? Is it so hard that we’d have to charge double for projects and clients aren’t buying it?

Let’s take a look at another possible reason. Often, a company chooses technologies that are common for all of their projects. You, as a front-ender, don't have much control over what’s implemented on the server, so maybe you can't count on some server-side rendering of your JSX.

Maybe you have to choose something stable and generally usable for any sort of solution (which usually means the less codebase, the more flexible), so you’re kind of forced to avoid some framework in your development stack. Unfortunately, it makes sense. You don’t want to implement a contact form in 10 different frameworks just because you’ve decided that some framework is a good base for this particular project. And don't get me started on security, which has been in the process of being improved throughout the years. You cannot just throw it away because you want your user to have a bit more fun browsing your site.

There is another possible reason. Maybe you want to build on WordPress because you want to take advantage of all those open source goodies people prepared for you over the years. Would you exchange all of that "free" functionality for a better experience? Probably not...

The Goal

Aside from drastic and unrealistic changes like getting rid of all images and videos, our website load time from the server just is what it is. We have a server rendered page, and apart from a faster server or better caching, there isn’t much to do to improve that.

So the way to improve the load time is to cheat a bit! Cheat by making the user think it takes less time because they are distracted by what’s happening on the screen. Cheat by loading the content ahead of time and animating the transition.

Let’s introduce some effects that could be used to do the transition. Even simple fade in/fade out can give you a whole different feel, especially with some element indicating loading. Let’s look at some examples.

Note: All of the following examples are websites built on PHP and pretty much work without JavaScript, as any other site would.

Source: rsts.cz/25let (public production site)

A simple fade out/fade in seamlessly improves the experience a user has after clicking a link, and isn't as disruptive as a normal browser reload. The loading indication on the menu icon ensures the user doesn't panic in case the page takes a little longer to load.

Some interesting animation can be achieved by animating different elements in different ways, which can also be combined with fading.

Source: mcepharma.com (public production site)

Or how about covering up the page with some other element?

Source: delejcotebavi.decathlon.cz (public production site)

Those few hours of work give the website a whole new feel by turning your static page into an animated one. And that's without any additional preparation.

However, in case you are already aware of your intention in the design phase, you can adjust the design to your needs. Common elements can be prepared ahead of time.

Source: vyrostlijsme.cz/zaciname (company internal system)

The main navigation element, introduced in the form of bubbles, plays with the user while the content of the next page is loading. The impact is that the user is not bored and knows that something is happening. Since the animation starts on the click of the link, the user doesn't even realize they've been waiting for something.


We have sorted out what we actually want to do and why we want to do it… let's see how we can achieve it.

While there is not much to do on the back-end for this, there is a whole bunch of things you can do on the browser side. A good example is Turbolinks, developed by the Basecamp folks, which takes your website and makes it feel more like an app by loading content with JavaScript. It simply avoids that ugly browser page-to-page reloading jump. Turbolinks deals with browser history, loading, and all that other stuff that would normally happen under the hood of the browser. The key thing here is taking control over loading and replacing content, because as long as the browser doesn't reload, everything that happens on the screen is in our control.
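To make this concrete, here's a rough sketch of the takeover idea in plain JavaScript. This is not Turbolinks' or swup's actual code, and the `main`/`swup` container name is just an assumption for illustration:

```javascript
// Pull the inner HTML of a known container out of a full page's markup.
// A browser implementation would use DOMParser; a regex is enough to
// illustrate the idea here.
function extractContainer(html, tagName, id) {
  const re = new RegExp(
    '<' + tagName + '[^>]*\\bid="' + id + '"[^>]*>([\\s\\S]*?)</' + tagName + '>'
  );
  const match = html.match(re);
  return match ? match[1] : null;
}

// Browser-only part: fetch the next page, swap the container's contents,
// and keep the history in sync so back/forward buttons still work.
async function visit(url) {
  const response = await fetch(url);
  const html = await response.text();
  document.getElementById('swup').innerHTML =
    extractContainer(html, 'main', 'swup');
  history.pushState({ url: url }, '', url);
}
```

Because the browser never reloads, everything that happens on screen between the click and the swap is ours to animate.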

Let’s think of some concepts we can use to improve our experience. The load time is our biggest enemy, so how can we improve it in the browser?


In most cases, the request for the next page doesn't have to wait until a user clicks the link. There is another event that can indicate a user's intention to visit a page: hovering over the link. There are always a few hundred milliseconds between hovering over and clicking a link, so why not use that time to our advantage? Heck, if the majority of your users end up on a next page that is known to you, and you have the statistics to prove it, why not even preload it any time after the initial render, while the user is still looking around the current page?

The following diagram illustrates how many of these things can happen in parallel, saving us some of that precious time, instead of being done one by one. A very common mistake in custom page transition solutions is starting the request only after the current page has been unloaded/hidden.

Possible process of transition

Caching

While this can be a problem for some dynamic sites, static sites are perfect candidates for caching. Why would you load a page twice when you're in control of loading and replacing the content? You can save the contents and reuse them in case the user visits the same page again or returns to it with the browser back button.

Or, you can use the cache to preload pages on hover and empty the cache regularly (maybe after each page visit?) to always have the latest version of the page on dynamic sites while still enabling that preload feature.

Interestingly, Progressive Web Apps also "invest" in this area, although the approach is a bit different.
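As a sketch of the cache-plus-preload idea (a Map keyed by URL; this is not necessarily how swup implements it internally):

```javascript
// Keep fetched pages in memory, keyed by URL. Repeat visits (or the
// back button) are served from the Map instead of the network.
class PageCache {
  constructor() {
    this.pages = new Map();
  }
  set(url, html) {
    this.pages.set(url, html);
  }
  get(url) {
    return this.pages.get(url) || null; // null means "go to the network"
  }
  has(url) {
    return this.pages.has(url);
  }
  clear() {
    this.pages.clear(); // e.g. after each visit on a dynamic site
  }
}

// Browser-only sketch of the hover preload: start the request a few
// hundred milliseconds before the click actually happens.
function preloadOnHover(link, cache) {
  link.addEventListener('mouseover', () => {
    if (!cache.has(link.href)) {
      fetch(link.href)
        .then((response) => response.text())
        .then((html) => cache.set(link.href, html));
    }
  });
}
```

Clearing the cache after each visit gives dynamic sites fresh content while keeping the preload benefit.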


All these concepts are great, but we still need to bring animations to the party.

Since we have taken over the loading and replacement of the page, we can play with its contents at any given time. For example, it's simple enough to add a class to the page while it's loading. This gives us the ability to define another state for the page, or in other words, hide/unload it. When the animation is done and the page is loaded, we are free to replace the content and remove the state class to let the page animate back in again.

Another option is to animate your page "out" with JavaScript, replace the content, and animate it back "in."

There are many possibilities of how to approach this, but in any way, the key feature here is not letting the browser reload.
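One such possibility can be sketched as a small promise-based helper (illustrative only; the four steps are injected as functions, so the same flow works whether the animations are driven by CSS classes or by JavaScript):

```javascript
// Run the "out" animation and the page request in parallel (as in the
// diagram earlier), wait for both, swap the content, then animate back in.
async function transition({ animateOut, load, replaceContent, animateIn }) {
  const [html] = await Promise.all([load(), animateOut()]);
  replaceContent(html);
  await animateIn();
  return html;
}
```

With CSS-based animations, `animateOut` would add the loading class and resolve on `transitionend`, while `load` would fetch the next page.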

Ready to Go

Those are three main concepts that will make a lot of difference. The point is, with a little extra effort, it's not that hard to do this yourself, and I encourage you to do so.

It's going to be a bumpy road with loads of possible headaches, as it's not always easy not to break the browser’s native functionality.

However, if you’re more of a load and go type of person, I have put together a little package called swup that sorts out all of those things that Turbolinks does, plus all three of the concepts we’ve covered here. Here's a little demo.

What are the perks of swup?

  • It works exclusively on the front-end, so no server setup is required. Although you can easily implement a transfer of only required data based on X-Requested-With in the request header (little modification of swup is required based on your particular solution).
  • Uses browser history API to ensure correct functionality of your site, as it would work without swup (back/forward buttons and correct route in the URL bar).
  • It does not have to be part of initial load batch and can be loaded and enabled after the initial render.
  • Takes care of the timing, meaning it automatically detects the end of your transitions defined in CSS and, in the meantime, takes care of loading the next page. The whole process is based on promises, so not a single millisecond is wasted.
  • If enabled in options, swup can preload a page when a link is hovered or once it’s encountered in loaded content (with the data-swup-preload attribute).
  • If enabled in options, swup also caches the contents of the pages and doesn’t load the same page twice.
  • Swup emits a whole bunch of events that you can use in your code to enable your JavaScript, analytics, etc.
  • You can define as many elements to be replaced as you want, as long as they are common to all pages. This enables the possibility of animating common elements on the page while still replacing parts of them. The bubble menu in our earlier example uses this feature: the check signs on the bubbles are taken from the loaded page, but the rest of the bubble stays on the page, so it can be animated.
  • Gives you the possibility to use different animation for transitions between different pages.

For those of you that like your animation done with JavaScript, there is also a version for that called swupjs. While it may be harder to implement animations in JavaScript, it definitely gives you more control.


Animation and a seamless experience don't have to be strictly limited to native applications or the newest technologies. With a little effort, time, and a bit of code, even your WordPress site can feel native-like. And while we already love the web the way it is, I think we can always try to make it a little better.

The post Page Transitions for Everyone appeared first on CSS-Tricks.
