
Generated Columns and ProxySQL Instead of Referenced Tables

Planet MySQL - Tue, 01/02/2018 - 11:28

In this post, we’ll look at how to improve queries using generated columns and ProxySQL instead of implementing a referenced table.

Developers and architects don’t always have the time or complete information to properly analyze and design a database. That is why we see tables with more fields than needed, or with incorrect types. The best solution is to change the database schema and/or the application. In this post, we’ll look at an example of using generated columns (on a char field) instead of creating a referenced table, and how generated columns plus ProxySQL let us avoid changes at the application level.

For this example, I will be using the film table of the Sakila database (with some changes). The original film table had a language_id as tinyint, which refers to the language table:
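For reference, a minimal sketch of that referenced-table design (assuming the stock Sakila definitions, with most columns trimmed for brevity) looks like this:

-- Original (normalized) design: film references language through a tinyint key
CREATE TABLE `language` (
  `language_id` TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,
  `name` CHAR(20) NOT NULL,
  PRIMARY KEY (`language_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `film` (
  `film_id` SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
  `title` VARCHAR(255) NOT NULL,
  `language_id` TINYINT UNSIGNED NOT NULL,
  -- ... other columns omitted ...
  PRIMARY KEY (`film_id`),
  KEY `idx_fk_language_id` (`language_id`),
  CONSTRAINT `fk_film_language` FOREIGN KEY (`language_id`)
    REFERENCES `language` (`language_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;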


I simplified the design of the table and added the language field as a char(20), as it is in table language:

CREATE TABLE `film` (
  `film_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `title` varchar(255) NOT NULL,
  `description` text,
  `release_year` year(4) DEFAULT NULL,
  `language` char(20) NOT NULL,
  `rating` enum('G','PG','PG-13','R','NC-17') DEFAULT 'G',
  PRIMARY KEY (`film_id`),
  KEY `idx_title` (`title`)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8

We inserted 1M records and executed a query that filters by language:

mysql> select count(*) from film where language ='Mandarin';
+----------+
| count(*) |
+----------+
|   256000 |
+----------+
1 row in set (0.92 sec)

mysql> explain select count(*) from film where language ='Mandarin';
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows    | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+-------------+
|  1 | SIMPLE      | film  | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 1014813 |    10.00 | Using where |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+-------------+
1 row in set, 1 warning (0.02 sec)

The explain plan shows that it is performing a full table scan. Adding an index over language speeds up the query from 0.92 sec to 0.09 sec:

mysql> ALTER TABLE film ADD INDEX (language);
Query OK, 0 rows affected (4.87 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select count(*) from film where language ='Mandarin';
+----------+
| count(*) |
+----------+
|   256000 |
+----------+
1 row in set (0.09 sec)

And the explain plan shows that it is using the language index:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: new_film
   partitions: NULL
         type: ref
possible_keys: language
          key: language
      key_len: 60
          ref: const
         rows: 245136
     filtered: 100.00
        Extra: Using index

Now, we are going to add a generated column and an index over this new field:

ALTER TABLE film ADD COLUMN language_id tinyint GENERATED ALWAYS AS (
  CASE language
    WHEN 'English'  THEN 1
    WHEN 'Italian'  THEN 2
    WHEN 'Japanese' THEN 3
    WHEN 'Mandarin' THEN 4
    ELSE 0
  END
) VIRTUAL;

ALTER TABLE film ADD INDEX (language_id);
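As a quick sanity check before involving ProxySQL, we can query the generated column directly; the optimizer should pick the new language_id index (4 is the value the CASE expression assigns to 'Mandarin'):

-- Direct query against the generated column; the plan should show key: language_id
EXPLAIN SELECT count(*) FROM film WHERE language_id = 4\G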

Finally, we are going to add a rule in ProxySQL with the replace pattern:

replace into mysql_query_rules (rule_id, match_pattern, replace_pattern, apply, active)
values (3,
  "(.*)FROM(.*)film(.*)WHERE(.*)language ='(.*)'(.*)",
  "\1 FROM \2 film \3 WHERE \4 language_id = ( CASE '\5' WHEN 'English' THEN 1 WHEN 'Italian' THEN 2 WHEN 'Japanese' THEN 3 WHEN 'Mandarin' THEN 4 ELSE 0 END ) \6",
  1, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
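Once the rule is loaded to runtime (and optionally saved to disk so it survives a restart), you can confirm it is matching traffic by checking its hit counter from the ProxySQL admin interface. A minimal sketch, assuming the default admin port 6032 and admin credentials:

-- Connect to the ProxySQL admin interface, e.g.: mysql -u admin -p -h 127.0.0.1 -P 6032
SAVE MYSQL QUERY RULES TO DISK;  -- optional: persist the rule across ProxySQL restarts
SELECT rule_id, hits FROM stats_mysql_query_rules WHERE rule_id = 3;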

Now, when we execute the query through ProxySQL, the rule rewrites:

SELECT count(*) FROM film WHERE language ='Mandarin';

into:

SELECT count(*) FROM film WHERE language_id = ( CASE 'Mandarin' WHEN 'English' THEN 1 WHEN 'Italian' THEN 2 WHEN 'Japanese' THEN 3 WHEN 'Mandarin' THEN 4 ELSE 0 END )

The explain plan shows it is using the language_id index:

mysql> explain SELECT count(*) FROM film WHERE language ='Mandarin'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: film
   partitions: NULL
         type: ref
possible_keys: language_id
          key: language_id
      key_len: 2
          ref: const
         rows: 507142
     filtered: 100.00
        Extra: Using index

And that the query time decreases to 0.06 sec:

mysql> SELECT count(*) FROM film WHERE language ='Mandarin';
+----------+
| count(*) |
+----------+
|   256000 |
+----------+
1 row in set (0.06 sec)

This speed-up (from 0.09 sec to 0.06 sec) on the SELECT is because the query now uses a smaller index, as you can see in the key_len of the explain plan. Performing a range scan over entries of 64 bytes is slower than performing a range scan over entries of 8 bytes, simply because there is less data to read. For instance, the index_length with just the language index is 36274176 bytes (34.6MB), while with the language_id index it is 14172160 bytes (13.5MB).
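If you want to reproduce that index-size comparison, one way to approximate the per-index footprint is the query below (a sketch that assumes InnoDB persistent statistics, the 5.7 default, and a schema named sakila):

SELECT index_name,
       stat_value * @@innodb_page_size AS size_bytes
FROM mysql.innodb_index_stats
WHERE database_name = 'sakila'   -- adjust to your schema name
  AND table_name = 'film'
  AND stat_name = 'size';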


Generated columns are not a long-term solution. The long-term solution is to add the referenced table, or to have the application use one internally. However, I found this approach particularly useful and practical in two scenarios:

  1. When database performance is dropping and change at the application level is not a priority for the business. This effort can be implemented by the DBA team without any other intervention.
  2. When you must show the performance improvement that the change will make before modifying anything at the application level.

On the downside, you need to install ProxySQL (if you are only testing, you can install it just on the test server), which adds a new, more complex layer to your infrastructure (and you need at least MySQL 5.7 for generated columns). Another thing to take into account is that writes on the table will be a bit slower because of the newly added generated column and indexes.


Considering Typehints As Communication - Paul M. Jones

Planet PHP - Tue, 01/02/2018 - 07:51

Typehints help communicate across time and space, to people who may never meet you or who might not be able to interrogate you about your code, so those people can understand how you expect the code to work.

Adding typehints is a succinct, more-complete form of communication than not adding them. (It is rare, perhaps impossible, for all communication to be fully complete all the time.)

Further, you don’t know in advance which parts of the codebase are going to last for a long time, and which are going to be replaced in relatively short order. It’s probably better to add the typehints when you know what they are, rather than to wait and see if you’ll “need” them later.

Typehints can be considered low-cost mistake-proofing against misunderstanding in an obvious place (i.e., where they are used), without having to look elsewhere (“just read the tests!” [groan]).


Additive Animation with the Web Animations API

CSS-Tricks - Tue, 01/02/2018 - 07:37

These features have not landed in any stable browsers at the time of writing. However, everything discussed is already in Firefox Nightly by default and key parts are in Chrome Canary (with the Experimental Web Platform Features flag enabled), so I recommend using one of those browsers (when reading this article) to see as many of the features in action as possible.

Regardless of your preferred method of animation on the web, there will be times when you need to animate the same property in separate animations. Maybe you have a hover effect that scales an image and a click event that triggers a translate — both affecting the transform. By default, those animations do not know anything about each other, and only one will be visually applied (since they are affecting the same CSS property, the other value will be overridden).

element.animate({
  transform: ['translateY(0)', 'translateY(10px)']
}, 1000);

/* This will completely override the previous animation */
element.animate({
  transform: ['scale(1)', 'scale(1.15)']
}, 1500);

The second animation in this Web Animations API example is the only one that would be visually rendered, since both animations play at the same time and it was the last one defined.

Sometimes we even have grander ideas where we want a foundational animation and then, based on some user interaction or change in state, we smoothly modify the animation a bit midway without affecting its existing duration, keyframes, or easing. CSS Animations and the current Web Animations API in stable browsers today cannot do this out of the box.

A New Option

The Web Animations specification introduces the composite property (and the related iterationComposite). The default composite is 'replace' and has the behavior we have had for years now where an actively animating property's value simply replaces any previously set value — either from a rule set or another animation.

The 'add' value is where things change from the previous norms.

element.animate({
  transform: ['scale(1)', 'scale(1.5)']
}, {
  duration: 1000,
  fill: 'both'
});

element.animate({
  transform: ['rotate(0deg)', 'rotate(180deg)']
}, {
  duration: 1500,
  fill: 'both',
  composite: 'add'
});

Now both animations will be seen as the browser on the fly figures out the appropriate transformation at a given point in the element's timeline accounting for both transformations. In our examples, the easing is 'linear' by default and the animations start at the same time, so we can break out what the effective transform is at any given point. Such as:

  • 0ms: scale(1) rotate(0deg)
  • 500ms: scale(1.25) rotate(60deg) (halfway through first animation, 1/3 through second)
  • 1000ms: scale(1.5) rotate(120deg) (end of first, 2/3 through second)
  • 1500ms: scale(1.5) rotate(180deg) (end of second)

See the Pen Animation Composite by Dan Wilson (@danwilson) on CodePen.

So Let's Get Creative

An individual animation does not just consist of a start state and end state — it can have its own easing, iteration count, duration, and more keyframes in the middle. While an element is mid animation you can throw an additional transformation on it with its own timing options.

See the Pen Add more transform animations by Dan Wilson (@danwilson) on CodePen.

This example lets you apply multiple animations on the same element, all affecting the transform property. To keep from going all out in this example, we limit each animation to a single transformation function at a time (such as only a scale), starting at a default value (such as scale(1) or translateX(0)), and ending at a reasonable random value on that same transformation function, repeated infinitely. The next animation will affect another single function with its own randomized duration and easing.

element.animate(
  getTransform(), // e.g. { transform: ['rotate(0deg)', 'rotate(45deg)'] }
  {
    duration: getDuration(), // between 1000 and 6000ms
    iterations: Infinity,
    composite: 'add',
    easing: getEasing() // one of two options
  }
);

When each animation starts, the browser will effectively find where it is in its previously applied animations and start a new rotation animation with the specified timing options. Even if there is already a rotation going in the opposite direction, the browser will do the math to figure out how much rotation needs to happen.
Since each animation has its own timing options, you are unlikely to see the exact same motion repeated in this example once you have added a few. This gives the animation a fresh feel as you watch it.

Since each animation in our example starts at the default value (0 for translations and 1 for scaling), we get a smooth start. If we instead had keyframes such as { transform: ['scale(.5)', 'scale(.8)'] }, we would get a jump because the element didn't have this scale before and all of a sudden starts its animation at half scale.

How are values added?

Transformation values follow the syntax of <transform-list> in the spec, and if you add a transformation you are appending to that list.

For transform animations A, B, and C the resulting computed transform value will be [current value in A] [current value in B] [current value in C]. For example, assume the following three animations:

element.animate({
  transform: ['translateX(0)', 'translateX(10px)']
}, 1000);

element.animate({
  transform: ['translateY(0)', 'translateY(-20px)']
}, {
  duration: 1000,
  composite: 'add'
});

element.animate({
  transform: ['translateX(0)', 'translateX(300px)']
}, {
  duration: 1000,
  composite: 'add'
});

Each animation runs for 1 second with a linear easing, so halfway through the animations the resulting transform would have the value translateX(5px) translateY(-10px) translateX(150px). Easings, durations, delays, and more will all affect the value as you go along.

Transforms are not the only thing we can animate, however. Filters (hue-rotate(), blur(), etc) follow a similar pattern where the items are appended to a filter list.

Some properties use a number as a value, such as opacity. Here the numbers will add up to a single sum.

element.animate({ opacity: [0, .1] }, 1000);
element.animate({ opacity: [0, .2] }, { duration: 1000, composite: 'add' });
element.animate({ opacity: [0, .4] }, { duration: 1000, composite: 'add' });

Since each animation again is 1s in duration with a linear easing, we can calculate the resulting value at any point in that animation.

  • 0ms: opacity: 0 (0 + 0 + 0)
  • 500ms: opacity: .35 (.05 + .1 + .2)
  • 1000ms: opacity: .7 (.1 + .2 + .4)

As such, you won't be seeing much if you have several animations that include the value 1 as a keyframe. That is a max value for its visual state, so adding up to values beyond that will look the same as if it were just a 1.

See the Pen Add more opacity animations by Dan Wilson (@danwilson) on CodePen.

Similar to opacity and other properties that accept number values, properties that accept lengths, percentages, or colors will also sum to a single result value. With colors, you must remember they also have a max value, too (whether a max of 255 in rgb() or 100% for saturation/lightness in hsl()), so your result could max out to a white. With lengths, you can switch between units (such as px to vmin) as though it is inside a calc().

For more details, the specification outlines the different types of animation and how the result is calculated.

Working with Fill Modes

When you are not doing an infinite animation (whether you are using a composite or not), the animation will by default not keep its end state once it ends. The fill property allows us to change that behavior. If you want a smooth transition when you add a finite animation, you will likely want a fill mode of either forwards or both to make sure the end state remains.

See the Pen Spiral: Composite Add + Fill Forwards by Dan Wilson (@danwilson) on CodePen.

This example has an animation with a spiral path by specifying a rotation and a translation. There are two buttons that add new one second animations with an additional small translation. Since they specify fill: 'forwards' each additional translation effectively remains part of the transform list. The expanding (or shrinking) spiral adapts smoothly with each translation adjustment because it is an additive animation from translateX(0) to a new amount and remains at that new amount.

Accumulating animations

The new composite option has a third value — 'accumulate'. It is conceptually in line with 'add' except certain types of animations will behave differently. Keeping with our transform, let's start with a new example using 'add' and then discuss how 'accumulate' is different.

element.animate({
  transform: ['translateX(0)', 'translateX(20px)']
}, {
  duration: 1000,
  composite: 'add'
});

element.animate({
  transform: ['translateX(0)', 'translateX(30px)']
}, {
  duration: 1000,
  composite: 'add'
});

element.animate({
  transform: ['scale(1)', 'scale(.5)']
}, {
  duration: 1000,
  composite: 'add'
});

At the 1 second mark (the end of the animations), the effective value will be:

transform: translateX(20px) translateX(30px) scale(.5)

Which will visually push an element to the right 50px and then scale it down to half width and half height.

If each animation had been using 'accumulate' instead, then the result would be:

transform: translateX(50px) scale(.5)

Which will visually push an element to the right 50px and then scale it down to half width and half height.

No need for a double take, the visual results are in fact the exact same — so how is 'accumulate' any different?

Technically when accumulating a transform animation we are no longer always appending to a list. If a transformation function already exists (such as the translateX() in our example) we will not append the value when we start our second animation. Instead, the inner values (i.e. the length values) will be added and placed in the existing function.

If our visual results are the same, why does the option to accumulate inner values exist?

In the case of transform, order of the list of functions matters. The transformation translateX(20px) translateX(30px) scale(.5) is different than translateX(20px) scale(.5) translateX(30px) because each function affects the coordinate system of the functions that follow it. When you do a scale(.5) in the middle, the latter functions will also happen at the half scale. Therefore with this example the translateX(30px) will visually render as a 15px translation to the right.

See the Pen Visual Reference: Transform Coordinate Systems by Dan Wilson (@danwilson) on CodePen.

Therefore, with accumulation we can have an order that is different than when we always append the values to the list.

Accumulating for Each Iteration

I mentioned before that there is also a new related iterationComposite property. It provides the ability to do some of the behaviors we have already discussed except on a single animation from one iteration to the next.

Unlike composite, this property only has two valid values: 'replace' (the default behavior you already know and love) and 'accumulate'. With 'accumulate' values follow the already discussed accumulation process for lists (as with transform) or are added together for number based properties like opacity.

As a starting example, the visual result for the following two animations would be identical:

intervals.animate([{
  transform: `rotate(0deg) translateX(0vmin)`,
  opacity: 0
}, {
  transform: `rotate(50deg) translateX(2vmin)`,
  opacity: .5
}], {
  duration: 2000,
  iterations: 2,
  fill: 'forwards',
  iterationComposite: 'accumulate'
});

intervals2.animate([{
  transform: `rotate(0deg) translateX(0vmin)`,
  opacity: 0
}, {
  transform: `rotate(100deg) translateX(4vmin)`,
  opacity: 1
}], {
  duration: 4000,
  iterations: 1,
  fill: 'forwards',
  iterationComposite: 'replace' // default value
});

The first animation is only bumping up its opacity by .5, rotating 50 degrees, and moving 2vmin for 2000 milliseconds. It has our new iterationComposite value and is set to run for 2 iterations. Therefore, when the animation ends, it will have run for 2 * 2000ms and reached an opacity of 1 (2 * .5), rotated 100 degrees (2 * 50deg) and translated 4vmin (2 * 2vmin).

See the Pen Spiral with WAAPI iterationComposite by Dan Wilson (@danwilson) on CodePen.

Great! We just used a new property that is supported in only Firefox Nightly to recreate what we can already do with the Web Animations API (or CSS)!
The more interesting aspects of iterationComposite come into play when you combine it with other items in the Web Animations spec that are coming soon (and also already in Firefox Nightly).

Setting New Effect Options

The Web Animations API as it stands in stable browsers today is largely on par with CSS Animations with some added niceties like a playbackRate option and the ability to jump/seek to different points. However, the Animation object is gaining the ability to update the effect and timing options on already running animations.

See the Pen WAAPI iterationComposite & composite by Dan Wilson (@danwilson) on CodePen.

Here we have an element with two animations affecting the transform property and relying on composite: 'add' — one that makes the element move across the screen horizontally and one moving it vertically in a staggered manner. The end state is a little higher on the screen than the start state of this second animation, and with iterationComposite: 'accumulate' it keeps getting higher and higher. After eight iterations the animation finishes and reverses itself for another eight iterations back down to the bottom of the screen where the process begins again.

We can change how far up the screen the animation goes by changing the number of iterations on the fly. These animations are playing indefinitely, but you can change the dropdown to a different iteration count in the middle of the animation. If you are, for example, going from seven iterations to nine and you are seeing the sixth iteration currently, your animation keeps running as though nothing has changed. However, you will see that instead of starting a reverse after that next (seventh) iteration, it will continue for two more. You can also swap in new keyframes, and the animation timing will remain unchanged.

animation.effect.timing.iterations = 4;

animation.effect.setKeyframes([
  { transform: 'scale(1)' },
  { transform: 'scale(1.2)' }
]);

Modifying animations midway may not be something you will use every day, but since it is something new at the browser level, we will be learning its possibilities as the functionality becomes more widely available. Changing iteration counts could be handy for a game when a user gets a bonus round and gameplay continues longer than originally intended. Different keyframes can make sense when a user goes from an error state to a success state.

Where do we go from here?

The new composite options and the ability to change timing options and keyframes open new doors for reactive and choreographed animations. There is also an ongoing discussion in the CSS Working Group about adding this functionality to CSS, even beyond the context of animations — affecting the cascade in a new way. We have time before any of this will land in a stable major browser, but it is exciting to see new options coming and even more exciting to be able to experiment with them today.

Additive Animation with the Web Animations API is a post from CSS-Tricks


Premier Open Source Database Conference Call for Papers closing January 12 2018

Planet MySQL - Tue, 01/02/2018 - 02:21

The call for papers for Percona Live Santa Clara 2018 was extended until January 12, 2018. This means you still have time to get a submission in.

Topics of interest: MySQL, MongoDB, PostgreSQL & other open source databases. Don’t forget all the upcoming databases too (there’s a long list at db-engines).

To be fair, I think that in the catch-all “other” category we should also be thinking a lot about things like containerisation (Docker), Kubernetes, Mesosphere, the cloud (Amazon AWS RDS, Microsoft Azure, Google Cloud SQL, etc.), analytics (ClickHouse, MariaDB ColumnStore), and a lot more. Basically, anything that would benefit an audience of database geeks who are looking at it from all aspects.

That’s not to say case studies shouldn’t be considered. People always love to hear about stories from the trenches. This is your chance to talk about just that.


My 2018 Databases Wishlist

Planet MySQL - Mon, 01/01/2018 - 16:36

Well, the most important wishes I have for 2018 are a bit off-topic for this blog: forms of organisation without a formal authority, schools not teaching religions, and so on. But in this post, I will write about databases… as usual.

So, here is my wishlist, for what it’s worth.

More research on Learned Indexes

If you don’t know what I’m talking about, see this paper. Having a data structure faster than B-Trees is exciting. But of course, I’d also like to see considerations of write performance.

Progress on using ML for database tuning

See this article. I don’t think that Machine Learning will ever be able to replace (good) DBAs, but having a tool which suggests tuning based on real workload sounds great. It can be a valid help for DBAs.

More research on stored functions transformation

Stored functions are useful but slow. But see this paper. It seems it is possible to transform imperative programs to queries, improving the complexity by some orders of magnitude.

On a side note, MariaDB implemented a lot of syntax from Oracle for stored procedures. While this sounds like a precise commercial strategy, the technical improvement on this area is great. Still, what I’d like to see is better performance, as well as support for external languages.

Galera 4

Let me be clear, I didn’t read any announcement that Galera 4 will be released this year. But they announced exciting news over time, and still the new version isn’t here. At some point, it should be released (hopefully).

Transactional DDL in the MySQL ecosystem

MySQL 8.0 has support for atomic DDL statements. They did it in a good way: it’s engine independent and, while it uses InnoDB information_schema tables, any engine is free to add support for this feature. They stated that this is the basis for transactional DDL, but we are not yet there. MariaDB has a task for transactional DDL.

EDIT: Thanks to Valerii Kravchuk for pointing me MDEV-11424 – Instant ALTER TABLE of failure-free record format changes. It is clearly worth adding it to my wishlist: please Maria, get it done!

Engines, engines, engines

RocksDB is great, please consolidate it. TokuDB can improve in many ways, please don’t stop investing in it. The next version of SPIDER will be in MariaDB 10.3; I hope that its development will be a bit more active in the future.

Don’t kill MyISAM. It is still useful in some cases. For Catawiki’s use cases, I find it better than InnoDB for temporary tables. Also, JFG has a great use case example.

More progress on Tarantool and CockroachDB

Tarantool is a great database, originally NoSQL. It is extremely scriptable (it can actually be seen as a Lua application server), and its modules allow reading and writing data from a wide variety of data sources, including MySQL replication. Recently, SQL support has been added.

CockroachDB is an open source RDBMS designed to scale geographically. It uses distributed transactions. It also allows tuning the redundancy of data at the table level and defining replication zones.

Great conferences

I will be at both M18 (I’m not sponsored by my company, but I chose to go anyway) and Percona Live. At M18 I will give a talk titled Somewhere between schema and schemaless. Of course, I also submitted proposals for Percona Live; let’s see if they get accepted.


Thank You (2017 Edition)

CSS-Tricks - Mon, 01/01/2018 - 08:12

As 2017 comes to a close, as we do each year, let's take a numbers-based glance back at CSS-Tricks. And more importantly, tip our collective hat to all y'all that come here and read the site.

We really do think of the site as somewhere you come and read. While a lot of people's experience with CSS-Tricks is a place that shows up in search results to find the answer to some web design/dev question (and that's awesome), another way to experience the site is to read it like a magazine. We publish an article (or a couple) nearly every day, from a variety of authors, with the hope that it's interesting and exposes us all to new ideas.

According to Google Analytics, which we've had installed and reporting anonymously since day 1 around here, we had 75 million pageviews this year. Very little of CSS-Tricks makes use of any kind of "single page app" type tech, so that's pretty accurate to people visiting and clicking around. It's down from 77 million last year. I'd like to think that's because ad blockers, which often block Google Analytics, are up in usage among CSS-Tricks visitors, but it's just as likely we're simply down a smidge this year. Sessions are also down a smidge at 54 million, but Users held steady at 21 million.

We were on a publishing roll though! We published 595 posts, blowing away last year's 442, the previous record. We also published 50 pages (i.e. snippets/videos/almanac entries), beating 43 last year. Certainly, we're in favor of quality over quantity, but I think this is a healthy publishing pace when our goal is to be read, in a sense, like a magazine. That has become clearer to me this year. We produce content with complementary goals, and one of those goals is that of a magazine. We hope the site is worth reading day after day and week after week. The other is that the content lasts and is referenceable for many years to come. Hopefully thinking of things that way can be a guiding light, balancing news and reference content, while avoiding stuff that is neither.

I always wished there was an easy way to figure out what the most popular articles published that year were, but I still don't have a great way to do that. The top 10 is dominated by our big guides, things like our Guides to Grid, Flexbox, SVG, and centering.

Those top 10 make up about 15% of total traffic, which is a massive slice, but that leaves 85% of traffic as part of the "long tail". That's a satisfying thought when you're in it for the long haul as we are. Not every article is a huge top 10 smash hit, but each contributes to the long tail, which collectively is a much bigger slice anyway.

For the last bunch of months, we've been using Algolia for search. My plan only has 7 days of analytics retention, which isn't enough data to expose major trends. In looking at a week of data though, you can see some surprising top terms like React, SVG, grid, flexbox, font, border... Another thing that's clear is that on-site search is quite important. Algolia reports ~15,000 queries a day. I don't think that represents "user typed in a thing and submitted it" because the search happens in real-time, so each character typed can result in a query. Still, likely hundreds or low-thousands of searches a day.

I'm going to convert us back to using Google search. I think Algolia is a fantastic product, I just don't have the developer time right now to give it the work it needs.

The location of y'all seems to be spreading out internationally little by little. The United States is down to 22% of traffic from 23%, and India is back down to 11% from 12% (meaning more traffic came from elsewhere). Those are the top 2, then it goes UK, Germany, Canada, France, Netherlands, Poland, Russia, Brazil. What would be really interesting is to figure out visitors per capita. For example, Singapore has a population of 5.6 million and had 111,319 unique users, so you could say about 2% of Singaporeans visited CSS-Tricks last year. Lols probably not, but hey, that's what the back-of-the-napkin math shows. Whereas the 4.6 million unique visitors from the US, against a population of 323 million, means only about 1.5% visited.

We gained about 10,000 newsletter subscribers this year for a total of 31,376. That's about a third of the entire list size. I love our newsletter. I think it has a ton of potential and is always worth reading. To be perfectly honest I'd like to see our newsletter subscriber numbers be higher than our Twitter followers, but that'll be a tall hill to climb.

Search is the origin of 86.6% of the traffic we get. Direct visits and referral links result in another 5% each. Social media just 2.5%. Whenever I look at that I'm reminded of the disproportionate amount of energy spent on it. Still, it aligns with our goal of being a publication people regularly read and being a source of news, so it feels worth it.

Speaking of social media, we gained 44,000 followers on Twitter last year, again an astonishing number, though growth has been down year-over-year for the last several years. 71,900 likes on Facebook, only rising about 3,000, which isn't bad considering we do hardly anything at all on Facebook. Growth is much more explosive on YouTube. We're at 40,123 subscribers there, up from 32,174 last year, despite only posting a paltry 6 videos.

This is a classic loop in my head: maybe we should have a dedicated social media person! But even part-time employees are expensive... is it worth it? How much more potential is there? Could they add so much value they pay for themselves? What else could they do? And then the questions swirl around in my head so quickly the idea fizzles out. I guess I'll just say if that kind of thing interests you, reach out!

For once, mobile traffic is actually up: 6.2% this year from below 5% last year. Industry-wide, that's rock bottom. CSS-Tricks is just weird that way. A lot of developers are searching for stuff at work, unsurprisingly. Less than 1% is tablets. 30% of the mobile traffic is iPhones.

Y'all left about 5,040 comments on the site this year, which is a smidge down year over year from the past few years, but it actually feels significantly up to me, since we've been much more stringent about which comments we publish this year. I do the vast majority of comment moderation and I trash a lot more than I used to. Anything that's off-topic, rude, or unhelpful doesn't see the light of day. I hope that doesn't scare you off from commenting. In fact, I hope it encourages it. Anything on-topic and helpful will absolutely be published and won't be lost in a sea of junk.

We've had 20,238 people use our contact form. Don't worry, I love email.

Goal Review

Double newsletter subscribers. We didn't double but we grew by a third, which is pretty strong. Especially since we did little to nothing to promote it. We probably need to do a better job of promoting it and somehow incentivizing signups, especially since it's such a good way to reach people.

More pairing videos. Pretty hard fail here. The main difficulty is scheduling multiple people together, combined with the pressure of producing something worth watching. It's one thing for an audio podcast like ShopTalk where we can schedule someone and just chit-chat about tech. It's another thing to ask someone to pair with you and essentially do live coding. It's still a good idea, it just needs a more serious time commitment and leader. And honestly, probably a sponsor so that it can be worth everyone's time.

Maintain a mostly-daily publishing schedule. Check and check! This is the first year we've actually kept an honest to god schedule, structured after daily posting. We'll be moving forward with that for sure.

Assemble content in more useful ways. We got a good start on this with Guides. We haven't done a ton with them yet, but we have given ourselves a way to build these without too much effort, and I think the potential in them is fantastic.

New Goals

Publish something in a new format. We have a lot of publishing power around here with lots of writers and a solid platform. Let's use it to publish a book or something book-like.

More editorial vision. Admittedly, what we published each day is a bit random. That's not a terrible thing since the site has a niche anyway, but I'd call it a good goal to exert some editorial guidance to what we publish and when. Meaning commissioning and shepherding work that is a good fit for this site and publishing it with timing that makes some sort of sense.

Interesting sponsorship partners. The most rewarding and powerful partnerships between sponsors and publications are ones of mutual respect and admiration. There are loads of businesses out there that I think are doing a terrific job of building what they build, and I'd like to forge relationships with them to promote what they do, and do that promotion in a way that we are uniquely able to. Get in touch if you think you're a part of such a company.

Create another very popular page. It's just a matter of time and topic. I'd like to find at least one web development topic that could really use a strong reference page and spend a good amount of time making what we consider a perfect fit for that, with the goal of it really resonating with developers.

Most importantly

Thank you, again, for being a part of this community.

See the Pen Text Animation with CSS - thank you screen! by Nooray Yemon (@yemon) on CodePen.

Thank You (2017 Edition) is a post from CSS-Tricks


What You Build

CSS-Tricks - Sat, 12/30/2017 - 07:26

I tweeted this yesterday and it seemed to resonate with some folks:

Just a little reminder that it’s about 100 times more important what you build than how you build it.

— Chris Coyier (@chriscoyier) December 10, 2017

What I was feeling when I wrote that was a little tired of endless discussions on tech minutia and yearning for more focus on what we are building and discussion about why.

If you're a reader of this site, you and I live in the same bubble. It's a nice bubble. It's full of smart people who like to chat about web design and development. I live it and love it.

It's easy to get into a heated discussion about frameworks, what type of class names make the most sense, which optimization techniques are most important, or what part of your code base should be responsible for styling. Those are great discussions that guide our industry.

But what is more important? The naming convention you chose or if your user can actually book a flight? Which state store library you picked or if you actually had the scarf your user was looking for? Which command line tool pulled your dependencies or whether someone was able to find and read the instructions to send in their court appeal?

I was trying to encourage people to build and think about what they are building rather than get too discouraged about the how. You're building things for people and that's such a big responsibility. One that outweighs technical choices, as important as they seem.

I enjoyed the pushback I got on it though.

Most of it centered around the fact that if you make poor tech choices, that limits the quality of what you build and slows your ability to change and adapt to changing user needs. Fair enough.

Good tech just might lead to directly better features and UX for your users. Fair enough. Good tech might be a differentiator between you and your competition. Fair enough.

My favorite was calling out the story of the three little pigs. If you aren't familiar, there is a Big Bad Wolf that is trying to eat the piggies. Each of them built a house to protect themselves. I imagine you can guess which of the pigs did better: the one who built their house out of hay, or the pig who built their house out of bricks.

Fair enough.

Drew McLellan gets into this in All That Glisters, but focuses on the old vs new tech issue:

There are so many new tools, frameworks, techniques, styles and libraries to learn. You know what? You don’t have to use them. You’re not a bad developer if you use Grunt even though others have switched to Gulp or Brunch or Webpack or Banana Sandwich. It’s probably misguided to spend lots of project time messing around with build tool fashions when your so last year build tool is already doing what you need.

And this gem:

Software, much like people, is born with a whole lot of potential and not much utility. Newborns — both digital and meaty — are exciting and cute but they also lead to sleepless nights and pools of vomit.

He goes on to say that what you are building might help dictate your tech choices. Ah yes, the what. Not only is what your thing does literally the only thing people care about, it also helps guide tech choices.

What You Build is a post from CSS-Tricks


Percona Database Performance Blog Year in Review: Top Blog Posts

Planet MySQL - Fri, 12/29/2017 - 13:27

Let’s look at some of the most popular Percona Database Performance Blog posts in 2017.

The closing of a year lends itself to looking back. And making lists. With the Percona Database Performance Blog, Percona staff and leadership work hard to provide the open source community with insights, technical support, predictions and metrics around multiple open source database software technologies. We’ve had over three and a half million visits to the blog in 2017: thank you! We look forward to providing you with even better articles, news and information in 2018.

As 2017 moves into 2018, let’s take a quick look back at some of the most popular posts on the blog this year.

Top 10 Most Read

These posts had the most number of views (working down from the highest):

Millions of Queries per Second: PostgreSQL and MySQL’s Peaceful Battle at Today’s Demanding Workloads

Straight-up comparisons of different open source databases’ ability to handle today’s workload levels are always popular. Can open source databases cope with millions of queries per second? Many open source advocates would answer “yes.” However, assertions aren’t enough for well-grounded proof. That’s why in this blog post, Sveta Smirnova (Principal Technical  Services Engineer) shared the benchmark testing results from Alexander Korotkov (CEO of Development, Postgres Professional) and herself. The comparative research of PostgreSQL 9.6 and MySQL 5.7 performance is especially valuable for environments with multiple databases.

MySQL vs. MariaDB: Reality Check

Another highly viewed blog is Colin Charles’ (Chief Technology Evangelist) comparison of MariaDB vs. MySQL (including Percona Server for MySQL). The goal of this blog post is to evaluate, at a higher level, MySQL, MariaDB and Percona Server for MySQL side-by-side to better inform the adoption decision-making process. It is largely an unofficial response to published comments from the MariaDB Corporation.

Column Store Database Benchmarks: MariaDB ColumnStore vs. Clickhouse vs. Apache Spark

Alexander Rubin’s (Principal Consultant in Architecture & Projects) blog shares some column store database benchmark results, comparing the query performance of MariaDB ColumnStore v. 1.0.7 (based on InfiniDB), Clickhouse and Apache Spark. The purpose of the benchmark is to see how these three solutions work on a single big server, with many CPU cores and large amounts of RAM. Both systems are massively parallel (MPP) database systems, so they should use many cores for SELECT queries.

Dealing with MySQL Error Code 1215: “Cannot add foreign key constraint”

Our Support customers often come to us with things like “My database deployment fails with error 1215”, “I am trying to create a foreign key and can’t get it working” or “Why am I unable to create a constraint?” Marcos Albe (Principal Support Engineer) looks at how to resolve MySQL error code 1215: “Cannot add foreign key constraint.”

ZFS from a MySQL perspective

This post (first of a series) from Yves Trudeau (Principal Consultant) explores and investigates the use of ZFS with MySQL to see when ZFS can help solve real problems. He examines ZFS and tries to draw parallels with the architecture of MySQL to better understand how ZFS works and behaves.

Overview of Different MySQL Replication Solutions

Although information is plentiful, replication is often incompletely understood, or completely misunderstood. This blog post by Dimitri Vanoverbeke (Solution Engineer) reviews some of the MySQL replication concepts that are part of the MySQL environment (and Percona Server for MySQL specifically). He clarifies some of the misconceptions people have about replication.

Quest for Better Replication in MySQL: Galera vs. Group Replication

Thanks to the Galera plugin, founded by the Codership team, we’ve had the choice between asynchronous and virtually synchronous replication in the MySQL ecosystem for quite a few years already. Moreover, we can choose between at least three software providers: Codership, MariaDB and Percona, each with its own Galera implementation. The situation recently became much more interesting when MySQL Group Replication went into GA (stable) stage in December 2016.

In this post, Przemysław Malkowski (Principal Support Engineer) points out a couple of interesting differences between Group Replication and Galera, which should be helpful to those considering switching from one to another (or if they are planning to test them).

Using NVMe Command Line Tools to Check NVMe Flash Health

Checking the NVMe flash health from the command line for both older and newer drives is fairly straightforward. But what does the health information mean? In this blog post, Peter Zaitsev (CEO) investigates how to check NVMe flash health on both older and newer hardware, and provides a breakdown of the collected information.

ClickHouse: New Open Source Columnar Database

In this blog post, Alexander Rubin (Principal Consultant in Architecture & Projects) decided to try ClickHouse: an open source column-oriented database management system developed by Yandex (it currently powers Yandex.Metrica, the world’s second-largest web analytics platform). He runs the database using Wikipedia page statistics as a data source and compares the results to a previous test of Apache Spark and MySQL.

Setup ProxySQL for High Availability (not a Single Point of Failure)

Percona has had a lot of opportunities to present and discuss a very powerful tool that is more and more used in the architectures supporting MySQL: ProxySQL. In this blog, Marco Tusa (High Availability Practice Manager) looks at how to set up ProxySQL for high availability.

Honorable Mention

MongoDB 3.4 Views

Views are often used in relational databases to achieve both data security and a high level of abstraction, making it easier to retrieve data. In this blog post, Adamo Tonete (Senior Support Engineer) looks at how to set up views in MongoDB 3.4.

Top 10 Most Commented

These posts generated some healthy discussions (not surprisingly, this list overlaps with the first):

Posts Worth Revisiting

Don’t miss these great posts that have excellent information on important topics:

Have a great end of the year celebration, and we look forward to providing more great blog posts in 2018.


Auto-Sizing Columns in CSS Grid: `auto-fill` vs `auto-fit`

CSS-Tricks - Fri, 12/29/2017 - 06:40

One of the most powerful and convenient CSS Grid features is that, in addition to explicit column sizing, we have the option to repeat-to-fill columns in a grid and then auto-place items in them. More specifically, we can specify how many columns we want in the grid and then let the browser handle the responsiveness of those columns for us, showing fewer columns on smaller viewport sizes and more columns as the screen estate allows, without needing to write a single media query to dictate this responsive behavior.

We're able to do that using just one line of CSS — the one-liner that reminds me of when Dumbledore just waved his wand in Horace's apartment and "the furniture flew back to its original places; ornaments reformed in midair, feathers zoomed into their cushions; torn books repaired themselves as they landed upon their shelves...".

This magical, media-query-less responsiveness is achieved using the repeat() function and the auto placement keywords.

Much has been written about this particular one-liner, so I won't be elaborating on how it works. Tim Wright has a great writeup on this that I recommend reading.

To summarize, the repeat() function allows you to repeat columns as many times as needed. For example, if you're creating a 12-columns grid, you could write the following one-liner:

.grid {
  display: grid;
  /* define the number of grid columns */
  grid-template-columns: repeat(12, 1fr);
}

The 1fr is what tells the browser to distribute the space between the columns so that each column equally gets one fraction of that space. That is, they're all fluid, equal-width columns. And the grid will, in this example, always have 12 columns regardless of how wide it is. This, as you have probably guessed, is not good enough, as the content will be too squished on smaller viewports.

So we need to start by specifying a minimum width for the columns, making sure they don't get too narrow. We can do that using the minmax() function.

grid-template-columns: repeat( 12, minmax(250px, 1fr) );

But the way CSS Grid works, this will cause overflow in the row. The columns will not wrap into new rows if the viewport width is too narrow to fit them all with the new minimum width requirement, because we're explicitly telling the browser to repeat the columns 12 times per row.

To achieve wrapping, we can use the auto-fit or auto-fill keywords.

grid-template-columns: repeat( auto-fit, minmax(250px, 1fr) );

These keywords tell the browser to handle the column sizing and element wrapping for us, so that the elements will wrap into rows when the width is not large enough to fit them in without any overflow. The fraction unit we used also ensures that, in case the width allows for a fraction of a column to fit but not a full column, that space will instead be distributed over the column or columns that already fit, making sure we aren't left with any empty space at the end of the row.

At first glance of the names, it might seem like auto-fill and auto-fit are opposites. But in fact, the difference between them is quite subtle.

Maybe it seems like you are getting extra space at the end of the column with auto-fit. But when and how?

Let's take a look at what is really happening under the hood.

Fill or Fit? What's the difference?

In a recent CSS workshop, I summarized the difference between auto-fill and auto-fit as follows:

auto-fill FILLS the row with as many columns as it can fit. So it creates implicit columns whenever a new column can fit, because it's trying to FILL the row with as many columns as it can. The newly added columns can and may be empty, but they will still occupy a designated space in the row.

auto-fit FITS the CURRENTLY AVAILABLE columns into the space by expanding them so that they take up any available space. The browser does that after FILLING that extra space with extra columns (as with auto-fill ) and then collapsing the empty ones.

This may sound confusing at first, but it makes a lot more sense when you visualize this behavior. So we'll be doing exactly that, with the Firefox DevTools' Grid Inspector helping us visualize the size and position of our Grid items and columns.

Consider the following demo as an example.

See the Pen auto-fill vs auto-fit by Sara Soueidan (@SaraSoueidan) on CodePen.

The columns are defined using the repeat() function and have a minimum width of 100px and a maximum set to 1fr, so that they expand and equally share any extra space when it is available. As for the number of columns per row, we're going to use the auto-placement keywords, so that the browser takes care of the responsiveness of the grid and wraps the columns into new rows when needed.

The browser will place and size the columns in the first example using the auto-fill keyword, and it will use auto-fit for the second.

.grid-container--fill {
  grid-template-columns: repeat(auto-fill, minmax(100px, 1fr));
}

.grid-container--fit {
  grid-template-columns: repeat(auto-fit, minmax(100px, 1fr));
}

Up to a certain point, both auto-fill and auto-fit show identical results.

But they don't have identical behavior under the hood. It just so happens that they will give the same result up to a certain viewport width.

The point at which these two keywords start exhibiting different behaviors depends on the number and size of columns defined in grid-template-columns, so it will differ from one example to another.

The difference between these two keywords becomes apparent when the viewport gets wide enough to fit one (or more) extra column(s) into the row. At that point, the browser is presented with two ways to handle the situation, and how it handles it largely depends on whether or not there is content to be placed into that extra column.

So, when the row can fit a new column, the browser goes like:

  1. "I have some space to fit a new column in there. Do I have any content (i.e. grid items) to go into that extra column? Yes? OK, all good. I'll add the column into the row, and it will wrap into a new row on smaller viewports."
  2. In the case where there is no content to place into a new column: "Do I allow this new column to occupy space in the row (and, therefore, affect the position and size of the rest of the rows)? or do I collapse that column and use its space to expand the other columns?"

auto-fill and auto-fit provide the answer to that last question in particular, and dictate how the browser should handle this scenario. To collapse or not to collapse, that is the question. And that is also the answer.
Whether you want it to collapse or not depends on your content, and how you want that content to behave in the context of a responsive design.

Let's see how this works. To visualize the difference between auto-fill and auto-fit, take a look at the following screen recording. I'm resizing the viewport enough to create enough horizontal space to fit one (or more) column(s) into the row. Remember that these two rows are identical and have the exact same content and column number. The only difference in this demo is that I'm using auto-fill for the first one and auto-fit for the second.

Notice what is happening there? If it's still not clear, the following recording should make it clearer:

auto-fill behavior: "fill that row up! Add as many columns as you can. I don't care if they're empty — they should all still show up. If you have enough space to add a column, add it. I don't care if it's empty or not, it's still occupying space in the row as if it were filled (as in: filled with content/grid items)."

While auto-fill fills the row with as many columns as it can, even if those columns are empty, auto-fit behaves a little differently.
auto-fit also fills the row with more columns as the viewport width increases, but the only difference is that the newly added columns (and any column gaps associated with them) are collapsed. The Grid inspector is a fantastic way to visualize this. You'll notice that columns are being added if you keep your eye on the grid line numbers, which increase as the viewport width increases.

auto-fit behavior: "make whatever columns you have fit into the available space. Expand them as much as you need to fit the row size. Empty columns must not occupy any space. Put that space to better use by expanding the filled (as in: filled with content/grid items) columns to fit the available row space."

A useful tip to remember here is that the columns added in both cases (whether collapsed or not) are not implicit columns — that has specific meaning in the spec. In our case, we are adding/creating columns in the explicit grid in the same way as if we declared you wanted 12 columns, for example. So column number -1 will work to target the end of this grid, which it doesn't if you are creating columns in the implicit grid. Props to Rachel Andrew for this tip.

Summing Up

The difference between auto-fill and auto-fit for sizing columns is only noticeable when the row is wide enough to fit more columns in it.

If you're using auto-fit, the content will stretch to fill the entire row width. Whereas with auto-fill, the browser will allow empty columns to occupy space in the row like their non-empty neighbors — they will be allocated a fraction of the space even if they have no grid items in them, thus affecting the size/width of the latter.

Which behavior you want or prefer is completely up to you. I have yet to think of a use case where auto-fill would make more sense than auto-fit. Do you have any use cases? If you do, please feel free to share them in the comments below.

Auto-Sizing Columns in CSS Grid: `auto-fill` vs `auto-fit` is a post from CSS-Tricks


This Week in Data with Colin Charles 21: Looking Back on 2017 in the Open Source Database Community

Planet MySQL - Fri, 12/29/2017 - 04:33

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The end of the year is upon us, and this column is being penned a little earlier than usual as I enjoy a quick bout of wrapping up for the year.

We’ve had a lot of things happen in the database world. It’s clear that open source is everywhere, the cloud is large, and we’ve seen a lot of interesting movements in our MySQL world, from the standpoint of corporate contributions too. GitHub continues to enhance Orchestrator, and gh-ost keeps on getting better; Facebook has ensured MyRocks is now mainstream in both MariaDB Server and Percona Server for MySQL distributions; Alibaba Cloud and Tencent have been contributing to make MariaDB Server better.

We’ve seen the year end with MySQL 8.0 release candidate ready, as well as MariaDB Server 10.3 Beta. We had MariaDB Server 10.2 go GA in May 2017, and we’ve seen a steady stream of improvements in MySQL & Percona Server 5.7 releases.

Vitess I think is still a bigger deal than it’s made out to be. Slack talking about their use case at Percona Live Dublin was good. ProxySQL is a shining star for proxies.

We’ve seen some database vendors pass on — RethinkDB, with very promising technology, started in 2009, raised $12.2m, but finally ended up at Stripe with the code open and relicensed thanks to the Cloud Native Computing Foundation. We’ve seen Basho (makers of Riak), start in 2008, raised $61.2m, ended up at Bet365 and Riak is all open source now.

We’ve seen MongoDB IPO, MariaDB Corporation raise more money (some reported $98.2m in total), and more money still streams into the database world, like for BlazingDB in the GPU space and so on.

What do I look forward to in 2018? Oracle compatibility and PL/SQL in MariaDB Server 10.3 should be interesting. I’ve heard a lot of buzz around Apache Kafka. I expect we’ll see more manageability in MySQL. And also, fun to note that MMAPv1 in MongoDB has been deprecated before MyISAM in MySQL.

Right before the New Year, it is interesting to look at some proxy statistics of database usage, via the WordPress Statistics. Head down to the MySQL versions, and you’ll note that 5.5 gets 41.9% of users (this could be MySQL/Percona Server for MySQL or MariaDB Server), but you’ll also notice at a close second comes 5.6 at 39.8% of the users (this is only MySQL or Percona Server for MySQL). 5.7 gets 6.4% of the users as the 3rd most popular option, followed by 5.1 at 4.3% of users of WordPress (these folk desperately need to upgrade). 10.0 gets 1.6%, while 10.1 gets 3.8% (these are all MariaDB Server versions only). So the death of MySQL has greatly been exaggerated — people like it, people use it, and I can only imagine if more distributions ship 5.7 or 8.0, this could be a win for MySQL.

Releases

Link List

Upcoming appearances
  • FOSDEM 2018 – Brussels, Belgium – February 3-4 2018
  • SCALE16x – Pasadena, California, USA – March 8-11 2018

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

