emGee Software Solutions Custom Database Applications



Generators: All About the Yield - Nomad PHP

Planet PHP - Tue, 01/02/2018 - 16:41

March 2018 - EU
Presented By
Justin Yost
March 22, 2018
20:00 CET

The post Generators: All About the Yield appeared first on Nomad PHP.

Categories: Web Technologies

You are what you document

CSS-Tricks - Tue, 01/02/2018 - 13:42

There are so many little gems in this piece by Yevgeniy Brikman all about documentation. He digs into a lot more than simply documenting code though and focuses on how we can document every phase of our work, from design to process and beyond.

Here are my favorite lines that made me sit back and shout “Wahoo!”:

When a developer uses your code, they are really learning a new language, so choose the words in it wisely.

...programs must be written for people to read, and only incidentally for machines to execute.

I like how Yevgeniy suggests that there are kinda two different mindsets that we have to get into when writing code: one for making the dang thing work in the first place, and another for explaining how and why we did it a specific way. There’s context-switching that takes place between those different stages of work.

Anyway, when seen in this light, documentation could be much more than a nice-to-have. Instead, it might just be 50% of the work.

Direct Link to Article

You are what you document is a post from CSS-Tricks

Categories: Web Technologies

Two New MySQL Books!

Planet MySQL - Tue, 01/02/2018 - 13:35
There are two new MySQL books, both from Apress. One is an in-depth master course on the subject and the other is a quick introduction.


Pro MySQL NDB Cluster is subtitled Master the MySQL Cluster Lifecycle and, at nearly 700 pages, it is a vital resource for anyone who runs or is thinking about running NDB Cluster. The authors, Jesper Wisborg Krogh and Mikiya Okuno, have distilled their vast knowledge of this difficult subject into a detail-packed but easily readable book. MySQL Cluster is much more complex in many areas than a regular MySQL server, and here you will find all those details. If you run MySQL NDB Cluster then you need this book. The partitioning information in chapter 2 is worth the price of the book alone. I am only a third of the way through the book, but so far it is as clear and concise as any technical book I have read, and it is actually an easy read.

MariaDB and MySQL Common Table Expressions and Window Functions Revealed by Daniel Bartholomew is a slender introduction to CTEs and window functions. If you were raised on MySQL and do not know either subject well, then I highly recommend this book. CTEs are going to have a big impact on the way developers write subqueries and may cause self-joins to become extinct. Window functions will bring a new wave of analytical capability to MySQL. This book is just over 100 pages and has useful examples for novices in either area.

More books! Well yes, there are more MySQL books in the works so save your pocket change in order to buy them when they appear.
Categories: Web Technologies

Fun with Bugs #59 - On MySQL Bug Reports I am Subscribed to, Part II

Planet MySQL - Tue, 01/02/2018 - 12:15
The New Year (which starts on Monday!) gives us a good opportunity to change something in our lives and start doing something new, better, or different. Let's assume I have failed at all of these so far, as I am again posting about MySQL bugs here.

Since my previous post on this topic I've subscribed to 15 more MySQL bugs, and a combination of public holidays and vacation now gives me a good opportunity to review these bug reports.

Here they are, starting from the most recent:
  • Bug #89065 - "sync_binlog=1 on a busy server and slow binary log filesystem stalls slaves". I do not remember seeing multiple threads in "Finished reading one binlog; switching to next binlog" state, but it would be interesting to see this bug report processed properly.
  • Bug #89051 - "EXPLAIN output is different for same content depending when index was added". The way optimizer decides on "range" vs "ref" access is always interesting. Here, based on a recent comment by Øystein Grøvlen, the bug is actually that "Cost values are not correct when optimizer switch from ref-access to range-access in order to use more key parts".
  • Bug #88914 - "Potential null pointer dereference at pointer node->undo_recs (row0purge.cc)". It's funny to see many bugs becoming private as "security" ones while, at the same time, this bug, where the reporter suspects it is exploitable, remains "Open" and ignored for more than two weeks...
  • Bug #88891 - "Filtered replication leaves GTID holes with create database if not exists". I cannot even explain how much I "like" all the kinds of GTIDs I have to deal with, especially the long lists of GTIDs that may be created in the cases described in this report.
  • Bug #88863 - "COUNT(*) can sometimes return bogus value". Now, this is a really funny bug! It must be some race condition, and I'd really prefer to see this bug fixed soon.
  • Bug #88844 - "MySQL crash with InnoDB assertion failure in file pars0pars.cc". Nice crash (that I've never seen before) quickly reproduced by Shane Bester.
  • Bug #88834 - "Uneven slowdown on systems with many users". What can be better to speed up connection than checking the list of users one by one, especially when there are thousands of users?
  • Bug #88827 - "innodb uses too much space in the PK for concurrent inserts into the same table". As Mark Callaghan put it:
    "I am not sure my reproduction details will ever satisfy Sinisa but I don't mind if you don't fix this because I care more about MyRocks today and this bug makes MyRocks look better."We (Facebook's MySQL, MariaDB and Percona server users) do have MyRocks, but why poor Oracle MySQL users should suffer? Let's hope Sinisa Milivojevic will process the bug fast, with all the details clarified there :)
  • Bug #88791 - "Binary log for generated column contains new value as WHERE clause, not old value". Generated columns and binary logging: what could go wrong?
  • Bug #88764 - ""ALTER TABLE MODIFY..." takes time even if leaving table as is". None of the simple test cases that come to my mind reproduce this problem, but I feel there is some potential as soon as more exotic cases like partitioning or data directory settings are considered. Let's wait for the bug reporter to clarify.
  • Bug #88720 - "Inconsistent and unsafe FLUSH behavior in terms of replication". Nice summary of problems from Przemyslaw Malkowski. One more reason for me to hate GTIDs, honestly.
  • Bug #88694 - "MySQL accepts wildcard for database name for table level grant but won't use it". One more problem with privileges reported by Daniël van Eeden.
  • Bug #88674  - "Regression CREATE TBL from 5.7.17 to 20 (part #2: innodb_file_per_table = OFF)." and Bug #88673 - "Regression CREATE TBL from 5.7.17 to 20 (part #1: innodb_file_per_table = ON)." - these two bugs were reported by Jean-François Gagné and clearly show some things that are done wrong by Oracle when fixing (private, "security") bugs...
  • Bug #88671 - "ALL + BNL chosen over REF in query plan with a simple SELECT + JOIN". In this case optimizer (probably) does not take costs into account properly when choosing block nested loop join vs usual "ref" index access. Maybe just a matter of missing/wrong statistics also. It would be interesting to find out eventually.
  • Bug #88666 - "I_S FILES : all rows are displayed whatever the user privileges". Yet another bug report from Jocelyn Fournier. I am actually surprised with a number of bugs related to privileges that I notice recently.
  • Bug #88633 - "Auto_increment value on a table is less than max(id) can happen". It seems only MySQL 5.7.x is affected, but not 5.6.x.
  • Bug #88623 - "Modifying virtually generated column is not online". Maybe this is by design, but it is still surprising.
  • Bug #88534 - "XA may lost prepared transaction and cause different between master and slave". XA transactions with MySQL are still a sure way to disaster, I think. See also Bug #87526, which should appear in the next part of this series...
Stay tuned to more posts about MySQL bugs from me in the New Year of 2018!
Categories: Web Technologies

Generated Columns and ProxySQL Instead of Referenced Tables

Planet MySQL - Tue, 01/02/2018 - 11:28

In this post, we’ll look at how to improve queries using generated columns and ProxySQL instead of implementing a referenced table.

Developers and architects don’t always have the time or complete information to properly analyze and design a database. That is why we see tables with more fields than needed, or with incorrect types. The best solution is implementing a change in the database schema and/or application level. In this post, we’ll look at an example of generated columns (using a char field) instead of creating a referenced table, and how using generated columns and ProxySQL avoids changes at the application level.

For this example, I will be using the film table of the Sakila database (with some changes). The original film table had a language_id as tinyint, which refers to the language table:

CREATE TABLE film (
  film_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
  title VARCHAR(255) NOT NULL,
  description TEXT DEFAULT NULL,
  release_year YEAR DEFAULT NULL,
  language_id TINYINT UNSIGNED NOT NULL,
  original_language_id TINYINT UNSIGNED DEFAULT NULL,
  .....
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

I simplified the design of the table and added the language field as a char(20), as it is in table language:

CREATE TABLE `film` (
  `film_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `title` varchar(255) NOT NULL,
  `description` text,
  `release_year` year(4) DEFAULT NULL,
  `language` char(20) NOT NULL,
  `rating` enum('G','PG','PG-13','R','NC-17') DEFAULT 'G',
  PRIMARY KEY (`film_id`),
  KEY `idx_title` (`title`)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8

We inserted 1M records and executed a query that filters by language:

mysql> select count(*) from film where language ='Mandarin';
+----------+
| count(*) |
+----------+
|   256000 |
+----------+
1 row in set (0.92 sec)

mysql> explain select count(*) from film where language ='Mandarin';
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows    | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+-------------+
|  1 | SIMPLE      | film  | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 1014813 |    10.00 | Using where |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+-------------+
1 row in set, 1 warning (0.02 sec)

The explain plan shows that it is performing a full table scan. Adding an index over language speeds up the query from 0.92 sec to 0.09 sec:

mysql> ALTER TABLE film ADD INDEX (language);
Query OK, 0 rows affected (4.87 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select count(*) from film where language ='Mandarin';
+----------+
| count(*) |
+----------+
|   256000 |
+----------+
1 row in set (0.09 sec)

And the explain plan shows that it is using the language index:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: new_film
   partitions: NULL
         type: ref
possible_keys: language
          key: language
      key_len: 60
          ref: const
         rows: 245136
     filtered: 100.00
        Extra: Using index

Now, we are going to add a generated column and an index over this new field:

ALTER TABLE film ADD COLUMN language_id tinyint GENERATED ALWAYS AS (
  CASE language
    WHEN 'English'  THEN 1
    WHEN 'Italian'  THEN 2
    WHEN 'Japanese' THEN 3
    WHEN 'Mandarin' THEN 4
    ELSE 0
  END
) VIRTUAL;

ALTER TABLE film ADD INDEX (language_id);

Finally, we are going to add a rule in ProxySQL with the replace pattern:

replace into mysql_query_rules (rule_id,match_pattern,replace_pattern,apply,active)
values (3,
  "(.*)FROM(.*)film(.*)WHERE(.*)language ='(.*)'(.*)",
  "\1 FROM \2 film \3 WHERE \4 language_id = ( CASE '\5' WHEN 'English' THEN 1 WHEN 'Italian' THEN 2 WHEN 'Japanese' THEN 3 WHEN 'Mandarin' THEN 4 ELSE 0 END ) \6",
  1, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
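Since the rule is just a regular-expression rewrite, you can sanity-check the match/replace pair before loading it into ProxySQL. Here is a rough JavaScript sketch of the same substitution (using JavaScript's $1…$6 backreference syntax in place of ProxySQL's \1…\6); this only mimics the regex logic, not ProxySQL itself:

```javascript
// The match pattern from the ProxySQL rule, written as a JS regex.
const matchPattern = /(.*)FROM(.*)film(.*)WHERE(.*)language ='(.*)'(.*)/;

// The replace pattern; JS uses $1..$6 where ProxySQL uses \1..\6.
const replacePattern =
  "$1 FROM $2 film $3 WHERE $4 language_id = ( CASE '$5' " +
  "WHEN 'English' THEN 1 WHEN 'Italian' THEN 2 " +
  "WHEN 'Japanese' THEN 3 WHEN 'Mandarin' THEN 4 ELSE 0 END ) $6";

const original = "SELECT count(*) FROM film WHERE language ='Mandarin';";
const rewritten = original.replace(matchPattern, replacePattern);

// The rewritten query now filters on the indexed language_id column.
console.log(rewritten);
```

If the rewritten string still contains the original `language ='...'` predicate, the match pattern did not cover the query and the rule needs adjusting.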

Now, when we execute the query through ProxySQL, it will replace:

SELECT count(*) FROM film WHERE language ='Mandarin';

With:

SELECT count(*) FROM film
WHERE language_id = ( CASE 'Mandarin'
  WHEN 'English'  THEN 1
  WHEN 'Italian'  THEN 2
  WHEN 'Japanese' THEN 3
  WHEN 'Mandarin' THEN 4
  ELSE 0 END )

The explain plan shows it is using the language_id index:

mysql> explain SELECT count(*) FROM film WHERE language ='Mandarin'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: film
   partitions: NULL
         type: ref
possible_keys: language_id
          key: language_id
      key_len: 2
          ref: const
         rows: 507142
     filtered: 100.00
        Extra: Using index

And that the query time decreases to 0.06 sec:

mysql> SELECT count(*) FROM film WHERE language ='Mandarin';
+----------+
| count(*) |
+----------+
|   256000 |
+----------+
1 row in set (0.06 sec)

This speed-up (from 0.09 to 0.06 sec) on the SELECT is because it uses a smaller index, as you can see in the key_len of the explain plan. Performing a range scan over entries of 64 bytes is slower than performing a range scan over entries of 8 bytes (there is less data to read). For instance, the index_length with just the language index is 36274176 (34.6MB), while with the language_id index it is 14172160 (13.5MB).

Conclusion

Generated columns are not a long-term solution. The long-term solution is to add a referenced table, or to have the application use one internally. However, I find this approach particularly useful and practical in two scenarios:

  1. When database performance is dropping and a change at the application level is not a priority for the business. This effort can be implemented by the DBA team without any other intervention.
  2. When you must show the performance improvement that the change will make before modifying anything at the application level.

On the cons side, you need to install ProxySQL (if you are just testing, you can install it on the test server alone), which adds a new and more complex layer to your infrastructure (you also need at least MySQL 5.7). Another thing to take into account is that writes to the table will be a bit slower because of the new generated column and index.

Categories: Web Technologies

Considering Typehints As Communication - Paul M. Jones

Planet PHP - Tue, 01/02/2018 - 07:51

Typehints help communicate across time and space, to people who may never meet you or who might not be able to interrogate you about your code, so those people can understand how you expect the code to work.

Adding typehints is a succinct, more-complete form of communication than not adding them. (It is rare, perhaps impossible, for all communication to be fully complete all the time.)

Further, you don’t know in advance which parts of the codebase are going to last for a long time, and which are going to be replaced in relatively short order. It’s probably better to add the typehints when you know what they are, rather than to wait and see if you’ll “need” them later.

Typehints can be considered low-cost mistake-proofing against misunderstanding in an obvious place (i.e., where they are used), without having to look elsewhere (“just read the tests!” [groan]).

Categories: Web Technologies

Additive Animation with the Web Animations API

CSS-Tricks - Tue, 01/02/2018 - 07:37

These features have not landed in any stable browsers at the time of writing. However, everything discussed is already in Firefox Nightly by default and key parts are in Chrome Canary (with the Experimental Web Platform Features flag enabled), so I recommend using one of those browsers (when reading this article) to see as many of the features in action as possible.

Regardless of your preferred method of animation on the web, there will be times when you need to animate the same property in separate animations. Maybe you have a hover effect that scales an image and a click event that triggers a translate, both affecting the transform. By default, those animations do not know anything about each other, and only one will be visually applied (since they are affecting the same CSS property, the other value will be overridden).

element.animate({ transform: ['translateY(0)', 'translateY(10px)'] }, 1000);

/* This will completely override the previous animation */
element.animate({ transform: ['scale(1)', 'scale(1.15)'] }, 1500);

The second animation in this Web Animations API example is the only one that would be visually rendered, since both animations play at the same time and it was the last one defined.

Sometimes we even have grander ideas where we want a foundational animation and then based on some user interaction change in state we smoothly modify the animation a bit midway without affecting its existing duration, keyframes, or easing. CSS Animations and the current Web Animations API in stable browsers today cannot do this out of the box.

A New Option

The Web Animations specification introduces the composite property (and the related iterationComposite). The default composite is 'replace' and has the behavior we have had for years now where an actively animating property's value simply replaces any previously set value — either from a rule set or another animation.

The 'add' value is where things change from the previous norms.

element.animate({ transform: ['scale(1)', 'scale(1.5)'] }, {
  duration: 1000,
  fill: 'both'
});

element.animate({ transform: ['rotate(0deg)', 'rotate(180deg)'] }, {
  duration: 1500,
  fill: 'both',
  composite: 'add'
});

Now both animations will be seen as the browser figures out, on the fly, the appropriate transformation at a given point in the element's timeline, accounting for both transformations. In our examples, the easing is 'linear' by default and the animations start at the same time, so we can break out what the effective transform is at any given point. Such as:

  • 0ms: scale(1) rotate(0deg)
  • 500ms: scale(1.25) rotate(60deg) (halfway through first animation, 1/3 through second)
  • 1000ms: scale(1.5) rotate(120deg) (end of first, 2/3 through second)
  • 1500ms: scale(1.5) rotate(180deg) (end of second)
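With linear easing and simultaneous start times, each additive animation's contribution is plain interpolation, so the values above can be reproduced with a small sketch (an illustration of the math only, not the browser's actual implementation):

```javascript
// Linear interpolation of one animation's value at time t (ms),
// clamped to its end value once the animation is over (fill: 'both').
function valueAt(t, { from, to, duration }) {
  const progress = Math.min(Math.max(t / duration, 0), 1);
  return from + (to - from) * progress;
}

const scaleAnim  = { from: 1, to: 1.5, duration: 1000 };
const rotateAnim = { from: 0, to: 180, duration: 1500 };

// Effective transform at a given point on the element's timeline.
function effectiveTransform(t) {
  return `scale(${valueAt(t, scaleAnim)}) rotate(${valueAt(t, rotateAnim)}deg)`;
}

console.log(effectiveTransform(500)); // halfway through the scale, 1/3 through the rotate
```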

See the Pen Animation Composite by Dan Wilson (@danwilson) on CodePen.

So Let's Get Creative

An individual animation does not just consist of a start state and end state — it can have its own easing, iteration count, duration, and more keyframes in the middle. While an element is mid animation you can throw an additional transformation on it with its own timing options.

See the Pen Add more transform animations by Dan Wilson (@danwilson) on CodePen.

This example lets you apply multiple animations on the same element, all affecting the transform property. To keep from going all out in this example, we limit each animation to a single transformation function at a time (such as only a scale), starting at a default value (such as scale(1) or translateX(0)), and ending at a reasonable random value on that same transformation function, repeated infinitely. The next animation will affect another single function with its own randomized duration and easing.

element.animate(getTransform(), // e.g. { transform: ['rotate(0deg)', 'rotate(45deg)'] }
{
  duration: getDuration(), // between 1000 and 6000ms
  iterations: Infinity,
  composite: 'add',
  easing: getEasing() // one of two options
});

When each animation starts, the browser will effectively find where it is in its previously applied animations and start a new rotation animation with the specified timing options. Even if there is already a rotation going in the opposite direction, the browser will do the math to figure out how much a rotation needs to happen.
Since each animation has its own timing options, you are unlikely to see the exact same motion repeated in this example once you have added a few. This gives the animation a fresh feel as you watch it.

Since each animation in our example starts at the default value (0 for translations and 1 for scaling) we get a smooth start. If we instead had keyframes such as { transform: ['scale(.5)', 'scale(.8)'] } we would get a jump, because the element didn't have this scale before and all of a sudden starts its animation at half scale.

How are values added?

Transformation values follow the syntax of <transform-list> in the spec, and if you add a transformation you are appending to a list.

For transform animations A, B, and C the resulting computed transform value will be [current value in A] [current value in B] [current value in C]. For example, assume the following three animations:

element.animate({ transform: ['translateX(0)', 'translateX(10px)'] }, 1000);
element.animate({ transform: ['translateY(0)', 'translateY(-20px)'] }, { duration: 1000, composite: 'add' });
element.animate({ transform: ['translateX(0)', 'translateX(300px)'] }, { duration: 1000, composite: 'add' });

Each animation runs for 1 second with a linear easing, so halfway through the animations the resulting transform would have the value translateX(5px) translateY(-10px) translateX(150px). Easings, durations, delays, and more will all affect the value as you go along.
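This list building is easy to verify with a few lines of JavaScript. The sketch below interpolates each animation independently and joins the results into a transform list, in definition order (an approximation of what the browser computes, assuming linear easing and simultaneous starts):

```javascript
// Each additive animation contributes its interpolated value, in order.
const animations = [
  { fn: "translateX", from: 0, to: 10,  unit: "px", duration: 1000 },
  { fn: "translateY", from: 0, to: -20, unit: "px", duration: 1000 },
  { fn: "translateX", from: 0, to: 300, unit: "px", duration: 1000 },
];

function combinedTransform(t) {
  return animations
    .map(({ fn, from, to, unit, duration }) => {
      const progress = Math.min(Math.max(t / duration, 0), 1);
      return `${fn}(${from + (to - from) * progress}${unit})`;
    })
    .join(" ");
}

console.log(combinedTransform(500)); // translateX(5px) translateY(-10px) translateX(150px)
```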

Transforms are not the only thing we can animate, however. Filters (hue-rotate(), blur(), etc) follow a similar pattern where the items are appended to a filter list.

Some properties use a number as a value, such as opacity. Here the numbers will add up to a single sum.

element.animate({ opacity: [0, .1] }, 1000);
element.animate({ opacity: [0, .2] }, { duration: 1000, composite: 'add' });
element.animate({ opacity: [0, .4] }, { duration: 1000, composite: 'add' });

Since each animation again is 1s in duration with a linear easing, we can calculate the resulting value at any point in that animation.

  • 0ms: opacity: 0 (0 + 0 + 0)
  • 500ms: opacity: .35 (.05 + .1 + .2)
  • 1000ms: opacity: .7 (.1 + .2 + .4)
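The same arithmetic can be sketched for opacity: each animation contributes its interpolated number, and the contributions are summed (again assuming linear easing and simultaneous starts; this is only a model of the compositing math):

```javascript
// Each additive opacity animation contributes a number; the sum is the result.
const opacityAnims = [
  { from: 0, to: 0.1, duration: 1000 },
  { from: 0, to: 0.2, duration: 1000 },
  { from: 0, to: 0.4, duration: 1000 },
];

function combinedOpacity(t) {
  return opacityAnims.reduce((sum, { from, to, duration }) => {
    const progress = Math.min(Math.max(t / duration, 0), 1);
    return sum + from + (to - from) * progress;
  }, 0);
}

console.log(combinedOpacity(500)); // ~0.35
```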

As such, you won't be seeing much if you have several animations that include the value 1 as a keyframe. That is a max value for its visual state, so adding up to values beyond that will look the same as if it were just a 1.

See the Pen Add more opacity animations by Dan Wilson (@danwilson) on CodePen.

Similar to opacity and other properties that accept number values, properties that accept lengths, percentages, or colors will also sum to a single result value. With colors, you must remember they have max values, too (whether a max of 255 in rgb() or 100% for saturation/lightness in hsl()), so your result could max out at white. With lengths, you can switch between units (such as px to vmin) as though it is inside a calc().

For more details, the specification outlines the different types of animation and how the result is calculated.

Working with Fill Modes

When you are not running an infinite animation (whether you are using a composite or not), by default the animation will not keep its end state once it ends. The fill property allows us to change that behavior. If you want a smooth transition when you add a finite animation, you will likely want a fill mode of either forwards or both to make sure the end state remains.

See the Pen Spiral: Composite Add + Fill Forwards by Dan Wilson (@danwilson) on CodePen.

This example has an animation with a spiral path by specifying a rotation and a translation. There are two buttons that add new one second animations with an additional small translation. Since they specify fill: 'forwards' each additional translation effectively remains part of the transform list. The expanding (or shrinking) spiral adapts smoothly with each translation adjustment because it is an additive animation from translateX(0) to a new amount and remains at that new amount.

Accumulating animations

The new composite option has a third value — 'accumulate'. It is conceptually in line with 'add' except certain types of animations will behave differently. Keeping with our transform, let's start with a new example using 'add' and then discuss how 'accumulate' is different.

element.animate({ transform: ['translateX(0)', 'translateX(20px)'] }, { duration: 1000, composite: 'add' });
element.animate({ transform: ['translateX(0)', 'translateX(30px)'] }, { duration: 1000, composite: 'add' });
element.animate({ transform: ['scale(1)', 'scale(.5)'] }, { duration: 1000, composite: 'add' });

At the 1 second mark (the end of the animations), the effective value will be:

transform: translateX(20px) translateX(30px) scale(.5)

Which will visually push an element to the right 50px and then scale it down to half width and half height.

If each animation had been using 'accumulate' instead, then the result would be:

transform: translateX(50px) scale(.5)

Which will visually push an element to the right 50px and then scale it down to half width and half height.

No need for a double take, the visual results are in fact the exact same — so how is 'accumulate' any different?

Technically when accumulating a transform animation we are no longer always appending to a list. If a transformation function already exists (such as the translateX() in our example) we will not append the value when we start our second animation. Instead, the inner values (i.e. the length values) will be added and placed in the existing function.

If our visual results are the same, why does the option to accumulate inner values exist?

In the case of transform, order of the list of functions matters. The transformation translateX(20px) translateX(30px) scale(.5) is different than translateX(20px) scale(.5) translateX(30px) because each function affects the coordinate system of the functions that follow it. When you do a scale(.5) in the middle, the latter functions will also happen at the half scale. Therefore with this example the translateX(30px) will visually render as a 15px translation to the right.
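The difference between the two composite modes can be modeled as list operations: 'add' appends to the transform list, while 'accumulate' folds a function into an existing entry of the same name. The sketch below shows numeric addition, which is how length-valued functions like translateX() accumulate; per the spec, other function types (such as scale) accumulate by their own rules, which this toy model does not cover:

```javascript
// 'add': the new animation's functions are appended to the list.
function composeAdd(list, extra) {
  return list.concat(extra);
}

// 'accumulate': if the function name already exists, its value is
// combined in place; otherwise the function is appended as usual.
function composeAccumulate(list, extra) {
  const out = list.map(item => ({ ...item }));
  for (const item of extra) {
    const existing = out.find(o => o.fn === item.fn);
    if (existing) existing.value += item.value; // e.g. 20px + 30px -> 50px
    else out.push({ ...item });
  }
  return out;
}

const base = [{ fn: "translateX", value: 20, unit: "px" }];
const next = [
  { fn: "translateX", value: 30, unit: "px" },
  { fn: "scale", value: 0.5, unit: "" },
];

const render = list => list.map(i => `${i.fn}(${i.value}${i.unit})`).join(" ");

console.log(render(composeAdd(base, next)));        // translateX(20px) translateX(30px) scale(0.5)
console.log(render(composeAccumulate(base, next))); // translateX(50px) scale(0.5)
```

As discussed above, the two resulting lists happen to render identically here, but once a scale() sits between two translations the ordering difference becomes visible.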

See the Pen Visual Reference: Transform Coordinate Systems by Dan Wilson (@danwilson) on CodePen.

Therefore, with accumulation we can get an order that is different than when we always append the values to the list.

Accumulating for Each Iteration

I mentioned before that there is also a new related iterationComposite property. It provides the ability to do some of the behaviors we have already discussed except on a single animation from one iteration to the next.

Unlike composite, this property only has two valid values: 'replace' (the default behavior you already know and love) and 'accumulate'. With 'accumulate' values follow the already discussed accumulation process for lists (as with transform) or are added together for number based properties like opacity.

As a starting example, the visual result for the following two animations would be identical:

intervals.animate([{
  transform: `rotate(0deg) translateX(0vmin)`,
  opacity: 0
}, {
  transform: `rotate(50deg) translateX(2vmin)`,
  opacity: .5
}], {
  duration: 2000,
  iterations: 2,
  fill: 'forwards',
  iterationComposite: 'accumulate'
});

intervals2.animate([{
  transform: `rotate(0deg) translateX(0vmin)`,
  opacity: 0
}, {
  transform: `rotate(100deg) translateX(4vmin)`,
  opacity: 1
}], {
  duration: 4000,
  iterations: 1,
  fill: 'forwards',
  iterationComposite: 'replace' // default value
});

The first animation is only bumping up its opacity by .5, rotating 50 degrees, and moving 2vmin for 2000 milliseconds. It has our new iterationComposite value and is set to run for 2 iterations. Therefore, when the animation ends, it will have run for 2 * 2000ms and reached an opacity of 1 (2 * .5), rotated 100 degrees (2 * 50deg) and translated 4vmin (2 * 2vmin).
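A sketch of that accumulation arithmetic (linear easing only, ignoring easing curves and fill clamping) shows how each iteration stacks on the end value of the previous one:

```javascript
// Value of a property at time t (ms) when iterationComposite is
// 'accumulate': completed iterations each contribute (to - from),
// plus the interpolated position inside the current iteration.
function accumulatedValueAt(t, { from, to, duration }) {
  const iteration = Math.floor(t / duration);  // completed iterations
  const progress = (t % duration) / duration;  // position in the current one
  return iteration * (to - from) + from + (to - from) * progress;
}

// The first animation above: 0 -> 50deg per 2000ms iteration.
console.log(accumulatedValueAt(4000, { from: 0, to: 50, duration: 2000 }));  // 100 (degrees)
// Its opacity: 0 -> .5 per iteration, reaching 1 after two iterations.
console.log(accumulatedValueAt(4000, { from: 0, to: 0.5, duration: 2000 })); // 1
```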

See the Pen Spiral with WAAPI iterationComposite by Dan Wilson (@danwilson) on CodePen.

Great! We just used a new property that is supported in only Firefox Nightly to recreate what we can already do with the Web Animations API (or CSS)!
The more interesting aspects of iterationComposite come into play when you combine it with other items in the Web Animations spec that are coming soon (and also already in Firefox Nightly).

Setting New Effect Options

The Web Animations API as it stands in stable browsers today is largely on par with CSS Animations with some added niceties like a playbackRate option and the ability to jump/seek to different points. However, the Animation object is gaining the ability to update the effect and timing options on already running animations.

See the Pen WAAPI iterationComposite & composite by Dan Wilson (@danwilson) on CodePen.

Here we have an element with two animations affecting the transform property and relying on composite: 'add' — one that makes the element move across the screen horizontally and one moving it vertically in a staggered manner. The end state is a little higher on the screen than the start state of this second animation, and with iterationComposite: 'accumulate' it keeps getting higher and higher. After eight iterations the animation finishes and reverses itself for another eight iterations back down to the bottom of the screen where the process begins again.

We can change how far up the screen the animation goes by changing the number of iterations on the fly. These animations are playing indefinitely, but you can change the dropdown to a different iteration count in the middle of the animation. If you are, for example, going from seven iterations to nine and you are seeing the sixth iteration currently, your animation keeps running as though nothing has changed. However, you will see that instead of starting a reverse after that next (seventh) iteration, it will continue for two more. You can also swap in new keyframes, and the animation timing will remain unchanged.

animation.effect.timing.iterations = 4;

animation.effect.setKeyframes([
  { transform: 'scale(1)' },
  { transform: 'scale(1.2)' }
]);

Modifying animations midway may not be something you will use every day, but since it is something new at the browser level, we will be learning its possibilities as the functionality becomes more widely available. Changing iteration counts could be handy for a game where a user gets a bonus round and gameplay continues longer than originally intended. Different keyframes can make sense when a user goes from an error state to a success state.

Where do we go from here?

The new composite options and the ability to change timing options and keyframes open new doors for reactive and choreographed animations. There is also an ongoing discussion in the CSS Working Group about adding this functionality to CSS, even beyond the context of animations — affecting the cascade in a new way. We have time before any of this will land in a stable major browser, but it is exciting to see new options coming and even more exciting to be able to experiment with them today.

Additive Animation with the Web Animations API is a post from CSS-Tricks

Categories: Web Technologies

Premier Open Source Database Conference Call for Papers closing January 12 2018

Planet MySQL - Tue, 01/02/2018 - 02:21

The call for papers for Percona Live Santa Clara 2018 has been extended until January 12, 2018. This means you still have time to get a submission in.

Topics of interest: MySQL, MongoDB, PostgreSQL & other open source databases. Don’t forget all the upcoming databases too (there’s a long list at db-engines).

I think to be fair, in the catch-all “other”, we should also be thinking a lot about things like containerisation (Docker), Kubernetes, Mesosphere, the cloud (Amazon AWS RDS, Microsoft Azure, Google Cloud SQL, etc.), analytics (ClickHouse, MariaDB ColumnStore), and a lot more. Basically anything that would benefit an audience of database geeks who are looking at it from all aspects.

That’s not to say case studies shouldn’t be considered. People always love to hear about stories from the trenches. This is your chance to talk about just that.

Categories: Web Technologies

My 2018 Databases Wishlist

Planet MySQL - Mon, 01/01/2018 - 16:36

Well, the most important wishes I have for 2018 are a bit out of topic for this blog: forms of organisation without a formal authority, schools not teaching religions, and so on. But in this post, I will write about databases… as usual.

So, here is my wishlist, for what it matters.

More research on Learned Indexes

If you don’t know what I’m talking about, see this paper. Having a data structure faster than B-Trees is exciting. But of course I’d also like to see consideration of write performance.
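The core idea is simple enough to sketch in a few lines. The version below is a heavily simplified, hypothetical single-model index (the paper itself uses a staged "recursive model index" with multiple learned models): fit a linear model mapping a key to its position in a sorted array, record the worst prediction error seen during training, and correct each lookup with a search bounded by that error.

```python
# Minimal "learned index" sketch: a linear model predicts a key's position
# in a sorted array, and a bounded local search corrects the prediction.
# Illustrative only -- the paper uses staged (recursive) models.
import bisect

class LearnedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Least-squares fit of position ~ slope * key + intercept
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in self.keys) or 1
        self.slope = sum((k - mean_k) * (p - mean_p)
                         for p, k in enumerate(self.keys)) / var
        self.intercept = mean_p - self.slope * mean_k
        # Worst training error bounds the search window at lookup time
        self.err = max(abs(self._predict(k) - p)
                       for p, k in enumerate(self.keys))

    def _predict(self, key):
        return int(round(self.slope * key + self.intercept))

    def lookup(self, key):
        guess = self._predict(key)
        lo = max(0, guess - self.err)
        hi = min(len(self.keys), guess + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = LearnedIndex(range(0, 1000, 2))  # even numbers 0..998
print(idx.lookup(42))   # -> 21
print(idx.lookup(43))   # -> None
```

Because the search window is bounded by the model's maximum training error, lookups stay correct even when the fit is imperfect; on key distributions that a line approximates well, the window stays tiny.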

Progress on using ML for database tuning

See this article. I don’t think Machine Learning will ever be able to replace (good) DBAs, but having a tool that suggests tuning based on the real workload sounds great. It could be a valuable help for DBAs.

More research on stored functions transformation

Stored functions are useful but slow. But see this paper: it seems it is possible to transform imperative programs into queries, improving their complexity by some orders of magnitude.
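To illustrate the kind of rewrite the paper is after, here is a toy sketch using SQLite as a stand-in for the server (the table and function names are made up): the same aggregation expressed first as a row-by-row imperative loop, the way a naive stored function executes, and then as one declarative query the optimizer can plan as a whole.

```python
# Imperative-to-query sketch: a per-row loop versus a single set-based
# query computing the same result. SQLite stands in for the server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, float(i)) for i in range(1, 101)])

# Imperative style: one query per row, logic in application code
def total_imperative():
    total = 0.0
    for (oid,) in conn.execute("SELECT id FROM orders"):
        (amt,) = conn.execute(
            "SELECT amount FROM orders WHERE id = ?", (oid,)).fetchone()
        if amt > 50:
            total += amt
    return total

# Declarative equivalent: filter and sum pushed into the engine
def total_declarative():
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE amount > 50"
    ).fetchone()
    return total

print(total_imperative() == total_declarative())  # -> True
```

The second form avoids per-row round trips and lets the engine choose indexes and parallelism; automating exactly this kind of rewrite is what the paper describes.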

On a side note, MariaDB implemented a lot of Oracle syntax for stored procedures. While this sounds like a deliberate commercial strategy, the technical improvement in this area is great. Still, what I’d like to see is better performance, as well as support for external languages.

Galera 4

Let me be clear: I haven’t read any announcement that Galera 4 will be released this year. But they have announced exciting news over time, and still the new version isn’t here. At some point, it should be released (hopefully).

Transactional DDL in the MySQL ecosystem

MySQL 8.0 has support for atomic DDL statements. They did it in a good way: it’s engine independent and, while it uses InnoDB information_schema tables, any engine is free to add support for this feature. They stated that this is the basis for transactional DDL, but we are not yet there. MariaDB has a task for transactional DDL.

EDIT: Thanks to Valerii Kravchuk for pointing me MDEV-11424 – Instant ALTER TABLE of failure-free record format changes. It is clearly worth adding it to my wishlist: please Maria, get it done!
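As a concrete illustration of what transactional (not just atomic) DDL buys you, here is a sketch using SQLite, which already supports it, rather than MySQL or MariaDB, where it remains on the wishlist: a CREATE TABLE inside a rolled-back transaction simply disappears.

```python
# Transactional DDL demonstrated with SQLite: DDL participates in the
# transaction, so a rollback undoes the CREATE TABLE entirely.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly

def table_exists(name):
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?",
        (name,)).fetchone()
    return row is not None

conn.execute("BEGIN")
conn.execute("CREATE TABLE t1 (id INTEGER)")
print(table_exists("t1"))  # -> True (visible inside the transaction)
conn.execute("ROLLBACK")
print(table_exists("t1"))  # -> False (the DDL was rolled back)
```

MySQL 8.0's atomic DDL makes each individual statement crash-safe, but grouping several DDL statements into one all-or-nothing transaction, as above, is the part still missing.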

Engines, engines, engines

RocksDB is great, please consolidate it. TokuDB can improve in many ways, please don’t stop investing in it. The next version of SPIDER will be in MariaDB 10.3; I hope that the development will be a bit more active in the future.

Don’t kill MyISAM. It is still useful in some cases. For Catawiki’s use cases, I find it better than InnoDB for temporary tables. Also, JFG has a great use case example.

More progress on Tarantool and CockroachDB

Tarantool is a great database, originally NoSQL. It is extremely scriptable (actually, it can be seen as a Lua application server), and its modules allow reading and writing data from a wide variety of data sources, including MySQL replication. Recently, SQL support has been added.

CockroachDB is an open source RDBMS designed to scale geographically. It uses distributed transactions. It also allows tuning the redundancy of data at the table level and defining replication zones.

Great conferences

I will be both at M18 (I’m not sponsored by my company, but I chose to go anyway) and Percona Live. At M18 I will give a talk titled Somewhere between schema and schemaless. Of course I also submitted a proposal for Percona Live; let’s see if it gets accepted.

Categories: Web Technologies

Thank You (2017 Edition)

CSS-Tricks - Mon, 01/01/2018 - 08:12

As 2017 comes to a close, as we do each year, let's take a numbers-based glance back at CSS-Tricks. And more importantly, tip our collective hat to all y'all that come here and read the site.

We really do think of the site as somewhere you come and read. While a lot of people's experience with CSS-Tricks is a place that shows up in search results to find the answer to some web design/dev question (and that's awesome), another way to experience the site is to read it like a magazine. We publish an article (or a couple) nearly every day, from a variety of authors, with the hope that it's interesting and exposes us all to new ideas.

According to Google Analytics, which we've had installed and reporting anonymously since day 1 around here, we had 75 million pageviews this year. Very little of CSS-Tricks makes use of any kind of "single page app" tech, so that's a pretty accurate count of people visiting and clicking around. It's down from 77 million last year. I'd like to think that's because ad blockers, which often block Google Analytics, are up in usage among CSS-Tricks visitors, but it's just as likely we're simply down a smidge this year. Sessions are also down a smidge at 54 million, but Users held steady at 21 million.

We were on a publishing roll though! We published 595 posts, blowing away last year's 442, the previous record. We also published 50 pages (i.e. snippets/videos/almanac entries), beating last year's 43. Certainly, we're in favor of quality over quantity, but I think this is a healthy publishing pace when our goal is to be read, in a sense, like a magazine. That has become clearer to me this year. We produce content with complementary goals, and one of those goals is that of a magazine. We hope the site is worth reading day after day and week after week. The other is that the content lasts and is referenceable for many years to come. Hopefully thinking of things that way can be a guiding light, balancing news and reference content while avoiding stuff that is neither.

I always wished there was an easy way to figure out what the most popular articles published that year were, but I still don't have a great way to do that. The top 10 is dominated by our big guides, things like our Guides to Grid, Flexbox, SVG, and centering.

Those top 10 make up about 15% of total traffic, which is a massive slice, but that leaves 85% of traffic as part of the "long tail". That's a satisfying thought when you're in it for the long haul as we are. Not every article is a huge top 10 smash hit, but each contributes to the long tail, which is a much bigger slice collectively anyway.

For the last bunch of months, we've been using Algolia for search. My plan only has 7 days of analytics retention, which isn't enough data to expose major trends. In looking at a week of data though, you can see some surprising top terms like React, SVG, grid, flexbox, font, border... Another thing that's clear is that on-site search is quite important. Algolia reports ~15,000 queries a day. I don't think that represents "user typed in a thing and submitted it" because the search happens in real-time, so each character typed can result in a query. Still, likely hundreds or low-thousands of searches a day.

I'm going to convert us back to using Google search. I think Algolia is a fantastic product, I just don't have the developer time right now to give it the work it needs.

The location of y'all seems to be spreading out internationally little by little. The United States is down to 22% of traffic from 23%, and India back down to 11% from 12% (meaning more traffic came from elsewhere). Those are the top 2; then it goes UK, Germany, Canada, France, Netherlands, Poland, Russia, Brazil. What would be really interesting is to figure out visitors per capita. For example, Singapore has a population of 5.6 million and had 111,319 unique users, so you could say about 2% of Singaporeans visited CSS-Tricks last year. Lols, probably not, but hey, that's what the back-of-the-napkin math shows. Whereas the 4.6 million unique visitors from the US, against a population of 323 million, means only about 1.4% visited.
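That back-of-the-napkin math is easy to reproduce; here is a quick sketch using the figures quoted in the post:

```python
# Per-capita reach: unique visitors as a percentage of population,
# using the numbers quoted in the post.
def reach(unique_users, population):
    return 100 * unique_users / population

print(f"Singapore: {reach(111_319, 5_600_000):.1f}%")    # -> 2.0%
print(f"US:        {reach(4_600_000, 323_000_000):.1f}%")  # -> 1.4%
```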

We gained about 10,000 newsletter subscribers this year for a total of 31,376. That's about a third of the entire list size. I love our newsletter. I think it has a ton of potential and is always worth reading. To be perfectly honest I'd like to see our newsletter subscriber numbers be higher than our Twitter followers, but that'll be a tall hill to climb.

Search is the origin of 86.6% of the traffic we get. Direct visits and referral links result in another 5% each. Social media just 2.5%. Whenever I look at that I'm reminded of the disproportionate amount of energy spent on it. Still, it aligns with our goal of being a publication people regularly read and being a source of news, so it feels worth it.

Speaking of social media, we gained 44,000 followers on Twitter last year, again an astonishing number, but it's down year-over-year for the last several years. 71,900 likes on Facebook, up only about 3,000, which isn't bad considering we do hardly anything at all on Facebook. Growth is much more explosive on YouTube: we're at 40,123 subscribers there, up from 32,174 last year, despite only posting a paltry 6 videos.

This is a classic loop in my head: maybe we should have a dedicated social media person! But even part-time employees are expensive... is it worth it? How much more potential is there? Could they add so much value they pay for themselves? What else could they do? And then the questions swirl around in my head so quickly the idea fizzles out. I guess I'll just say if that kind of thing interests you, reach out!

For once, mobile traffic is actually up: 6.2% this year from below 5% last year. Industry-wide, that's rock bottom. CSS-Tricks is just weird that way. A lot of developers searching stuff at work, unsurprisingly. Less than 1% is tablets. 30% of the mobile traffic is iPhones.

Y'all left about 5,040 comments on the site this year, which is a smidge down year over year from the past few years, but it actually feels significantly up to me, since we've been much more stringent about what comments we publish this year. I do the vast majority of comment moderation and I trash a lot more than I used to. Anything that's off-topic, rude, or unhelpful doesn't see the light of day. I hope that doesn't scare you off from commenting. In fact, I hope it encourages it. Anything on-topic and helpful will absolutely be published and won't be lost in a sea of junk.

We've had 20,238 people use our contact form. Don't worry, I love email.

Goal Review

Double newsletter subscribers. We didn't double but we grew by a third, which is pretty strong. Especially since we did little to nothing to promote it. We probably need to do a better job of promoting it and somehow incentivizing signups, especially since it's such a good way to reach people.

More pairing videos. Pretty hard fail here. The main difficulty is scheduling multiple people together, combined with the pressure of producing something worth watching. It's one thing for an audio podcast like ShopTalk where we can schedule someone and just chit-chat about tech. It's another thing to ask someone to pair with you and essentially do live coding. It's still a good idea, it just needs a more serious time commitment and leader. And honestly, probably a sponsor so that it can be worth everyone's time.

Maintain a mostly-daily publishing schedule. Check and check! This is the first year we've actually kept an honest to god schedule, structured after daily posting. We'll be moving forward with that for sure.

Assemble content in more useful ways. We got a good start on this with Guides. We haven't done a ton with them yet, but we have given ourselves a way to build these without too much effort, and I think the potential in them is fantastic.

New Goals

Publish something in a new format. We have a lot of publishing power around here with lots of writers and a solid platform. Let's use it to publish a book or something book-like.

More editorial vision. Admittedly, what we publish each day is a bit random. That's not a terrible thing since the site has a niche anyway, but I'd call it a good goal to exert some editorial guidance over what we publish and when. Meaning commissioning and shepherding work that is a good fit for this site and publishing it with timing that makes some sort of sense.

Interesting sponsorship partners. The most rewarding and powerful partnerships between sponsors and publications are ones of mutual respect and admiration. There are loads of businesses out there I think are doing a terrific job of building what they build, and I'd like to forge relationships with them to promote what they do. And do that promotion in a way that we are uniquely able to do. Get in touch if you think you're a part of such a company.

Create another very popular page. It's just a matter of time and topic. I'd like to find at least one web development topic that could really use a strong reference page and spend a good amount of time making what we consider a perfect fit for that, with the goal of it really resonating with developers.

Most importantly

Thank you, again, for being a part of this community.

See the Pen Text Animation with CSS - thank you screen! by Nooray Yemon (@yemon) on CodePen.

Thank You (2017 Edition) is a post from CSS-Tricks

Categories: Web Technologies

What You Build

CSS-Tricks - Sat, 12/30/2017 - 07:26

I tweeted this yesterday and it seemed to resonate with some folks:

Just a little reminder that it’s about 100 times more important what you build than how you build it.

— Chris Coyier (@chriscoyier) December 10, 2017

What I was feeling when I wrote that was a little tired of endless discussions on tech minutia and yearning for more focus on what we are building and discussion about why.

If you're a reader of this site, you and I live in the same bubble. It's a nice bubble. It's full of smart people who like to chat about web design and development. I live it and love it.

It's easy to get into a heated discussion about frameworks, what type of class names make the most sense, which optimization techniques are most important, or which part of your code base should be responsible for styling. Those are great discussions that guide our industry.

But what is more important? The naming convention you chose or if your user can actually book a flight? Which state store library you picked or if you actually had the scarf your user was looking for? Which command line tool pulled your dependencies or whether someone was able to find and read the instructions to send in their court appeal?

I was trying to encourage people to build and think about what they are building rather than get too discouraged about the how. You're building things for people and that's such a big responsibility. One that outweighs technical choices, as important as they seem.

I enjoyed the pushback I got on it though.

Most of it centered around the fact that if you make poor tech choices, that limits the quality of what you build and slows your ability to change and adapt to changing user needs. Fair enough.

Good tech just might lead to directly better features and UX for your users. Fair enough. Good tech might be a differentiator between you and your competition. Fair enough.

My favorite was calling out the story of the three little pigs. If you aren't familiar, there is a Big Bad Wolf that is trying to eat the piggies. Each of them built a house to protect themselves. I imagine you can guess which of the pigs did better: the one who built their house out of hay, or the pig who built their house out of bricks.

Fair enough.

Drew McLellan gets into this in All That Glisters, but focuses on the old vs new tech issue:

There are so many new tools, frameworks, techniques, styles and libraries to learn. You know what? You don’t have to use them. You’re not a bad developer if you use Grunt even though others have switched to Gulp or Brunch or Webpack or Banana Sandwich. It’s probably misguided to spend lots of project time messing around with build tool fashions when your so last year build tool is already doing what you need.

And this gem:

Software, much like people, is born with a whole lot of potential and not much utility. Newborns — both digital and meaty — are exciting and cute but they also lead to sleepless nights and pools of vomit.

He goes on to say that what you are building might help dictate your tech choices. Ah yes, the what. Not only is what your thing does literally the only thing people care about, it also helps guide tech choices.

What You Build is a post from CSS-Tricks

Categories: Web Technologies

Percona Database Performance Blog Year in Review: Top Blog Posts

Planet MySQL - Fri, 12/29/2017 - 13:27

Let’s look at some of the most popular Percona Database Performance Blog posts in 2017.

The closing of a year lends itself to looking back. And making lists. With the Percona Database Performance Blog, Percona staff and leadership work hard to provide the open source community with insights, technical support, predictions and metrics around multiple open source database software technologies. We’ve had over three and a half million visits to the blog in 2017: thank you! We look forward to providing you with even better articles, news and information in 2018.

As 2017 moves into 2018, let’s take a quick look back at some of the most popular posts on the blog this year.

Top 10 Most Read

These posts had the most number of views (working down from the highest):

Millions of Queries per Second: PostgreSQL and MySQL’s Peaceful Battle at Today’s Demanding Workloads

Straight-up comparisons of different open source databases’ ability to handle today’s workload levels are always popular. Can open source databases cope with millions of queries per second? Many open source advocates would answer “yes.” However, assertions aren’t enough for well-grounded proof. That’s why in this blog post, Sveta Smirnova (Principal Technical Services Engineer) shared the benchmark testing results from Alexander Korotkov (CEO of Development, Postgres Professional) and herself. The comparative research of PostgreSQL 9.6 and MySQL 5.7 performance is especially valuable for environments with multiple databases.

MySQL vs. MariaDB: Reality Check

Another highly viewed blog is Colin Charles’ (Chief Technology Evangelist) comparison of MariaDB vs. MySQL (including Percona Server for MySQL). The goal of this blog post is to evaluate, at a higher level, MySQL, MariaDB and Percona Server for MySQL side-by-side to better inform the adoption decision making process. It is largely an unofficial response to published comments from the MariaDB Corporation.

Column Store Database Benchmarks: MariaDB ColumnStore vs. Clickhouse vs. Apache Spark

Alexander Rubin’s (Principal Consultant in Architecture & Projects) blog shares some column store database benchmark results, and compares the query performance of MariaDB ColumnStore v. 1.0.7 (based on InfiniDB), Clickhouse and Apache Spark. The purpose of the benchmark is to see how these three solutions work on a single big server, with many CPU cores and large amounts of RAM. Both systems are massively parallel (MPP) database systems, so they should use many cores for SELECT queries.

Dealing with MySQL Error Code 1215: “Cannot add foreign key constraint”

Our Support customers often come to us with things like “My database deployment fails with error 1215”, “Am trying to create a foreign key and can’t get it working” or “Why am I unable to create a constraint?” Marcos Albe (Principal Support Engineer) looks at how to resolve MySQL error code 1215: “Cannot add foreign key constraint.”

ZFS from a MySQL perspective

This post (first of a series) from Yves Trudeau (Principal Consultant) explores and investigates the use of ZFS with MySQL to see when ZFS can help solve real problems. He examines ZFS and tries to draw parallels with the architecture of MySQL to better understand how ZFS works and behaves.

Overview of Different MySQL Replication Solutions

Although information is plentiful, replication is often incompletely understood, or completely misunderstood. This blog post by Dimitri Vanoverbeke (Solution Engineer) reviews some of the MySQL replication concepts that are part of the MySQL environment (and Percona Server for MySQL specifically). He clarifies some of the misconceptions people have about replication.

Quest for Better Replication in MySQL: Galera vs. Group Replication

Thanks to the Galera plugin, created by the Codership team, we’ve had the choice between asynchronous and virtually synchronous replication in the MySQL ecosystem for quite a few years already. Moreover, we can choose between at least three software providers: Codership, MariaDB and Percona, each with its own Galera implementation. The situation recently became much more interesting when MySQL Group Replication went into GA (stable) stage in December 2016.

In this post, Przemysław Malkowski (Principal Support Engineer) points out a couple of interesting differences between Group Replication and Galera, which should be helpful to those considering switching from one to another (or if they are planning to test them).

Using NVMe Command Line Tools to Check NVMe Flash Health

Checking the NVMe flash health from the command line for both older and new drives is fairly straightforward. But what does the health information mean? In this blog post, Peter Zaitsev (CEO) looks at how to check NVMe flash health on both older and new hardware, along with a breakdown of the collected information.

ClickHouse: New Open Source Columnar Database

In this blog post, Alexander Rubin (Principal Consultant in Architecture & Projects) decided to try ClickHouse: an open source column-oriented database management system developed by Yandex (it currently powers Yandex.Metrica, the world’s second-largest web analytics platform). He runs the database using Wikipedia page statistics as a data source and compares the results to a previous test of Apache Spark and MySQL.

Setup ProxySQL for High Availability (not a Single Point of Failure)

Percona has had a lot of opportunities to present and discuss a very powerful tool that is more and more used in the architectures supporting MySQL: ProxySQL. In this blog, Marco Tusa (High Availability Practice Manager) looks at how to set up ProxySQL for high availability.

Honorable Mention

MongoDB 3.4 Views

Views are often used in relational databases to achieve both data security and a high level of abstraction, making it easier to retrieve data. In this blog post, Adamo Tonete (Senior Support Engineer) looks at how to set up views in MongoDB 3.4.

Top 10 Most Commented

These posts generated some healthy discussions (not surprisingly, this list overlaps with the first):

Posts Worth Revisiting

Don’t miss these great posts that have excellent information on important topics:

Have a great end of the year celebration, and we look forward to providing more great blog posts in 2018.

Categories: Web Technologies
