
Web Technologies

Colored app environment in docker shell - Christian Weiske

Planet PHP - Thu, 11/22/2018 - 12:27

When running a shell in a docker container, you only see random hashes as hostname:

$ docker exec -it project_backend_1 bash
root@112adda3eb64:/#

Now imagine having a dozen terminals open, and then running ./vendor/bin/phpunit in container 71f68dcd5379. The first thing the PHPUnit bootstrap script does is empty the database and then run all migrations and seeds.

Unfortunately, you intended to run that command in 112adda3eb64, your local development container. Let's just say that 71f68dcd5379 was not the local dev one, but on a server in a data center, and the data thrown away was kind of important.

Show the environment

To prevent such mistakes in the future, the shell shall clearly show which environment you are in - local development, testing, staging or production.

APP_ENV

This environment is available in our Laravel .env file, but it's not so easy to access in the terminal. So the first step is to add the current environment in the docker-compose.yml file:

---
version: "3"
services:
  backend:
    image: docker-hub.example.org/project/backend-dev:latest
    environment:
      - APP_ENV=local

Now we can access this variable in our shell via $APP_ENV.
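As a quick sanity check (reusing the container name from the example above), the variable should now be visible inside the container:

$ docker exec -it project_backend_1 bash -c 'echo $APP_ENV'
local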

Bash prompt

The bash prompt $PS1 is set in two places in the Ubuntu 16.04 images that we used:

/etc/bash.bashrc
Loaded when bash is used, no matter which user
/root/.bashrc
Loaded after the /etc/ version when the user is root

Both files define $PS1, so we have to load our bash-coloring file in both of them:

Dockerfile

FROM ubuntu:xenial
ADD bash.colorprompt /etc/bash.colorprompt
RUN echo '. /etc/bash.colorprompt' >> /etc/bash.bashrc \
 && echo '. /etc/bash.colorprompt' >> /root/.bashrc

Now the only thing left is to write that file that sets the prompt:

bash.colorprompt

# color the prompt according to $APP_ENV variable
case "$APP_ENV" in
    production)
        PS1='\e[41m\n=== $APP_ENV ===\e[m\n\u@\h:\w\$ '
        ;;
    testing)
        PS1='\e[43m$APP_ENV\e[m \u@\h:\w\$ '
        ;;
    local)
        PS1='\e[42m$APP_ENV\e[m \u@\h:\w\$ '
        ;;
esac

Screenshots

PHPUnit?

The obvious question is why PHPUnit was available on that system in the first place.

Our CI server runs unit/integration tests on every deployment, no matter which environment is being deployed to:

  1. Build container with environment-specific configuration
  2. Run tests in container
  3. Deploy and start container on server

While this is in general a good idea, running the tests on the deployment to every environment is something we later stopped doing.

It turned out to be hard to make sure that every single configuration variable is overwritten in phpunit.xml. And if you can't be sure of this, your tests suddenly use some obscure production service that you forgot to stub out.


Caveats With pt-table-checksum Using Row-Based Replication, and Replication Filters

Planet MySQL - Thu, 11/22/2018 - 07:20

As per the documentation, pt-table-checksum is a tool to perform online replication consistency checks by executing checksum queries on the master, which produces different results on replicas that are inconsistent with the master.

The master and each slave insert checksums into the percona.checksums table, and these are later compared for differences. It’s fairly obvious that the checksums need to be determined independently on each node, and so these inserts must be replicated as STATEMENT and not ROW. Otherwise, the slaves would just insert the same checksum as the master and not calculate it independently.

The tool only requires binlog_format=STATEMENT for its own session. It sets this itself on the master, but will error if this isn't already set on each slave node. The reason for the error is that the statement to change the session's binlog_format will not be replicated. So if a slave has binlog_format=ROW then the slave itself will execute the checksum correctly, but the results will be written as a ROW. Any slaves of this slave in the chain will just insert the same result. See bug 899415.

This is only a problem if we have chained replication, and the error can be skipped with --no-check-binlog-format, so for simple replication setups with ROW or MIXED replication we can still use the tool. If we do not have a slave-of-slave style chained topology, there's no need to worry about this.
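For such a simple setup, a hedged sketch of the invocation could look like this; the host and credentials are hypothetical placeholders, not values from the original post:

pt-table-checksum --no-check-binlog-format h=master1,u=checksum_user,p=secret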

However, there is one caveat to be aware of if you’re using replication filters: when a slave isn’t replicating a particular database due to binlog-ignore-db, this setting behaves differently with ROW based replication (RBR) vs. STATEMENT based.

Per the documentation, with RBR,

binlog-ignore-db=testing

will cause all updates to testing.* to be skipped. With STATEMENT-based replication it will cause all updates after USE testing; to be ignored (regardless of where the updates were being written to).

pt-table-checksum operates in the following way:

use `testing`/*!*/;
SET TIMESTAMP=1541583280/*!*/;
REPLACE INTO `percona`.`checksums` (db, tbl, chunk, chunk_index, lower_boundary, upper_boundary, this_cnt, this_crc)
SELECT 'testing', 'testing', '1', NULL, NULL, NULL,
  COUNT(*) AS cnt,
  COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', `id`, CONCAT(ISNULL(`id`)))) AS UNSIGNED)), 10, 16)), 0) AS crc
FROM `testing`.`testing` /*checksum table*/

Due to the USE testing statement, the slave will then skip these statements with no errors, and simply not write into percona.checksums.
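One way to confirm this is to count the chunks that actually arrived on the replica; a minimal sketch, assuming slave1 is your replica host:

mysql -h slave1 -e "SELECT db, tbl, COUNT(*) AS chunks FROM percona.checksums GROUP BY db, tbl;"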

As per the documentation:

The tool monitors replicas continually. If any replica falls too far behind in replication, pt-table-checksum pauses to allow it to catch up. If any replica has an error, or replication stops, pt-table-checksum pauses and waits.

In this case, you will see the tool continually wait, with the following debug output:

# pt_table_checksum:12398 10967 SELECT MAX(chunk) FROM `percona`.`checksums` WHERE db='testing' AND tbl='testing' AND master_crc IS NOT NULL
# pt_table_checksum:12416 10967 Getting last checksum on slave1
# pt_table_checksum:12419 10967 slave1 max chunk: undef
# pt_table_checksum:12472 10967 Sleep 0.25 waiting for chunks
# pt_table_checksum:12416 10967 Getting last checksum on slave1
# pt_table_checksum:12419 10967 slave1 max chunk: undef
# pt_table_checksum:12472 10967 Sleep 0.5 waiting for chunks
# pt_table_checksum:12416 10967 Getting last checksum on slave1
# pt_table_checksum:12419 10967 slave1 max chunk: undef

We don’t recommend using the tool with replication filters in place, but if --no-check-replication-filters is specified you should be aware of the differences in how different binlog formats handle these filters.
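Before running the tool, it's worth auditing each replica for filters. A hedged sketch (slave1 as in the debug output above); binlog filters show up in SHOW MASTER STATUS on the replica, replication filters in SHOW SLAVE STATUS:

mysql -h slave1 -e "SHOW MASTER STATUS\G" | grep -E 'Binlog_(Do|Ignore)_DB'
mysql -h slave1 -e "SHOW SLAVE STATUS\G" | grep 'Replicate_'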

One workaround would be to replace

binlog-ignore-db=testing

with the following, which will just ignore writes to that database:

binlog-wild-ignore-table=testing.%

More resources

You can read more about pt-table-checksum in this blog post: MySQL Replication Primer with pt-table-checksum and pt-table-sync.

The latest version of Percona Toolkit can be downloaded from our website. All Percona software is open source and free.

Photo by Black ice from Pexels

State of Houdini (Chrome Dev Summit 2018)

CSS-Tricks - Thu, 11/22/2018 - 06:19

Here’s a great talk by Das Surma where he looks into what Houdini is and how much of it is implemented in browsers. If you’re unfamiliar with that, Houdini is a series of technologies and APIs that gives developers low level access to how CSS properties work in a fundamental way. Check out Ana Tudor's deep dive into its impact on animations for some incredible examples of it in practice.

What I particularly like about this video is the way Das mentions the CSS Paint API which lets you do a bunch of bizarre things with CSS, such as creating "squircle" shapes and changing how borders work. It looks wonderfully robust and it should give us super powers in the near future. Ruth John wrote up this extensive overview on the topic earlier this year and it's worth a read as well.


The post State of Houdini (Chrome Dev Summit 2018) appeared first on CSS-Tricks.


Add Instant awesomeness to your interfaces with this insanely large icon set

CSS-Tricks - Thu, 11/22/2018 - 06:18

(This is a sponsored post.)

When you need to add icons to your interface, the whole process can really suck. “Should I use all these default bootstrap icons? Maybe I’ll just use the same Google Material icons for the hundredth time?”

Some important yet often overlooked things to consider when choosing an icon set include the size of the icons, their style, consistency, and quantity. It's frustrating to find icons that only cover half of the use cases you need.

We constantly felt this frustration too and decided to do something about it. This ultimately led to creating the Streamline icon set.

Now in version 3.0, Streamline contains a whopping 30,000 icons in three distinct weights, similar to a font family. There are tons of options to pick the perfect icon for any interface you’re working with, whether it’s a big web application, documentation, or a marketing site.

"I own several icon sets but the one that I return to over and over is the copious Streamline pack, which almost always seems to have just the pictogram I need when I dig into its catalog."

—Khoi Vinh, Adobe

Easy to Use

Streamline has also been meticulously organized into easy-to-navigate categories. You can see all of the categories in our handy dandy web-based icon viewer.

If you’re an IconJar user, you can also search for icons by name and drag and drop them into your project folder. We’re currently developing this functionality for our web viewer too.

Every Streamline Icon pack comes with the following file types: .svg, .iconjar, .sketch, .fig, .ai, .pdf, .png, .xd.

So no matter how you like to work with icons, you have the file types you need.

"Streamline 3.0 is one of the most versatile and detailed icon packs I’ve ever used. The structure and hierarchy make it super easy to work with. This is an amazing product. Bravo, Vincent."

—Stephanie Walter, UX & UI Designer

Optimized SVG

The SVG versions of Streamline are already dev-ready, with proper viewBox attributes in place and currentColor set as the color property for all strokes and fills. You can pop in Streamline using your favorite SVG technique and start changing the color of the icons with CSS right out of the gate.

See the Pen QJQjMm by Matt D. Smith (@mds) on CodePen.

Weights

Every weight—light, regular, and bold—was designed in a very consistent style to keep the icons cohesive within your interface.

Light

The classic Streamline style with bits of detail here and there. Designed with a 1px stroke on a 24px grid. The Light icons are great for interfaces that need lots of fun personality. They also work well scaled up to 48px as small illustrations.

Regular

A new minimal and geometric style. Designed with a 1.5px stroke on a 24px grid. These are perfect to use on clean and modern web interfaces.

Bold

A new solid style akin to the latest iOS guidelines. Designed with fills and a 2px stroke on a 24px grid. The bold style gives a little more punch for an iOS style interface.

Put Streamline to work for you

There are two different package types available—Essential and Ultimate.

Essential contains 14 categories all related to interfaces and web design, whereas the Ultimate pack contains all 53 categories, including things like Pets, Weather, Finance, Outdoors, Transportation, and so much more.

👉 Check out the Streamline site to soak in all of the icon glory.

"Vincent's icons are unique, versatile, and easy to work with. I've found them to be super useful across a range of projects."

—Daniel Burka, Resolve to Save Lives

🤓 Some nerdy facts about the Streamline site:

  • Initial designs created in Figma
  • Coded from scratch with .pug, .sass, and .js
  • CodeKit for compiling all of the code
  • Grunt to create a sprite with all of the SVG assets
  • Animations created in After Effects, exported from AE with Lottie into an icon-animation.json format, and added to the site using bodymovin.js
  • Scrollmagic.js was used to manipulate the DOM based on scroll positions for the large icon parallax sections
  • jQuery.js was used to make our lives easier since we’re building a marketing site and not a full-scale application


The post Add Instant awesomeness to your interfaces with this insanely large icon set appeared first on CSS-Tricks.


MySQL InnoDB Cluster with 2 Data Centers for Disaster Recovery: howto

Planet MySQL - Thu, 11/22/2018 - 05:21

As you know, MySQL InnoDB Cluster is a High Availability solution for MySQL. However, more and more people are trying to use it as a Disaster Recovery solution with 2 data centers. Natively, this is not yet supported, but it’s already possible to realize such a setup if we agree with the following points:

  • a human interaction is required in case of Disaster Recovery which, by my own experience, is often acceptable
  • a human interaction is required if any Primary-Master acting as asynchronous slave leaves its group (crash, network problem, …) or becomes secondary

These are not big constraints and it’s relatively easy to deal with them.

The Architecture

The situation is as follows:

  • 2 data centers (one active, one inactive, only used for disaster recovery)
  • 2 MySQL InnoDB Clusters (one in each DC)
  • 3 members in each cluster (to deal with local failure)
  • 1 router in each DC used for asynchronous replication
  • application server(s) with local router in each DC (only those in the active DC should be available, once again, this is a disaster recovery solution)

Setup

The first step is to take a backup from the current production to restore on the cluster (if you need live migration with minimal downtime, please check this post and/or this tutorial)

Example of backup using MySQL Enterprise Backup, aka meb:

/opt/meb/bin/mysqlbackup \
   --host 127.0.0.1 \
   --backup-dir /vagrant/backup/ \
   backup-and-apply-log

This machine had the following set of GTIDs executed:

mysql> show master status\G
*************************** 1. row ***************************
             File: binlog.000003
         Position: 2019696
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: aa410ab6-edb1-11e8-9e34-08002718d305:1-30

Now we will restore the backup on all the servers/nodes in DC1 (mysql1, mysql2 and mysql3) and DC2 (mysql4, mysql5 and mysql6):

# systemctl stop mysqld
# rm -rf /var/lib/mysql/*
# /opt/meb/bin/mysqlbackup --backup-dir=/vagrant/backup/ copy-back
# chown -R mysql. /var/lib/mysql
# systemctl start mysqld

Let’s start MySQL-Shell (this can be remote or on one of the servers, it doesn’t matter):

# mysqlsh
MySQL>JS> dba.configureInstance('clusteradmin@mysql1')
MySQL>JS> dba.configureInstance('clusteradmin@mysql2')
MySQL>JS> dba.configureInstance('clusteradmin@mysql3')
MySQL>JS> dba.configureInstance('clusteradmin@mysql4')
MySQL>JS> dba.configureInstance('clusteradmin@mysql5')
MySQL>JS> dba.configureInstance('clusteradmin@mysql6')

clusteradmin is a user that was created already in the production server from which we took the backup. The user was created with the following statements:

mysql> create user clusteradmin identified by 'MySQL8isGreat';
mysql> grant all privileges on *.* to 'clusteradmin'@'%' with grant option;

The dba.configureInstance() method will ask you to modify 4 variables on a freshly installed MySQL 8.0.13; please confirm and let the shell restart mysqld.

The 4 variables are:

+--------------------------+---------------+----------------+
| Variable                 | Current Value | Required Value |
+--------------------------+---------------+----------------+
| binlog_checksum          | CRC32         | NONE           |
| enforce_gtid_consistency | OFF           | ON             |
| gtid_mode                | OFF           | ON             |
| server_id                | 1             |                |
+--------------------------+---------------+----------------+

Cluster Creation

When the restart operation is performed, it’s time to connect to one of the nodes (I use mysql1) and create the first cluster:

MYSQL>JS> \c clusteradmin@mysql1
Creating a session to 'clusteradmin@mysql1'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 8 (X protocol)
Server version: 8.0.13 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL>JS> cluster=dba.createCluster('clusterDC1')
A new InnoDB cluster will be created on instance 'clusteradmin@mysql1:3306'.

Validating instance at mysql1:3306...
This instance reports its own address as mysql1
Instance configuration is suitable.

Creating InnoDB cluster 'clusterDC1' on 'clusteradmin@mysql1:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

Let’s verify:

JS> cluster.status()
{
    "clusterName": "clusterDC1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

Adding Instances

We can now add the other members to our cluster:

JS> cluster.addInstance('clusteradmin@mysql2')
JS> cluster.addInstance('clusteradmin@mysql3')

Now we have our first cluster ready (the one in DC1):

JS> cluster.status()
{
    "clusterName": "clusterDC1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

Adding members of DC2

Now we will cover the little trick: we must also add the future nodes of DC2 to the actual cluster of DC1. This is only temporary, but it's needed to avoid playing with replication filters for the InnoDB Cluster metadata (that wouldn't work completely anyway).
So let's add mysql4, mysql5 and mysql6 too:

JS> cluster.addInstance('clusteradmin@mysql4')
JS> cluster.addInstance('clusteradmin@mysql5')
JS> cluster.addInstance('clusteradmin@mysql6')

And we can verify the status:

JS> cluster.status()
{
    "clusterName": "clusterDC1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.",
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql5:3306": {
                "address": "mysql5:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql6:3306": {
                "address": "mysql6:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

Prepare the replication user

On the primary master, in our case mysql1, we need to create a user that will be used for asynchronous replication between the clusters:

mysql> create user 'repl'@'%' identified by 'replication' REQUIRE SSL;
mysql> grant replication slave on *.* to 'repl'@'%';

MySQL Router Bootstrap

As the MySQL  Router doesn’t yet allow the bootstrap method when more than one cluster is available in the InnoDB Cluster metadata, this is the right time to perform this operation (don’t try to perform it later as it will fail):

# mysqlrouter --bootstrap clusteradmin@mysql1 --user mysqlrouter

Note: I've entered a Feature Request #93302.

Second InnoDB Cluster Creation

Before the creation of the second cluster, we need to remove all 3 nodes from the current cluster (clusterDC1):

JS> cluster.removeInstance('clusteradmin@mysql4')
JS> cluster.removeInstance('clusteradmin@mysql5')
JS> cluster.removeInstance('clusteradmin@mysql6')

Now clusterDC1 is back at the requested state:

JS> cluster.status()
{
    "clusterName": "clusterDC1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

But on the nodes of DC2, we still have the InnoDB Cluster metadata information:

SQL> select * from mysql_innodb_cluster_metadata.clusters\G
*************************** 1. row ***************************
         cluster_id: 1
       cluster_name: clusterDC1
 default_replicaset: 1
        description: Default Cluster
mysql_user_accounts: NULL
            options: null
         attributes: {"default": true}

It’s time to connect via MySQL Shell to mysql4 and create the second cluster (clusterDC2):

JS> \c clusteradmin@mysql4
Creating a session to 'clusteradmin@mysql4'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 34 (X protocol)
Server version: 8.0.13 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL>JS> cluster2=dba.createCluster('clusterDC2')
A new InnoDB cluster will be created on instance 'clusteradmin@mysql4:3306'.

The MySQL instance at 'mysql4:3306' currently has the super_read_only system
variable set to protect it from inadvertent updates from applications. You must
first unset it to be able to perform any changes to this instance. For more
information see:
https://dev.mysql.com/doc/refman/en/server-system-variables.html#sysvar_super_read_only.

Note: there are open sessions to 'mysql4:3306'. You may want to kill these
sessions to prevent them from performing unexpected updates:

1 open session(s) of 'root@localhost'.

Do you want to disable super_read_only and continue? [y/N]: y

Validating instance at mysql4:3306...
This instance reports its own address as mysql4
Instance configuration is suitable.

Creating InnoDB cluster 'clusterDC2' on 'clusteradmin@mysql4:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

As you can see, mysql4 was read-only!

We have our second cluster in the cluster2 object:

JS> cluster2.status()
{
    "clusterName": "clusterDC2",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql4:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql4:3306"
}

We can verify in the InnoDB Cluster Metadata that we have 2 clusters now:

SQL> select * from mysql_innodb_cluster_metadata.clusters\G
*************************** 1. row ***************************
         cluster_id: 1
       cluster_name: clusterDC1
 default_replicaset: 1
        description: Default Cluster
mysql_user_accounts: NULL
            options: null
         attributes: {"default": true}
*************************** 2. row ***************************
         cluster_id: 2
       cluster_name: clusterDC2
 default_replicaset: 2
        description: Default Cluster
mysql_user_accounts: NULL
            options: null
         attributes: {"default": true}
2 rows in set (0.0026 sec)

All good!

Before being able to add mysql5 and mysql6 to clusterDC2, we first need to set up and start asynchronous replication from clusterDC1 to mysql4. This is required because, for the moment, mysql5 and mysql6 have extra transactions (GTIDs) that mysql4 is not yet aware of. Indeed, when mysql4 was removed from the cluster, the metadata had been modified and this modification was also replicated via the group to mysql5 and mysql6. Same when mysql5 was removed.

Setup & start asynchronous replication from DC1 to DC2

Setting up asynchronous replication is very easy when GTIDs are used. We only need to run the following 2 commands:

mysql> CHANGE MASTER TO MASTER_HOST='mysql1', MASTER_PORT=3306, MASTER_USER='repl',
       MASTER_PASSWORD='replication', MASTER_AUTO_POSITION=1, MASTER_SSL=1;
mysql> START SLAVE;

And we can check that everything is working as expected:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql1
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000006
          Read_Master_Log_Pos: 54330
               Relay_Log_File: mysql4-relay-bin.000002
                Relay_Log_Pos: 2875
        Relay_Master_Log_File: binlog.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 54330
              Relay_Log_Space: 3076
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: Yes
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1247156063
                  Master_UUID: 5208b04f-edb2-11e8-b3a1-08002718d305
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 9927c511-edb2-11e8-8e54-08002718d305:51-58
            Executed_Gtid_Set: 4ac4ffd9-edb2-11e8-a836-08002718d305:1-4,
5208b04f-edb2-11e8-b3a1-08002718d305:1-12,
9927c511-edb2-11e8-8e54-08002718d305:1-58,
aa410ab6-edb1-11e8-9e34-08002718d305:1-30,
ba9c231d-edb3-11e8-8d74-08002718d305:1-2
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 0

mysql4 retrieved all the changes and now also has the missing GTIDs.

Adding the other nodes

We can now add mysql5 and mysql6 to clusterDC2:

JS> cluster2.addInstance('clusteradmin@mysql5')
JS> cluster2.addInstance('clusteradmin@mysql6')

And we can check clusterDC2‘s status:

JS> cluster2.status()
{
    "clusterName": "clusterDC2",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql4:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql5:3306": {
                "address": "mysql5:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql6:3306": {
                "address": "mysql6:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql4:3306"
}

Asynchronous replication to clusterDC1

We now have our 2 clusters running. It’s time to replicate the metadata of clusterDC2 to clusterDC1.

On mysql1 (the Primary-Master, see status()),  we need to run the following statements:

mysql> CHANGE MASTER TO MASTER_HOST='mysql4', MASTER_PORT=3306, MASTER_USER='repl',
       MASTER_PASSWORD='replication', MASTER_AUTO_POSITION=1, MASTER_SSL=1;
mysql> START SLAVE;

That slave thread could be stopped, certainly since you SHOULD NOT WRITE in DC2. But if you have a lot of metadata changes and you rotate your binary logs, you might end up with errant transactions in case of failover. So I recommend leaving it on.
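If you want to verify that no errant transactions exist, comparing GTID sets is a quick check. A hedged sketch using the hostnames from this setup (GTID_SUBTRACT returns the GTIDs executed on mysql4 but not on mysql1; a non-empty result means errant transactions):

mysql -h mysql1 -u clusteradmin -p -e "SELECT @@GLOBAL.gtid_executed\G"
# paste the set returned above into the placeholder below
mysql -h mysql4 -u clusteradmin -p -e "SELECT GTID_SUBTRACT(@@GLOBAL.gtid_executed, '<gtid_executed from mysql1>') AS errant\G"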

MySQL Router Extra Configuration

We still have to manually modify the MySQL Router. We bootstrapped it earlier. Now it’s time to change the configuration.

We will have to create two configurations for the router, 1 for all the application servers in DC1 and 1 for those in DC2.

/etc/mysqlrouter/mysqlrouter.conf in DC1

# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysqlrouter
keyring_path=/var/lib/mysqlrouter/keyring
master_key_path=/etc/mysqlrouter/mysqlrouter.key
connect_timeout=30
read_timeout=30

[logger]
level = INFO

[metadata_cache:clusterDC1]
router_id=1
bootstrap_server_addresses=mysql://mysql1:3306,mysql://mysql2:3306,mysql://mysql3:3306
user=mysql_router1_8qm3jeohgy4e
metadata_cluster=clusterDC1
ttl=0.5

[routing:clusterDC1_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://clusterDC1/default?role=PRIMARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC1_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://clusterDC1/default?role=SECONDARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC1_default_x_rw]
bind_address=0.0.0.0
bind_port=64460
destinations=metadata-cache://clusterDC1/default?role=PRIMARY
routing_strategy=round-robin
protocol=x

[routing:clusterDC1_default_x_ro]
bind_address=0.0.0.0
bind_port=64470
destinations=metadata-cache://clusterDC1/default?role=SECONDARY
routing_strategy=round-robin
protocol=x

/etc/mysqlrouter/mysqlrouter.conf in DC2

# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysqlrouter
keyring_path=/var/lib/mysqlrouter/keyring
master_key_path=/etc/mysqlrouter/mysqlrouter.key
connect_timeout=30
read_timeout=30

[logger]
level = INFO

[metadata_cache:clusterDC2]
router_id=1
bootstrap_server_addresses=mysql://mysql4:3306,mysql://mysql5:3306,mysql://mysql6:3306
user=mysql_router1_8qm3jeohgy4e
metadata_cluster=clusterDC2
ttl=0.5

[routing:clusterDC2_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://clusterDC2/default?role=PRIMARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC2_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://clusterDC2/default?role=SECONDARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC2_default_x_rw]
bind_address=0.0.0.0
bind_port=64460
destinations=metadata-cache://clusterDC2/default?role=PRIMARY
routing_strategy=round-robin
protocol=x

[routing:clusterDC2_default_x_ro]
bind_address=0.0.0.0
bind_port=64470
destinations=metadata-cache://clusterDC2/default?role=SECONDARY
routing_strategy=round-robin
protocol=x

Of course, the configuration file, the key (/etc/mysqlrouter/mysqlrouter.key) and the keyring file (/var/lib/mysqlrouter/keyring) must be copied to all routers, and their ownership changed to mysqlrouter’s user (mysqlrouter).
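A hedged sketch of that copy step, with router2 standing in for each remaining router host:

scp /etc/mysqlrouter/mysqlrouter.conf /etc/mysqlrouter/mysqlrouter.key router2:/etc/mysqlrouter/
scp /var/lib/mysqlrouter/keyring router2:/var/lib/mysqlrouter/
ssh router2 "chown mysqlrouter /etc/mysqlrouter/mysqlrouter.* && chown -R mysqlrouter /var/lib/mysqlrouter/"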

MySQL Router for Replication

As explained in this post, we can also use the MySQL Router in case the Primary-Master acting as asynchronous master dies. If it’s the asynchronous slave that dies, you need to promote the new Primary-Master as asynchronous slave manually.

On a dedicated machine, or on all nodes, you need to set up the MySQL Router to point to the other DC. So in DC1, you need to have the same router configuration as the application servers in DC2. And the opposite in DC2.

For example, I used mysql1 as router for clusterDC2 and mysql4 as router for clusterDC1.

[root@mysql1 vagrant]# cp mysqlrouter.key /etc/mysqlrouter/
[root@mysql1 vagrant]# cp dc2_mysqlrouter.conf /etc/mysqlrouter/mysqlrouter.conf
cp: overwrite ‘/etc/mysqlrouter/mysqlrouter.conf’? y
[root@mysql1 vagrant]# mkdir /var/lib/mysqlrouter
[root@mysql1 vagrant]# cp keyring /var/lib/mysqlrouter/
[root@mysql1 vagrant]# chown mysqlrouter -R /var/lib/mysqlrouter/
[root@mysql1 vagrant]# chown mysqlrouter /etc/mysqlrouter/mysqlrouter.*
[root@mysql1 vagrant]# systemctl start mysqlrouter
[root@mysql1 vagrant]# systemctl status mysqlrouter
● mysqlrouter.service - MySQL Router
   Loaded: loaded (/usr/lib/systemd/system/mysqlrouter.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-11-22 12:32:50 UTC; 4s ago
 Main PID: 15403 (main)
   CGroup: /system.slice/mysqlrouter.service
           └─15403 /usr/bin/mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

Nov 22 12:32:50 mysql1 systemd[1]: Started MySQL Router.
Nov 22 12:32:50 mysql1 systemd[1]: Starting MySQL Router...

And the same on mysql4 using the config of DC1.

We can test; we need to use port 6446 to reach the Primary-Master of each cluster:

[root@mysql1 ~]# mysql -h mysql1 -P 6446 -u clusteradmin -p -e "select @@hostname"
Enter password:
+------------+
| @@hostname |
+------------+
| mysql4     |
+------------+
[root@mysql1 ~]# mysql -h mysql4 -P 6446 -u clusteradmin -p -e "select @@hostname"
Enter password:
+------------+
| @@hostname |
+------------+
| mysql1     |
+------------+

Perfect, now we just need to change the asynchronous replication.

Let’s start on mysql1:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='mysql1', MASTER_PORT=6446, MASTER_USER='repl',
       MASTER_PASSWORD='replication', MASTER_AUTO_POSITION=1, MASTER_SSL=1;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql1
                  Master_User: repl
                  Master_Port: 6446
                Connect_Retry: 60
              Master_Log_File: binlog.000006
          Read_Master_Log_Pos: 37341
               Relay_Log_File: mysql1-relay-bin.000002
                Relay_Log_Pos: 391
        Relay_Master_Log_File: binlog.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...

We need to perform the same change on mysql4, which needs to use mysql4 as its master host with port 6446.

Conclusion

As you can see, it’s possible to use MySQL InnoDB Cluster as a High Availability (HA) solution and extend it for Disaster Recovery (DR) using asynchronous replication.

Please keep in mind that it is highly recommended to write in only one DC at a time.


PHP 7.3.0RC6 Released - PHP: Hypertext Preprocessor

Planet PHP - Wed, 11/21/2018 - 16:00
The PHP team is glad to announce the presumably last PHP 7.3.0 pre-release, PHP 7.3.0RC6. The rough outline of the PHP 7.3 release cycle is specified in the PHP Wiki.

For source downloads of PHP 7.3.0RC6 please visit the download page. Windows sources and binaries can be found on windows.php.net/qa/.

Please carefully test this version and report any issues found in the bug reporting system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION!

For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. Internal changes are listed in the UPGRADING.INTERNALS file. These files can also be found in the release archive.

The next release would be 7.3.0 (GA), planned for December 6th. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.

Percona Server for MySQL 5.7.23-25 Is Now Available

Planet MySQL - Wed, 11/21/2018 - 13:41

Percona announces the release of Percona Server for MySQL 5.7.23-25 on November 21, 2018 (downloads are available here and from the Percona Software Repositories). This release merges changes of MySQL 5.7.23, including all the bug fixes in it. Percona Server for MySQL 5.7.23-25 is now the current GA release in the 5.7 series. All of Percona’s software is open-source and free.

This release fixes a critical bug in a RocksDB submodule.

Bugs Fixed
  • #5049: Severe memory leak regression in the RocksDB Block Cache

Find the release notes for Percona Server for MySQL 5.7.23-25 in our online documentation. Report bugs in the Jira bug tracker.


Percona Server for MySQL 5.5.62-38.14 Is Now Available

Planet MySQL - Wed, 11/21/2018 - 09:57

Percona announces the release of Percona Server for MySQL 5.5.62-38.14 on November 21, 2018 (downloads are available here and from the Percona Software Repositories). This release merges changes of MySQL 5.5.62, including all the bug fixes in it. Percona Server for MySQL 5.5.62-38.14 is now the current GA release in the 5.5 series. All of Percona’s software is open-source and free.

Note that Percona Server for MySQL 5.5.62-38.14 is the last release of the 5.5 series. This series goes EOL on December 1st, 2018.

Improvements
  • #4790: The accuracy of user statistics has been improved
Bugs Fixed
  • The binary log could be corrupted when the disk partition used for temporary files (tmpdir system variable) had little free space. Bug fixed #1107
  • PURGE CHANGED_PAGE_BITMAPS did not work when the innodb_data_home_dir system variable was used. Bug fixed #4723
Other Bugs Fixed
  • #4773: Percona Server sources can’t be compiled without server.
  • #4781: sql_yacc.yy uses SQLCOM_SELECT instead of SQLCOM_SHOW_XXXX_STATS

Find the release notes for Percona Server for MySQL 5.5.62-38.14 in our online documentation. Report bugs in the Jira bug tracker.


Swipeable card stack using Vue.js and interact.js

CSS-Tricks - Wed, 11/21/2018 - 06:53

I recently had an opportunity to work on a fantastic research and development project at Netguru. The goal of the project (codename: "Wordguru") was to create a card game that anyone can play with their friends. You can see the outcome here.

One element of the development process was to create an interactive card stack. The card stack had a set of requirements, including:

  • It should contain a few cards from the collection.
  • The first card should be interactive.
  • The user should be able to swipe the card in different directions that indicate an intent to accept, reject or skip the card.

This article will explain how to create that and make it interactive using Vue.js and interact.js. I created an example for you to refer to as we go through the process of creating a component that is in charge of displaying that card stack and a second component that is responsible for rendering a single card and managing user interactions in it.

View Demo

Step 1: Create the GameCard component in Vue

Let’s start by creating a component that will show a card, but without any interactions just yet. We’ll call this file GameCard.vue and, in the component template, we’ll render a card wrapper and the keyword for a specific card. This is the file we’ll be working in throughout this post.

// GameCard.vue
<template>
  <div
    class="card"
    :class="{ isCurrent: isCurrent }"
  >
    <h3 class="cardTitle">{{ card.keyword }}</h3>
  </div>
</template>

In the script section of the component, we receive the prop card that contains our card content as well as an isCurrent prop that gives the card a distinct look when needed.

export default {
  props: {
    card: {
      type: Object,
      required: true
    },
    isCurrent: {
      type: Boolean,
      required: true
    }
  }
};

Step 2: Create the GameCardStack component in Vue

Now that we have a single card, let's create our card stack.

This component will receive an array of cards and render the GameCard for each card. It's also going to mark the first card as the current card in the stack so a special styling is applied to it.

// GameCardsStack.vue
<template>
  <div class="cards">
    <GameCard
      v-for="(card, index) in cards"
      :key="card"
      :card="card"
      :is-current="index === 0"
    />
  </div>
</template>

<script>
import GameCard from "@/components/GameCard";

export default {
  components: {
    GameCard
  },
  props: {
    cards: {
      type: Array,
      required: true
    }
  }
};
</script>

Here’s what we’re looking at so far, using the styles pulled from the demo:

At this point, our card looks complete, but isn't very interactive. Let's fix that in the next step!

Step 3: Add interactivity to GameCard component

All our interactivity logic will live in the GameCard component. Let's start by allowing the user to drag the card. We will use interact.js to deal with dragging.

We’ll set the interactPosition initial values to 0 in the script section. These are the values that indicate a card’s order in the stack when it’s moved from its original position.

<script>
import interact from "interact.js";

data() {
  return {
    interactPosition: {
      x: 0,
      y: 0
    },
  };
},
// ...
</script>

Next, we create a computed property that’s responsible for creating a transform value that’s applied to our card element.

// ...
computed: {
  transformString() {
    const { x, y } = this.interactPosition;
    return `translate3D(${x}px, ${y}px, 0)`;
  }
},
// ...

In the mounted lifecycle hook, we make use of interact.js and its draggable method. That method allows us to fire a custom function each time the element is dragged (onmove). It also exposes an event object that carries information about how far the element is dragged from its original position. Each time the user drags the card, we calculate a new position of the card and set it on the interactPosition property. That triggers our transformString computed property and sets a new value of transform on our card.

We use the interact onend hook that allows us to listen when the user releases the mouse and finishes the drag. At this point, we will reset the position of our card and bring it back to its original position: { x: 0, y: 0 }.

We also need to make sure to remove the card element from the Interactable object before it gets destroyed. We do that in the beforeDestroy lifecycle hook by using interact(target).unset(). That removes all event listeners and makes interact.js completely forget about the target.

// ...
mounted() {
  const element = this.$refs.interactElement;

  interact(element).draggable({
    onmove: event => {
      const x = this.interactPosition.x + event.dx;
      const y = this.interactPosition.y + event.dy;
      this.interactSetPosition({ x, y });
    },
    onend: () => {
      this.resetCardPosition();
    }
  });
},
// ...
beforeDestroy() {
  interact(this.$refs.interactElement).unset();
},
// ...
methods: {
  interactSetPosition(coordinates) {
    const { x = 0, y = 0 } = coordinates;
    this.interactPosition = { x, y };
  },
  resetCardPosition() {
    this.interactSetPosition({ x: 0, y: 0 });
  },
},
// ...

We need to add one thing in our template to make this work. As our transformString computed property returns a string, we need to apply it to the card component. We do that by binding to the :style attribute and then passing the string to the transform property.

<template>
  <div
    class="card"
    :class="{ isCurrent: isCurrent }"
    :style="{ transform: transformString }"
  >
    <h3 class="cardTitle">{{ card.keyword }}</h3>
  </div>
</template>

With that done, we have created interaction with our card — we can drag it around!

You may have noticed that the behavior isn’t very natural, specifically when we drag the card and release it. The card immediately returns to its original position; it would feel more natural if the card went back to its initial position with an animation to smooth the transition.

That’s where transition comes into play! But adding it to our card introduces another issue: the card lags behind the cursor while being dragged, because the transition is applied to the element at all times. We only want it applied when the drag ends. We can do that by binding one more class (isAnimating) to the component.

<template>
  <div
    class="card"
    :class="{ isAnimating: isInteractAnimating, isCurrent: isCurrent }"
  >
    <h3 class="cardTitle">{{ card.keyword }}</h3>
  </div>
</template>

We can add and remove the animation class by changing the isInteractAnimating property.

The animation effect should be applied initially and we do that by setting our property in data.

In the mounted hook where we initialize interact.js, we use one more interact hook (onstart) and change the value of isInteractAnimating to false so that the animation is disabled during the drag.

We’ll enable the animation again in the onend hook, and that will make our card animate smoothly to its original position when we release it from the drag.

We also need to update transformString computed property and add a guard to recalculate and return a string only when we are dragging the card.

data() {
  return {
    // ...
    isInteractAnimating: true,
    // ...
  };
},

computed: {
  transformString() {
    if (!this.isInteractAnimating) {
      const { x, y } = this.interactPosition;
      return `translate3D(${x}px, ${y}px, 0)`;
    }
    return null;
  }
},

mounted() {
  const element = this.$refs.interactElement;

  interact(element).draggable({
    onstart: () => {
      this.isInteractAnimating = false;
    },
    // ...
    onend: () => {
      this.isInteractAnimating = true;
    },
  });
},

Now things are starting to look nice!

Our card stack is ready for a second set of interactions. We can drag the card around, but nothing is actually happening — the card always comes back to its original place, and there is no way to get to the second card.

This will change when we add logic that allows the user to accept and reject cards.

Step 4: Detect when the card is accepted, rejected, or skipped

The card has three types of interactions:

  • Accept card (on swipe right)
  • Reject card (on swipe left)
  • Skip card (on swipe down)

We need to find a place where we can detect if the card was dragged from its initial position. We also want to be sure that this check will happen only when we finish dragging the card so the interactions do not conflict with the animation we just finished.

We used that place earlier to smooth the transition during the animation: it's the onend hook provided by the interact.draggable method.

Let's jump into the code.

First, we need to store our threshold values. Those values are the distances the card must be dragged from its original position and allow us to determine if the card should be accepted, rejected, or skipped. We use the X axis for right (accept) and left (reject), then use the Y axis for downward movement (skip).

We also set coordinates where we want to place a card after it gets accepted, rejected or skipped (coordinates out of user's sight).

Since those values will not change, we will keep them in the static property of our component, which can be accessed with this.$options.static.interactYThreshold.

export default {
  static: {
    interactYThreshold: 150,
    interactXThreshold: 100
  },

We need to check if any of our thresholds was met in our onend hook and then fire the method for the action that occurred. If no threshold is met, we reset the card to its initial position.

mounted() {
  const element = this.$refs.interactElement;

  interact(element).draggable({
    onstart: () => {...},
    onmove: () => {...},
    onend: () => {
      const { x, y } = this.interactPosition;
      const { interactXThreshold, interactYThreshold } = this.$options.static;
      this.isInteractAnimating = true;

      if (x > interactXThreshold) this.playCard(ACCEPT_CARD);
      else if (x < -interactXThreshold) this.playCard(REJECT_CARD);
      else if (y > interactYThreshold) this.playCard(SKIP_CARD);
      else this.resetCardPosition();
    }
  });
}

OK, now we need to create a playCard method that’s responsible for handling those interactive actions.

Step 5: Establish the logic to accept, reject, and skip cards

We will create a method that accepts a parameter telling us the user’s intended action. Depending on that parameter, we will set the final position of the current card and emit the accept, reject, or skip event. Let's go step by step.

First, our playCard method will remove the card element from the Interactable object so that it stops tracking drag events. We do that by using interact(target).unset().
Secondly, we set the final position of the active card depending on the user's intention. That new position allows us to animate the card and remove it from the user's view.

Next, we emit an event up to the parent component so we can deal with our cards (e.g. change the current card, load more cards, shuffle the cards, etc.). We want to follow the DDAU principle that states a component should refrain from mutating data it doesn't own. Since our cards are passed down to our component, it should emit an event up to the place from where those cards come.

Lastly, we hide the card that was just played and add a timeout that allows the card to animate out of view.

methods: {
  playCard(interaction) {
    const {
      interactOutOfSightXCoordinate,
      interactOutOfSightYCoordinate,
    } = this.$options.static;

    this.interactUnsetElement();

    switch (interaction) {
      case ACCEPT_CARD:
        this.interactSetPosition({
          x: interactOutOfSightXCoordinate,
        });
        this.$emit(ACCEPT_CARD);
        break;
      case REJECT_CARD:
        this.interactSetPosition({
          x: -interactOutOfSightXCoordinate,
        });
        this.$emit(REJECT_CARD);
        break;
      case SKIP_CARD:
        this.interactSetPosition({
          y: interactOutOfSightYCoordinate
        });
        this.$emit(SKIP_CARD);
        break;
    }

    this.hideCard();
  },

  hideCard() {
    setTimeout(() => {
      this.isShowing = false;
      this.$emit("hideCard", this.card);
    }, 300);
  },

  interactUnsetElement() {
    interact(this.$refs.interactElement).unset();
    this.interactDragged = true;
  },
}

And, there we go!

Summary

Let's recap what we just accomplished:

  • First we created a component for a single card.
  • Next we created another component that renders the cards in a stack.
  • Thirdly, we implemented interact.js to allow interactive dragging.
  • Then we detected when the user wants to take an action with the current card.
  • Finally, we established the logic to handle those actions.

Phew, we covered a lot! Hopefully this gives you both a new trick in your toolbox as well as a hands-on use case for Vue. And, if you’ve ever had to build something similar, please share in the comments because it would be neat to compare notes.

The post Swipeable card stack using Vue.js and interact.js appeared first on CSS-Tricks.


Exclusions will hopefully be like more powerful grid-friendly floats

CSS-Tricks - Wed, 11/21/2018 - 06:49

Exclusions (which are currently in a "working draft" spec as I write) are kinda like float in that they allow inline content to wrap around an element. But not exactly a float. Chen Hui Jing has an excellent explanation:

An exclusion element is a block-level element which is not a float, and generates an exclusion box. An exclusion element establishes a new block formatting context.

An element becomes an exclusion when its wrap-flow property is computed to something other than its initial value of auto. When an element becomes an exclusion, inline content will wrap around the exclusion areas, but within their own formatting contexts.

Source: Chen's article

Support is limited to Edge and IE (again, as I write):

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome No | Opera No | Firefox No | IE 10* | Edge 12* | Safari No
Mobile / Tablet: iOS Safari No | Opera Mobile No | Opera Mini No | Android No | Android Chrome No | Android Firefox No

Chen makes a great case for why they are useful, but another round of discussion has cropped up lately as well. Rob Weychert documents a simple layout situation in which an image is floated left and text is wrapping around it:

Source: Rob's article

As those light red bars indicate, Rob has set up some display: grid; columns to align elements in the article to those axes. A classic "editorial layout" indeed. But there really is no good mechanism to place that image onto the grid and maintain the wrapping. By placing both the content and the image into separate grid items, you don't get the wrapping. You can use float, but that's not using the grid.

Rachel Andrew chimed in that the answer is CSS exclusions. While Rob's example ultimately had to use floats, Rachel re-did it with exclusions. Exclusions make the code much simpler.

/* with floats, replicating exactly what the grid is doing */
img {
  float: left;
  width: calc( 3 * ((100% - 50px - 15em) / 6) + 50px + 2em );
}

/* with exclusions, using the grid */
img {
  grid-row: 2;
  grid-column: 1 / 3;
  -ms-wrap-flow: both;
}

Perhaps we can chime in with thumbs up on Rachel's call to see what's up with the status of the spec and with other author use cases.

The post Exclusions will hopefully be like more powerful grid-friendly floats appeared first on CSS-Tricks.


How to Encrypt Your MySQL & MariaDB Backups

Planet MySQL - Wed, 11/21/2018 - 02:58

We usually take care of things we value, whether it is an expensive smartphone or the company’s servers. Data is one of the most important assets of the organisation, and although we do not see it, it has to be carefully protected. We implement data at rest encryption to encrypt database files or whole volumes which contain our data. We implement data in transit encryption using SSL to make sure no one can sniff and collect data sent across networks. Backups are no different. No matter if this is a full backup or incremental, it will store at least a part of your data. As such, backups have to be encrypted too. In this blog post, we will look into some options you may have when it comes to encrypting backups. First though, let’s look at how you can encrypt your backups and what could be use cases for those methods.

How to encrypt your backup? Encrypt local file

First of all, you can easily encrypt existing files. Let’s imagine that you have a backup process storing all your backups on a backup server. At some point you decided it’s high time to implement offsite backup storage for disaster recovery. You can use S3 or similar infrastructure from other cloud providers for that. Of course, you don’t want to upload unencrypted backups anywhere outside of your trusted network, therefore it is critical that you implement backup encryption one way or the other. A very simple method, not requiring any changes in your existing backup scripts, would be to create a script which will take a backup file, encrypt it and upload it to S3. One of the methods you can use to encrypt the data is openssl:

openssl enc -aes-256-cbc -salt -in backup_file.tar.gz -out backup_file.tar.gz.enc -k yoursecretpassword

This will create a new, encrypted file, ‘backup_file.tar.gz.enc’ in the current directory. You can always decrypt it later by running:

openssl aes-256-cbc -d -in backup_file.tar.gz.enc -out backup_file.tar.gz -k yoursecretpassword

This method is very simple, but it has some drawbacks. The biggest one is the disk space requirement. When encrypting like we described above, you have to keep both the unencrypted and encrypted file, and as a result you require disk space twice the size of the backup file. Of course, depending on your requirements, this might be something you want - keeping non-encrypted files locally will improve recovery speed - after all, decrypting the data will also take some time.
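If you decide you don't need the plaintext copy, a minimal encrypt-then-remove sketch looks like this (shred is a common choice, though on journaling or copy-on-write filesystems it does not guarantee the data is unrecoverable):

openssl enc -aes-256-cbc -salt -in backup_file.tar.gz -out backup_file.tar.gz.enc -k yoursecretpassword \
  && shred -u backup_file.tar.gz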

Encrypt backup on the fly

To avoid the need to store both encrypted and unencrypted data, you may want to implement the encryption at an earlier stage of the backup process. We can do that through pipes. Pipes are, in short, a way of sending the data from one command to another. This makes it possible to create a chain of commands that processes data. You can generate the data, then compress and encrypt it. An example of such a chain might be:

mysqldump --all-databases --single-transaction --triggers --routines | gzip | openssl enc -aes-256-cbc -k mypass > backup.xb.enc

You can also use this method with xtrabackup or mariabackup. In fact, this is the example from MariaDB documentation:

mariabackup --user=root --backup --stream=xbstream | openssl enc -aes-256-cbc -k mypass > backup.xb.enc
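Restoring reverses the chain. A hedged sketch, assuming mbstream (MariaDB's build of xbstream) is installed and accepts the usual -x/-C extract options:

mkdir -p /restore
openssl enc -d -aes-256-cbc -k mypass -in backup.xb.enc | mbstream -x -C /restore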

If you want, you can even try to upload data as the part of the process:

mysqldump --all-databases --single-transaction --triggers --routines | gzip | openssl enc -aes-256-cbc -k mysecretpassword | tee -a mysqldump.gz.enc | nc 10.0.0.102 9991

As a result, you will see a local file ‘mysqldump.gz.enc’, and a copy of the data will be piped to a program which will do something with it. We used ‘nc’, which streamed the data to another host on which the following was executed:

nc -l 9991 > backup.gz.enc

In this example we used mysqldump and nc but you can use xtrabackup or mariabackup and some tool which will upload the stream to Amazon S3, Google Cloud Storage or some other cloud provider. Any program or script which accepts data on stdin can be used instead of ‘nc’.
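For example, with the AWS CLI you could stream straight to S3 in place of ‘nc’; a hedged sketch, with a hypothetical bucket name:

mariabackup --user=root --backup --stream=xbstream \
  | openssl enc -aes-256-cbc -k mypass \
  | aws s3 cp - s3://my-backup-bucket/backup.xb.enc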

Use embedded encryption

In some cases, a backup tool has embedded support for encryption. An example here is xtrabackup, which can encrypt the files internally. Unfortunately mariabackup, even though it is a fork of xtrabackup, does not support this feature.

Before we can use it, we have to create a key which will be used to encrypt the data. This can be done by running the following command:

root@vagrant:~# openssl rand -base64 24
HnliYiaRo7NUvc1dbtBMvt4rt1Fhunjb

Xtrabackup can accept the key in plain text format (using the --encrypt-key option) or it can read it from a file (using the --encrypt-key-file option). The latter is safer, as passing passwords and keys as plain text to commands results in storing them in the bash history. The key is also clearly visible in the list of running processes, which is quite bad:

root      1130  0.0  0.6  65508  4988 ?     Ss   00:46  0:00 /usr/sbin/sshd -D
root     13826  0.0  0.8  93100  6648 ?     Ss   01:26  0:00  \_ sshd: root@notty
root     25363  0.0  0.8  92796  6700 ?     Ss   08:54  0:00  \_ sshd: vagrant [priv]
vagrant  25393  0.0  0.6  93072  4936 ?     S    08:54  0:01  |   \_ sshd: vagrant@pts/1
vagrant  25394  0.0  0.4  21196  3488 pts/1 Ss   08:54  0:00  |       \_ -bash
root     25402  0.0  0.4  52700  3568 pts/1 S    08:54  0:00  |           \_ sudo su -
root     25403  0.0  0.4  52284  3264 pts/1 S    08:54  0:00  |               \_ su -
root     25404  0.0  0.4  21196  3536 pts/1 S    08:54  0:00  |                   \_ -su
root     26686  6.0  4.0 570008 30980 pts/1 Sl+  09:48  0:00  |                       \_ innobackupex --encrypt=AES256 --encrypt-key=TzIZ7g+WzLt0PXWf8WDPf/sjIt7UzCKw /backup/

Ideally, you will use the key stored in a file, but then there’s a small gotcha you have to be aware of.

root@vagrant:~# openssl rand -base64 24 > encrypt.key
root@vagrant:~# innobackupex --encrypt=AES256 --encrypt-key-file=/root/encrypt.key /backup/
.
.
.
xtrabackup: using O_DIRECT
InnoDB: Number of pools: 1
encryption: unable to set libgcrypt cipher key - User defined source 1 : Invalid key length
encrypt: failed to create worker threads.
Error: failed to initialize datasink.

You may wonder what the problem is. It becomes clear when we open the encrypt.key file in a hexadecimal editor like hexedit:

00000000 6D 6B 2B 4B 66 69 55 4E 5A 49 48 77 39 42 36 72 68 70 39 79 6A 56 44 72 47 61 79 45 6F 75 6D 70 0A mk+KfiUNZIHw9B6rhp9yjVDrGayEoump.

Notice the last byte, encoded as ‘0A’. This is the line feed character, but it is taken into consideration when evaluating the encryption key, making the key 33 bytes long instead of the expected 32 and triggering the ‘Invalid key length’ error. Once we remove it, we can finally run the backup.
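
One way to sidestep the issue is to strip the line feed when generating the key in the first place, for example:

root@vagrant:~# openssl rand -base64 24 | tr -d '\n' > encrypt.key   # drops the trailing line feed so the key file is exactly 32 bytes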

root@vagrant:~# innobackupex --encrypt=AES256 --encrypt-key-file=/root/encrypt.key /backup/
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --innodb_buffer_pool_size=185M --innodb_flush_log_at_trx_commit=2 --innodb_file_per_table=1 --innodb_data_file_path=ibdata1:100M:autoextend --innodb_read_io_threads=4 --innodb_write_io_threads=4 --innodb_doublewrite=1 --innodb_log_file_size=64M --innodb_log_buffer_size=16M --innodb_log_files_in_group=2 --innodb_flush_method=O_DIRECT --server-id=1
xtrabackup: recognized client arguments: --datadir=/var/lib/mysql --innodb_buffer_pool_size=185M --innodb_flush_log_at_trx_commit=2 --innodb_file_per_table=1 --innodb_data_file_path=ibdata1:100M:autoextend --innodb_read_io_threads=4 --innodb_write_io_threads=4 --innodb_doublewrite=1 --innodb_log_file_size=64M --innodb_log_buffer_size=16M --innodb_log_files_in_group=2 --innodb_flush_method=O_DIRECT --server-id=1 --databases-exclude=lost+found --ssl-mode=DISABLED
encryption: using gcrypt 1.6.5
181116 10:11:25 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

181116 10:11:25  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;mysql_socket=/var/lib/mysql/mysql.sock' as 'backupuser'  (using password: YES).
181116 10:11:25  version_check Connected to MySQL server
181116 10:11:25  version_check Executing a version check against the server...
181116 10:11:25  version_check Done.
181116 10:11:25 Connecting to MySQL server host: localhost, user: backupuser, password: set, port: not set, socket: /var/lib/mysql/mysql.sock
Using server version 5.7.23-23-57
innobackupex version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:100M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 67108864
xtrabackup: using O_DIRECT
InnoDB: Number of pools: 1
181116 10:11:25 >> log scanned up to (2597648)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 19 for mysql/server_cost, old maximum was 0
181116 10:11:25 [01] Encrypting ./ibdata1 to /backup/2018-11-16_10-11-25/ibdata1.xbcrypt
181116 10:11:26 >> log scanned up to (2597648)
181116 10:11:27 >> log scanned up to (2597648)
181116 10:11:28 [01]        ...done
181116 10:11:28 [01] Encrypting ./mysql/server_cost.ibd to /backup/2018-11-16_10-11-25/mysql/server_cost.ibd.xbcrypt
181116 10:11:28 [01]        ...done
181116 10:11:28 [01] Encrypting ./mysql/help_category.ibd to /backup/2018-11-16_10-11-25/mysql/help_category.ibd.xbcrypt
181116 10:11:28 [01]        ...done
181116 10:11:28 [01] Encrypting ./mysql/slave_master_info.ibd to /backup/2018-11-16_10-11-25/mysql/slave_master_info.ibd.xbcrypt
181116 10:11:28 [01]        ...done

As a result we will end up with a backup directory full of encrypted files:

root@vagrant:~# ls -alh /backup/2018-11-16_10-11-25/
total 101M
drwxr-x---  5 root root 4.0K Nov 16 10:11 .
drwxr-xr-x 16 root root 4.0K Nov 16 10:11 ..
-rw-r-----  1 root root  580 Nov 16 10:11 backup-my.cnf.xbcrypt
-rw-r-----  1 root root  515 Nov 16 10:11 ib_buffer_pool.xbcrypt
-rw-r-----  1 root root 101M Nov 16 10:11 ibdata1.xbcrypt
drwxr-x---  2 root root 4.0K Nov 16 10:11 mysql
drwxr-x---  2 root root  12K Nov 16 10:11 performance_schema
drwxr-x---  2 root root  12K Nov 16 10:11 sys
-rw-r-----  1 root root  113 Nov 16 10:11 xtrabackup_checkpoints
-rw-r-----  1 root root  525 Nov 16 10:11 xtrabackup_info.xbcrypt
-rw-r-----  1 root root 2.7K Nov 16 10:11 xtrabackup_logfile.xbcrypt

Xtrabackup has some other options which can be used to tune encryption performance; an example combining them follows the list:

  • --encrypt-threads allows for parallelization of the encryption process
  • --encrypt-chunk-size defines the size of the working buffer for the encryption process
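
A minimal example combining both options; the values here are illustrative, not tuned recommendations:

innobackupex --encrypt=AES256 --encrypt-key-file=/root/encrypt.key --encrypt-threads=4 --encrypt-chunk-size=512K /backup/   # thread count and chunk size are example values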

Should you need to decrypt the files, you can use innobackupex with the --decrypt option:

root@vagrant:~# innobackupex --decrypt=AES256 --encrypt-key-file=/root/encrypt.key /backup/2018-11-16_10-11-25/

As xtrabackup does not clean up the encrypted files after decryption, you may want to remove them using the following one-liner:

for i in `find /backup/2018-11-16_10-11-25/ -iname "*\.xbcrypt"`; do rm $i ; done

Backup encryption in ClusterControl

With ClusterControl, encrypted backups are just one click away. All backup methods (mysqldump, xtrabackup or mariabackup) support encryption. You can either create a backup ad hoc or prepare a schedule for your backups.

In our example we picked a full xtrabackup backup, which we will store on the controller instance.

On the next page we enabled the encryption. As stated, ClusterControl will automatically create an encryption key for us. That is it: when you click the “Create Backup” button, the process starts.

The new backup is visible in the backup list and is marked as encrypted (the lock icon).

We hope that this blog post gives you some insights into how to make sure your backups are properly encrypted.

Tags:  MySQL MariaDB backup encryption
Categories: Web Technologies

MySQL Multi Source Replication With GTID and Non-GTID Mode

Planet MySQL - Wed, 11/21/2018 - 02:30

During our recent migration we moved a part of an app to a different region with a new database. Our previous setup was MySQL 5.6.39 with a single write master and multiple slaves to read data.
Current Setup:


But now there will be two masters: one for the old service and a second one for the newly migrated service. We moved the new master to MySQL 5.7 with GTID, MTS and logical clock based replication. So our use case was to read the whole data set (written on master1 + master2) from the slaves.

So now the question is: what did we do to achieve this?

Below is the new setup:

Now the question is: what is slave 5 doing, and how can we generate GTIDs on an intermediate slave?
Answer: With the help of community expert Jean-François Gagné, we got to know that we can generate GTIDs on an intermediate slave using the process described in this blog post: https://goo.gl/aKBxLU
So we compiled MySQL from the patched source code and rolled out GTID on it. It worked and started generating GTIDs. We then upgraded our existing slaves from MySQL 5.6 to 5.7 and configured a new replication channel for Master 2 on them.
Conclusion: In this way we configured multi-source replication in which we get data from both a non-GTID master and a GTID master. A sketch of the channel configuration follows.
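
Below is a minimal sketch of what the two channels on the final slaves might look like in MySQL 5.7; hostnames and credentials are placeholders. Since slave 5 adds GTIDs to Master 1’s stream, both channels can use auto-positioning:

mysql> -- channel for the non-GTID master, reached via the intermediate slave that adds GTIDs
mysql> CHANGE MASTER TO MASTER_HOST='slave5.example.com', MASTER_USER='repl', MASTER_PASSWORD='replpass', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master1';
mysql> -- channel for the native GTID master
mysql> CHANGE MASTER TO MASTER_HOST='master2.example.com', MASTER_USER='repl', MASTER_PASSWORD='replpass', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master2';
mysql> START SLAVE FOR CHANNEL 'master1';
mysql> START SLAVE FOR CHANNEL 'master2';

Note that multi-source replication requires master_info_repository=TABLE and relay_log_info_repository=TABLE on the slave.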
Categories: Web Technologies

Prettier & Beautify

CSS-Tricks - Tue, 11/20/2018 - 15:02

Aww, what a cute blog post title, right?

Prettier is an "opinionated code formatter." I highly suggest using it. They have a document going deeper into the reasons, but their three marketing bullet points say it best:

  • You press save and code is formatted
  • No need to discuss style in code review
  • Saves you time and energy

But Prettier doesn't do all languages. Notably HTML. It's great at JSX, and I've gotten really used to enjoying that. But then when I switch to a Rails .erb template or a WordPress site where I'm editing a .php file, or even a plain ol' .html file... no Prettier.

They have a development branch for it, but I haven't tried that yet. For now, I was compelled to try an HTML prettification tool that's been out in the wild and through the wringer. The VS Code plugin Beautify does it and has 6.5 million installs, so that seemed like a good bet.

Using this plugin means you can configure it, rather than relying on VS Code's internal version of it, which is apparently non-configurable.

The problem is...

The first time I tried this, I found that it made my Prettier stop working. That was unacceptable to me, as I want Prettier to be the top-priority formatter. I'm not sure if it was running both of them but placing Prettier last, running Beautify on its own instead, or something else entirely; I couldn't figure it out at the time.

The trick for me was to tell it to only care about certain file types!

In my settings.json:

{ ... "beautify.language": { "html": ["html", "php", "erb"], "css": [], "js": [] } }

Now I can have Prettier doing all the languages it does, and fall back to Beautify for the HTML stuff that Prettier doesn't do yet. So far so good.
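
If you also want to be explicit about which formatter wins per language, one approach is to pin default formatters in settings.json. The extension identifiers below are the commonly published ones (esbenp.prettier-vscode for Prettier, HookyQR.beautify for Beautify); treat this as a sketch and verify them in your own setup:

{
  // Sketch only: extension IDs are assumptions - check the ones installed in your editor
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true,
  "[html]": {
    "editor.defaultFormatter": "HookyQR.beautify"
  }
}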


Categories: Web Technologies
