
Persistence of autoinc fixed in MySQL 8.0

Planet MySQL - Mon, 10/08/2018 - 09:00

The release of MySQL 8.0 has brought a lot of bold implementations that touched on things that have been avoided before, such as added support for common table expressions and window functions. Another example is the change in how AUTO_INCREMENT (autoinc) sequences are persisted, and thus replicated.

This new implementation carries the fix for bug #73563 (Replace result in auto_increment value less or equal than max value in row-based), which we only found out about recently. The surprising part is that the use case we were analyzing is a fairly common one; this must be affecting a good number of people out there.

Understanding the bug

The business logic of the use case is such that a UNIQUE column in a table whose id is managed by an AUTO_INCREMENT sequence needs to be updated, and this is done with a REPLACE operation:

“REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted.”

So, what happens in practice in this particular case is a DELETE followed by an INSERT of the target row.

We will explore this scenario here in the context of an oversimplified currency converter application that uses USD as base reference:

CREATE TABLE exchange_rate (
  id INT PRIMARY KEY AUTO_INCREMENT,
  currency VARCHAR(3) UNIQUE,
  rate FLOAT(5,3)
) ENGINE=InnoDB;

Let’s add a trio of rows to this new table:

INSERT INTO exchange_rate (currency,rate) VALUES ('EUR',0.854), ('GBP',0.767), ('BRL',4.107);

which gives us the following initial set:

master (test) > select * from exchange_rate;
+----+----------+-------+
| id | currency | rate  |
+----+----------+-------+
|  1 | EUR      | 0.854 |
|  2 | GBP      | 0.767 |
|  3 | BRL      | 4.107 |
+----+----------+-------+
3 rows in set (0.00 sec)

Now we update the rate for Brazilian Reais using a REPLACE operation:

REPLACE INTO exchange_rate SET currency='BRL', rate=4.500;

With currency being a UNIQUE field the row is fully replaced:

master (test) > select * from exchange_rate;
+----+----------+-------+
| id | currency | rate  |
+----+----------+-------+
|  1 | EUR      | 0.854 |
|  2 | GBP      | 0.767 |
|  4 | BRL      | 4.500 |
+----+----------+-------+
3 rows in set (0.00 sec)

and thus the autoinc sequence is updated:

master (test) > SHOW CREATE TABLE exchange_rate\G
*************************** 1. row ***************************
       Table: exchange_rate
Create Table: CREATE TABLE `exchange_rate` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `currency` varchar(3) DEFAULT NULL,
  `rate` float(5,3) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `currency` (`currency`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

The problem is that the autoinc sequence is not also updated on the replica:

slave1 (test) > select * from exchange_rate; show create table exchange_rate\G
+----+----------+-------+
| id | currency | rate  |
+----+----------+-------+
|  1 | EUR      | 0.854 |
|  2 | GBP      | 0.767 |
|  4 | BRL      | 4.500 |
+----+----------+-------+
3 rows in set (0.00 sec)
*************************** 1. row ***************************
       Table: exchange_rate
Create Table: CREATE TABLE `exchange_rate` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `currency` varchar(3) DEFAULT NULL,
  `rate` float(5,3) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `currency` (`currency`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Now, the moment we promote that replica to master and start writing to this table, we’ll hit a duplicate key error:

slave1 (test) > REPLACE INTO exchange_rate SET currency='BRL', rate=4.600;
ERROR 1062 (23000): Duplicate entry '4' for key 'PRIMARY'

Note that:

a) the transaction fails and the row is not replaced; however, the autoinc sequence is incremented:

slave1 (test) > SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE table_schema='test' AND table_name='exchange_rate';
+----------------+
| AUTO_INCREMENT |
+----------------+
|              5 |
+----------------+
1 row in set (0.00 sec)

b) this problem only happens with row-based replication (binlog_format=ROW), where REPLACE in this case is logged as a row UPDATE:

# at 6129
#180829 18:29:55 server id 100  end_log_pos 5978 CRC32 0x88da50ba  Update_rows: table id 117 flags: STMT_END_F
### UPDATE `test`.`exchange_rate`
### WHERE
###   @1=3 /* INT meta=0 nullable=0 is_null=0 */
###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */
###   @3=4.107 /* FLOAT meta=4 nullable=1 is_null=0 */
### SET
###   @1=4 /* INT meta=0 nullable=0 is_null=0 */
###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */
###   @3=4.5 /* FLOAT meta=4 nullable=1 is_null=0 */

With statement-based replication—or even mixed format—the REPLACE statement is replicated as is: it will trigger a DELETE+INSERT in the background on the replica and thus update the autoinc sequence in the same way it did on the master.
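To confirm which format a server is using (and, if you go that route, to switch it), something along these lines should work; note that changing binlog_format on a live server only affects new sessions and needs to be planned carefully, so treat this as a sketch:

SHOW GLOBAL VARIABLES LIKE 'binlog_format';
-- switching to mixed format (requires SUPER; takes effect for new sessions only)
SET GLOBAL binlog_format = 'MIXED';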

This example (tested with Percona Server versions 5.5.61, 5.6.36 and 5.7.22) helps illustrate the issue with autoinc sequences not being persisted as they should be with row-based replication. However, MySQL’s Worklog #6204 includes a couple of scarier scenarios involving the master itself, such as when the server crashes while a transaction is writing to a table similar to the one used in the example above. MySQL 8.0 remedies this bug.

Workarounds

There are a few possible workarounds to consider if this problem is impacting you and neither upgrading to the 8 series nor resorting to statement-based or mixed replication format is a viable option.

We’ll be discussing three of them here: one that revolves around running checks before a failover (to detect and fix autoinc inconsistencies in replicas); another that requires reviewing all REPLACE statements like the one from our example and adapting them to include the id field, thus avoiding the bug; and finally one that requires changing the schema of affected tables so that the target field becomes the Primary Key of the table while id (autoinc) is converted into a UNIQUE key.

a) Detect and fix

The least intrusive of the workarounds we conceived for the problem at hand, in terms of query and schema changes, is to run a check for each of the tables that might be facing this issue on a replica before we promote it to master in a failover scenario:

slave1 (test) > SELECT ((SELECT MAX(id) FROM exchange_rate)>=(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE table_schema='test' AND table_name='exchange_rate')) as `check`;
+-------+
| check |
+-------+
|     1 |
+-------+
1 row in set (0.00 sec)

If the table does not pass the test, like ours didn’t at first (just before we attempted a REPLACE after we failed over to the replica), then update autoinc accordingly. The full routine (check + update of autoinc) could be made into a single stored procedure:

DELIMITER //
CREATE PROCEDURE CheckAndFixAutoinc()
BEGIN
 DECLARE done TINYINT UNSIGNED DEFAULT 0;
 DECLARE tableschema VARCHAR(64);
 DECLARE tablename VARCHAR(64);
 DECLARE columnname VARCHAR(64);
 DECLARE cursor1 CURSOR FOR SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME FROM information_schema.COLUMNS WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys') AND EXTRA LIKE '%auto_increment%';
 DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;
 OPEN cursor1;
 start_loop: LOOP
   IF done THEN
     LEAVE start_loop;
   END IF;
   FETCH cursor1 INTO tableschema, tablename, columnname;
   SET @get_autoinc = CONCAT('SELECT @check1 := ((SELECT MAX(', columnname, ') FROM ', tableschema, '.', tablename, ')>=(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=\'', tableschema, '\' AND TABLE_NAME=\'', tablename, '\')) as `check`');
   PREPARE stm FROM @get_autoinc;
   EXECUTE stm;
   DEALLOCATE PREPARE stm;
   IF @check1>0 THEN
     BEGIN
       SET @select_max_id = CONCAT('SELECT @max_id := MAX(', columnname, ')+1 FROM ', tableschema, '.', tablename);
       PREPARE select_max_id FROM @select_max_id;
       EXECUTE select_max_id;
       DEALLOCATE PREPARE select_max_id;
       SET @update_autoinc = CONCAT('ALTER TABLE ', tableschema, '.', tablename, ' AUTO_INCREMENT=', @max_id);
       PREPARE update_autoinc FROM @update_autoinc;
       EXECUTE update_autoinc;
       DEALLOCATE PREPARE update_autoinc;
     END;
   END IF;
 END LOOP start_loop;
 CLOSE cursor1;
END//
DELIMITER ;
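
Once the procedure is in place, the pre-failover check-and-fix boils down to a single call (a usage sketch, assuming the procedure was created in a schema the session can access):

CALL CheckAndFixAutoinc();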

It doesn’t allow for as clean a failover as we would like, but it can be helpful if you’re stuck with MySQL < 8.0 and binlog_format=ROW and cannot make changes to your queries or schema.

b) Include Primary Key in REPLACE statements

If we had explicitly included the id (Primary Key) in the REPLACE operation from our example, it would also have been replicated as a DELETE+INSERT, even with binlog_format=ROW:

master (test) > REPLACE INTO exchange_rate SET currency='BRL', rate=4.500, id=3;

# at 16151
#180905 13:32:17 server id 100  end_log_pos 15986 CRC32 0x1d819ae9  Write_rows: table id 117 flags: STMT_END_F
### DELETE FROM `test`.`exchange_rate`
### WHERE
###   @1=3 /* INT meta=0 nullable=0 is_null=0 */
###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */
###   @3=4.107 /* FLOAT meta=4 nullable=1 is_null=0 */
### INSERT INTO `test`.`exchange_rate`
### SET
###   @1=3 /* INT meta=0 nullable=0 is_null=0 */
###   @2='BRL' /* VARSTRING(3) meta=3 nullable=1 is_null=0 */
###   @3=4.5 /* FLOAT meta=4 nullable=1 is_null=0 */
# at 16199
#180905 13:32:17 server id 100  end_log_pos 16017 CRC32 0xf11fed56  Xid = 184
COMMIT/*!*/;

We could point out that we are doing it wrong by not including the id in the REPLACE statement in the first place; the reason for not doing so is mostly to avoid an extra lookup for each replace (to obtain the id of the currency we want to update). On the other hand, what if your business logic does expect the id to change with each REPLACE? You should keep such a requirement in mind when considering this workaround, as it is effectively a functional change from what we had initially.
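
For illustration only (this is not from the original post), the extra lookup the authors mention could look roughly like this, fetching the current id for the currency before issuing the REPLACE:

-- hypothetical application-side flow: look up the id first, then include it in the REPLACE
SELECT id INTO @brl_id FROM exchange_rate WHERE currency='BRL';
REPLACE INTO exchange_rate SET id=@brl_id, currency='BRL', rate=4.500;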

c) Make the target field the Primary Key and keep autoinc as a UNIQUE key

If we make currency the Primary Key of our table and id a UNIQUE key instead:

CREATE TABLE exchange_rate ( id INT UNIQUE AUTO_INCREMENT, currency VARCHAR(3) PRIMARY KEY, rate FLOAT(5,3) ) ENGINE=InnoDB;

the same REPLACE operation will be replicated as a DELETE+INSERT too:

# at 19390
#180905 14:03:56 server id 100  end_log_pos 19225 CRC32 0x7042dcd5  Write_rows: table id 131 flags: STMT_END_F
### DELETE FROM `test`.`exchange_rate`
### WHERE
###   @1=3 /* INT meta=0 nullable=0 is_null=0 */
###   @2='BRL' /* VARSTRING(3) meta=3 nullable=0 is_null=0 */
###   @3=4.107 /* FLOAT meta=4 nullable=1 is_null=0 */
### INSERT INTO `test`.`exchange_rate`
### SET
###   @1=4 /* INT meta=0 nullable=0 is_null=0 */
###   @2='BRL' /* VARSTRING(3) meta=3 nullable=0 is_null=0 */
###   @3=4.5 /* FLOAT meta=4 nullable=1 is_null=0 */
# at 19438
#180905 14:03:56 server id 100  end_log_pos 19256 CRC32 0x79efc619  Xid = 218
COMMIT/*!*/;

Of course, the same would be true if we had simply removed id from the table entirely and kept currency as the Primary Key. That would work in our particular test example, but it won’t always be the case. Please note, though, that if you do keep id on the table you must make it a UNIQUE key: this workaround is based on the fact that this key becomes a second unique constraint, which triggers a different code path to log a replace operation. Had we made it a simple, non-unique key instead, that wouldn’t be the case.
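
For an existing table, the conversion could be done with a single ALTER statement along these lines (a sketch only: currency must be made NOT NULL to become the Primary Key, the statement rebuilds the table, and it should be tested before running in production):

ALTER TABLE exchange_rate
  MODIFY currency VARCHAR(3) NOT NULL,
  DROP PRIMARY KEY,
  DROP KEY currency,
  ADD PRIMARY KEY (currency),
  ADD UNIQUE KEY (id);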

If you have any comments or suggestions about the issue addressed in this post, the workarounds we propose, or even a different view of the problem you would like to share please leave a comment in the section below.

Co-Author: Trey Raymond

Trey Raymond is a Sr. Database Engineer for Oath Inc. (née Yahoo!), specializing in MySQL. Since 2010, he has worked to build the company’s database platform and supporting team into industry leaders.

While a performance guru at heart, his experience and responsibilities range from hardware and capacity planning all through the stack to database tool and utility development.

He has a reputation for breaking things to learn something new.

Co-Author: Fernando Laudares

Fernando is a Senior Support Engineer with Percona. Fernando’s work experience includes the architecture, deployment and maintenance of IT infrastructures based on Linux, open source software and a layer of server virtualization. He’s now focusing on the universe of MySQL, MongoDB and PostgreSQL with a particular interest in understanding the intricacies of database systems, and contributes regularly to this blog. You can read his other articles here.


Using Recompose to Share Functionality Between React Components

CSS-Tricks - Mon, 10/08/2018 - 06:46

Sharing functionality between React components is a pretty common need. The concept is that we can establish the behavior in one place and then extend it across different components. Higher-Order Components are one way to do this. Yet, there is another way using a library called Recompose.

GitHub Repo

What is Recompose?

The documentation helps us answer that:

Recompose is a React utility belt for function components and higher-order components. Think of it like lodash for React.

Basically, it’s a library for React that contains a bunch of helpers that return different higher-order components — which is nice because it takes some of the grunt work out of defining common React patterns and making them immediately available to extend to other components.

What exactly can it do? Well, let’s walk through a few examples together.

Add state to functional stateless components

If you couldn’t guess it by the name, a functional stateless component does not have any state. It merely accepts props and returns UI to the front end.

const Greeting = props => <p> Hello, {props.name}! </p>

In scenarios where you want to make use of state in your functional stateless component, you have to convert it to a class component. This is where Recompose comes in handy.

Recompose provides the withState() helper to add state to your functional stateless components. It manages a single state value; you cannot manage multiple state values with withState() the way you would in a class component. You pass in a function to update the state value and an optional default state value.

Here’s how withState() is structured:

withState(
  stateName: string,         // the name we call our state
  stateUpdaterName: string,  // the name of the function to call
  initialState: any | (props: Object) => any  // optional default state to pass
): HigherOrderComponent

A counter is a common example that’s used to demonstrate a concept. Here’s how we can create a super simple counter using Recompose’s withState() helper:

const App = withState("count", "handleCounter", 0)(({ count, handleCounter }) => {
  return (
    <div>
      <p>{count}</p>
      <button onClick={() => handleCounter(n => n + 1)}>Increment</button>
      <button onClick={() => handleCounter(n => n - 1)}>Decrement</button>
    </div>
  );
});

Since the withState() helper is already available to us, we can call it right away and provide it with the parameters it needs. Again, those are:

  • stateName: The name we call our state
  • stateUpdaterName: The name of the function to call
  • initialState: Optional default state to pass

Those parameters are then integrated into the UI markup we want to render on the front end.

There’s another way we could have made our counter component and it’s worth looking at to get more practice putting a Recompose helper to use.

First, we create a higher-order component using withState() and the required parameters.

const enhanced = withState("counter", "handleCounter", 0);

Next, we make the Counter component, working in the withState() parameters:

const Counter = ({ counter, handleCounter }) => (
  <div>
    <h1>{counter}</h1>
    <button onClick={() => handleCounter(n => n + 1)}>Increment</button>
    <button onClick={() => handleCounter(n => n - 1)}>Decrement</button>
  </div>
);

Note that the name of the state and updater function is passed as props to the Counter component.

Finally, we create our App component by wrapping the Counter component in the higher-order enhanced component.

const App = enhanced(Counter);

See the Pen Recompose withState by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Here is another popular approach:

const enhanced = withState("count", "handleCounter", 0); const App = enhanced(({ count, handleCounter }) => { return ( <div> <p>{count}</p> <button onClick={() => handleCounter(n => n + 1)}>Increment</button> <button onClick={() => handleCounter(n => n - 1)}>Decrement</button> </div> ); });

See the Pen Recompose withState v2 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Handle state using withHandlers()

Recompose also has a withHandlers() helper that allows you to handle state by defining functions that will be used to update a component’s state. And you can use it right alongside withState()!

These are basically higher-order functions that take in props and return a function handler. Let’s break down the structure like we did in the previous example.

withHandlers({
  incrementCounter: props => event => {
    event.preventDefault();
    props.handleCounter(props.count + 1);
  }
})

First off, we’ve identified incrementCounter, which will be available to our Counter component to update the count value on click.

Next, we construct the counter like we did before — as a higher-order component using withState():

const enhancedState = withState("count", "handleCounter", 0);

Now, we define our functions putting withHandlers() to use:

const enhancedHandler = withHandlers({
  incrementCounter: props => event => {
    event.preventDefault();
    props.handleCounter(props.count + 1);
  },
  decrementCounter: props => event => {
    event.preventDefault();
    props.handleCounter(props.count - 1);
  }
});

We’ve constructed a higher-order component we’re calling enhancedHandler and used withHandlers() to define two function handlers in it: incrementCounter and decrementCounter. These handlers receive the component’s props, which they need in order to update the state of the component defined using withState().

Alright, on to creating our counter component:

const Counter = ({ count, incrementCounter, decrementCounter }) => (
  <div>
    <h1>{count}</h1>
    <button onClick={incrementCounter}>Increment</button>
    <button onClick={decrementCounter}>Decrement</button>
  </div>
);

See that? The state and handlers are expected to be passed as props to the counter component.

To make use of the higher-order components we defined, we have to pass the Counter component to enhancedHandler and wrap that as a parameter to enhancedState.

In other words:

const App = enhancedState(enhancedHandler(Counter));

See the Pen Recompose withState & withHandlers by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Composing multiple higher-order components

In that last example, we made use of two higher-order components. Is there a better way of chaining them together? Most definitely! Recompose provides us with a compose() helper to do exactly that. We can use compose() to create a component that composes both higher-order components in one fell swoop.

const enhanced = compose(
  withState("count", "handleCounter", 0),
  withHandlers({
    incrementCounter: props => event => {
      event.preventDefault();
      props.handleCounter(props.count + 1);
    },
    decrementCounter: props => event => {
      event.preventDefault();
      props.handleCounter(props.count - 1);
    }
  })
);

Now, we can use the enhanced component in our App component:

const App = enhanced(({ count, incrementCounter, decrementCounter }) => {
  return (
    <div>
      <p>{count}</p>
      <button onClick={incrementCounter}>Increment</button>
      <button onClick={decrementCounter}>Decrement</button>
    </div>
  );
});

See the Pen Recompose - compose withState & withHandlers by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Manage state using a Redux-like reducer

Another nice thing Recompose does is allow you to manage state using a reducer function (withReducer). A reducer updates the state of a component in response to a particular action.

The structure of the withReducer() looks like this:

withReducer<S, A>(
  stateName: string,
  dispatchName: string,
  reducer: (state: S, action: A) => S,
  initialState: S | (ownerProps: Object) => S
): HigherOrderComponent

The first parameter is the name of the state. The second is the dispatch method. dispatch will be used in dispatching actions like we have in Redux. Next, we have the reducer and the initial state.

In the context of our counter component, withReducer() will update the state of the count: the count will go up with an action we’ll call increment and, conversely, the count will go down with an action we’ll call decrement.

First, we create an enhanced component by composing withReducer and withHandlers.

const enhanced = compose(
  withReducer(
    "count",
    "dispatch",
    (state, action) => {
      switch (action.type) {
        case "INCREMENT":
          return state + 1;
        case "DECREMENT":
          return state - 1;
        default:
          return state;
      }
    },
    0
  ),
  withHandlers({
    incrementCounter: ({ dispatch }) => e => dispatch({ type: "INCREMENT" }),
    decrementCounter: ({ dispatch }) => e => dispatch({ type: "DECREMENT" })
  })
);

incrementCounter and decrementCounter will respond to DOM events and dispatch an action type. The state will be updated depending on the action type. Next, we need to make use of this in our component.

const App = enhanced(({ count, incrementCounter, decrementCounter }) => {
  return (
    <div>
      <p>{count}</p>
      <button onClick={incrementCounter}>Increment</button>
      <button onClick={decrementCounter}>Decrement</button>
    </div>
  );
});

See the Pen Recompose - reducer by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Pretty neat, right?

Hopefully this gives you a good idea of what Recompose is and how the library’s wealth of helpers can streamline React development, particularly when it comes to managing and updating state.

Naturally, there’s a lot more to Recompose than what we’ve covered here. A quick scan of the API documentation will show that there are many higher-order components that you can put to use, depending on your application’s requirements. Go forth, build, and please feel free to drop a comment in if you have any questions!

The post Using Recompose to Share Functionality Between React Components appeared first on CSS-Tricks.


Vuesax 3.5.0 released

Echo JS - Mon, 10/08/2018 - 06:03

World outside of console.log

Echo JS - Sat, 10/06/2018 - 11:42

Reduce MySQL Memory Utilization With ProxySQL Multiplexing

Planet MySQL - Sat, 10/06/2018 - 09:28
MySQL Adventures: Reduce MySQL Memory Utilization With ProxySQL Multiplexing

In our previous post, we explained how max_prepared_stmt_count can bring production down. This blog is a continuation of that post; you can read it at the link below.

How max_prepared_stmt_count bring down the production MySQL system

We had set max_prepared_stmt_count to 20000, but after that we kept facing the below error continuously.

Can't create more than max_prepared_stmt_count statements (current value: 20000)

We tried to increase it to 25000, 30000 and finally 50000. Unfortunately that didn’t fix it, and increasing this value further leads to the memory leak we explained in our previous blog.

We are using ProxySQL to access the database servers. And the architecture looks like below.

Multiplexing in ProxySQL: The main purpose of multiplexing is to reduce the number of connections to the MySQL servers, so thousands of frontend connections can be served by only a hundred or so backend connections.

We enabled multiplexing while setting up the database environment. But multiplexing does not apply all the time: ProxySQL tracks the transactions executing on each connection, and if a transaction has not been committed or rolled back it will not reuse that connection for the next request; it will pick another free connection from the connection pool.
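
Multiplexing is governed by a global variable in ProxySQL's admin interface; a quick way to confirm it is enabled (a sketch, run from the admin port) is:

SELECT variable_name, variable_value
FROM global_variables
WHERE variable_name = 'mysql-multiplexing';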

From the ProxySQL docs, there are a few scenarios where multiplexing is disabled:

  • active transaction
  • Tables are locked.
  • Set queries (like SET FOREIGN_KEY_CHECKS)

In our case, most of the errors were due to the prepared statement count. Believe it or not, this issue ultimately led us to reduce memory utilization as well.

Get the currently active prepared statements:

Run the below query, which will tell us the number of active prepared statements in the backend.

SELECT
*
FROM
stats_mysql_global
WHERE
variable_name LIKE '%stmt%';

+---------------------------+----------------+
| Variable_Name | Variable_Value |
+---------------------------+----------------+
| Com_backend_stmt_prepare | 168318911 |
| Com_backend_stmt_execute | 143165882 |
| Com_backend_stmt_close | 0 |
| Com_frontend_stmt_prepare | 171609010 |
| Com_frontend_stmt_execute | 171234713 |
| Com_frontend_stmt_close | 171234006 |
| Stmt_Client_Active_Total | 19983 |
| Stmt_Client_Active_Unique | 4 |
| Stmt_Server_Active_Total | 84 |
| Stmt_Server_Active_Unique | 23 |
| Stmt_Max_Stmt_id | 10002 |
| Stmt_Cached | 2186 |
+---------------------------+----------------+
# You can get the same results in MySQL also
mysql> show status like 'Prepared_stmt_count';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| Prepared_stmt_count | 19983 |
+---------------------+-------+

You can see the number of active prepared statements is 19983. Running the query again and again, I saw a slightly different count each time, but it was always above 19000. And you can see that Com_backend_stmt_close is 0.

Yes, ProxySQL never closes the prepared statements in the backend. But there is a mechanism in ProxySQL which allocates 20 prepared statements (20 is the default value) to each connection. Once a connection has executed all 20 statements, it goes back to the connection pool and all 20 statements are closed in one shot.

Run the below query to get the default statement count for a connection.

proxysql> show variables like "%stmt%";

+--------------------------------+-------+
| Variable_name | Value |
+--------------------------------+-------+
| mysql-max_stmts_per_connection | 20 |
| mysql-max_stmts_cache | 10000 |
+--------------------------------+-------+

There is a great explanation of this variable by René Cannaò, the founder of ProxySQL. You can read about it here.
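
If the default of 20 turns out to be too low for a workload, that per-connection limit can be raised from the ProxySQL admin interface (a hedged sketch; the right value depends on your traffic):

UPDATE global_variables SET variable_value = 50 WHERE variable_name = 'mysql-max_stmts_per_connection';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;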

Why are so many prepared statements running?

As mentioned earlier, ProxySQL never closes the prepared statements in the backend. We realized that we were getting heavy traffic on both ProxySQL servers, all of it sent to one master node. Also, Laravel issues all of its queries as prepared statements. That’s why we ended up with so many prepared statements.

Find where most of the prepared statements are used:

Run the below query in ProxySQL; it will give you the total count of executed prepared statements per database and username.

SELECT
username, schemaname, count(*)
FROM
stats_mysql_prepared_statements_info
GROUP BY
1, 2
ORDER BY
3 DESC;

+----------+---------------+----------+
| username | schemaname | count(*) |
+----------+---------------+----------+
| DBname | user1 | 2850 |
| DBname | user2 | 257 |
| DBname | user3 | 108 |
| DBname | user1 | 33 |
| DBname | user2 | 33 |
| DBname | user1 | 16 |
| DBname | user1 | 15 |
+----------+---------------+----------+

Note: there is a bug in this view: the username column actually shows the schema name and the schemaname column shows usernames. I have reported this bug in ProxySQL's GitHub repo: https://github.com/sysown/proxysql/issues/1720

Force ProxySQL to use multiplexing:

There are a few cases where ProxySQL disables multiplexing. In particular, any query that has @ in its query_digest will disable multiplexing.

Collect the queries which have @:

SELECT
DISTINCT digest, digest_text
FROM
stats_mysql_query_digest
WHERE
digest_text LIKE '%@%' \G;
*************************** 1. row ***************************
digest: 0xA8F2FFB14312850C
digest_text: SELECT @@max_allowed_packet
*************************** 2. row ***************************
digest: 0x7CDEEF2FF695B7F8
digest_text: SELECT @@session.tx_isolation
*************************** 3. row ***************************
digest: 0x2B838C3B5DE79958
digest_text: SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout

Finally, the above statements are executed as prepared statements, and by default such queries disable multiplexing. But ProxySQL has another nice feature: we can allow these queries (the ones with @) to keep using multiplexing.

Run the below queries to make ProxySQL use multiplexing for these statements. We can match them by query pattern or by query digest.

# Add multiplexing to a query using a query pattern (match_digest)
INSERT INTO mysql_query_rules
(active, match_digest, multiplex)
VALUES
('1', '^SELECT @@session.tx_isolation', 2);

INSERT INTO mysql_query_rules
(active, match_digest, multiplex)
VALUES
('1', '^SELECT @@max_allowed_packet', 2);

# Add multiplexing to a query using its query digest
INSERT INTO mysql_query_rules
(active, digest, multiplex)
VALUES
('1', '0x2B838C3B5DE79958', 2);

# Save it to runtime and disk
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Restart ProxySQL:

If we enable multiplexing in ProxySQL, it is mandatory to restart the ProxySQL service, or to close all the existing connections and open new ones.

service proxysql restart

Let’s check the active prepared statements in both ProxySQL and MySQL.

# ProxySQL
SELECT
*
FROM
stats_mysql_global
WHERE
variable_name LIKE '%stmt%';

+--------------------------+----------------+
| Variable_Name | Variable_Value |
+--------------------------+----------------+
| Stmt_Client_Active_Total | 6 |
+--------------------------+----------------+

# MySQL
show status like 'Prepared_stmt_count';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| Prepared_stmt_count | 166 |
+---------------------+-------+

The Prepared_stmt_count dropped dramatically, from around 20000 to under 200. But why is there a different count between ProxySQL and MySQL?

Again, refer to ProxySQL’s docs: only once a connection has executed 20 statements does it close those prepared statements. ProxySQL is showing active statements, while MySQL is showing the count of active plus already-executed statements.

Number of Backend connections:

After this change, there was a sudden drop in the number of backend connections in ProxySQL. This proves that the statements which disable multiplexing create more backend connections.
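
The drop can also be confirmed from ProxySQL's connection pool statistics (a sketch; the ConnUsed and ConnFree columns show the per-backend connection counts):

SELECT hostgroup, srv_host, ConnUsed, ConnFree, ConnOK, ConnERR
FROM stats_mysql_connection_pool;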

MySQL’s Memory:

Now we get to the explanation for this blog title. See the below graph, which shows the large memory drop.

The two main parts where mysql used more memory:

  1. mysql connections.
  2. Prepared statements.

We all know that each MySQL connection requires some amount of memory, and for prepared statements I explained the memory consumption in my previous blog.

Conclusion:

ProxySQL has a lot of great features, and multiplexing is one of them. But only after this incident did we realize how much multiplexing helps to reduce the number of backend connections and MySQL’s memory utilization.

I hope that if you are a DBA you will read this and implement it in your environment as well. If it helped you, feel free to give your claps.

Reduce MySQL Memory Utilization With ProxySQL Multiplexing was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.


How max_prepared_stmt_count bring down the production MySQL system

Planet MySQL - Sat, 10/06/2018 - 09:22
MySQL Adventures: How max_prepared_stmt_count can bring down production

We recently moved an On-Prem environment to GCP for better scalability and availability. The customer’s main database is MySQL. Due to the nature of customer’s business, it’s a highly transactional workload (one of the hot startups in APAC). To deal with the scale and meet availability requirements, we have deployed MySQL behind ProxySQL — which takes care of routing some of the resource intensive SELECTs to chosen replicas. The setup consists of:

  • One Master
  • Two slaves
  • One Archive database server

Post migration to GCP, everything was nice and calm for a couple of weeks, until MySQL decided to start misbehaving, leading to an outage. We were able to quickly resolve it and bring the system back online, and what follows are lessons from this experience.

The configuration of the Database:
  • CentOS 7.
  • MySQL 5.6
  • 32 Core CPU
  • 120GB Memory
  • 1 TB SSD for MySQL data volume.
  • The total database size is 40GB. (yeah, it is small in size, but highly transactional)
  • my.cnf is configured using Percona’s configuration wizard.
  • All tables are InnoDB Engine
  • No SWAP partitions.
The Problem

It all started with an alert that said MySQL process was killed by Linux’s OOM Killer. Apparently MySQL was rapidly consuming all the memory (about 120G) and OOM killer perceived it as a threat to the stability of the system and killed the process. We were perplexed and started investigating.

Sep 11 06:56:39 mysql-master-node kernel: Out of memory: Kill process 4234 (mysqld) score 980 or sacrifice child
Sep 11 06:56:39 mysql-master-node kernel: Killed process 4234 (mysqld) total-vm:199923400kB, anon-rss:120910528kB, file-rss:0kB, shmem-rss:0kB
Sep 11 06:57:00 mysql-master-node mysqld: /usr/bin/mysqld_safe: line 183: 4234 Killed nohup /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock < /dev/null > /dev/null 2>&1

Naturally, we started looking at the MySQL configuration to see if something was off.

InnoDB Parameters:

innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 512M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 100G

Other Caching Parameters:

tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 50

We are not really using the query cache, and one of the heavy frontend services is PHP Laravel.

Here is the memory utilization graph.

The three highlighted areas are the points at which we had issues in production. The second issue happened very shortly after, so we reduced innodb-buffer-pool-size to 90GB. But even then the memory utilization never came down, so we scheduled a cronjob to flush the OS cache, at least to give some additional memory back to the operating system, using the following command. This was a temporary measure until we found the actual problem.

sync; echo 3 > /proc/sys/vm/drop_caches

But this didn’t really help. The memory kept growing, and we had to look at what was really inside the OS cache.

Fincore:

There is a tool called fincore that helped me find out what the OS cache actually held. It uses Perl modules; use the below commands to install it.

yum install perl-Inline
rpm -ivh http://fr2.rpmfind.net/linux/dag/redhat/el6/en/x86_64/dag/RPMS/fincore-1.9-1.el6.rf.x86_64.rpm

It never directly shows what files are inside the buffer/cache. We instead have to manually give the path and it’ll check what files are in the cache for that location. I wanted to check about Cached files for the mysql data directory.

cd /mysql-data-directory
fincore -summary * > /tmp/cache_results

Here is the sample output of the cached files results.

page size: 4096 bytes
auto.cnf: 1 incore page: 0
dbadmin: no incore pages.
Eztaxi: no incore pages.
ibdata1: no incore pages.
ib_logfile0: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
ib_logfile1: 131072 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
mysql: no incore pages.
mysql-bin.000599: 8 incore pages: 0 1 2 3 4 5 6 7
mysql-bin.000600: no incore pages.
mysql-bin.000601: no incore pages.
mysql-bin.000602: no incore pages.
mysql-bin.000858: 232336 incore pages: 0 1 2 3 4 5 6 7 8 9 10......
mysqld-relay-bin.000001: no incore pages.
mysqld-relay-bin.index: no incore pages.
mysql-error.log: 4 incore pages: 0 1 2 3
mysql-general.log: no incore pages.
mysql.pid: no incore pages.
mysql-slow.log: no incore pages.
mysql.sock: no incore pages.
ON: no incore pages.
performance_schema: no incore pages.
mysql-production.pid: 1 incore page: 0

6621994 pages, 25.3 Gbytes in core for 305 files; 21711.46 pages, 4.8 Mbytes per file.

(The highlighted points on the memory graph show when the OS cache was cleared.)

How we investigated this issue:

The first document everyone refers to is How MySQL Uses Memory from MySQL’s documentation. So we started with all the places where MySQL needs memory. I’ll cover that in a different blog; let’s continue with the steps we took.

Make sure MySQL is the culprit:

Run the below command and this will give you the exact memory consumption about MySQL.

ps --no-headers -o "rss,cmd" -C mysqld | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'

119808M

Additional Tips:

If you want to know the memory utilization of each of MySQL’s threads, run the below commands.

# Get the PID of MySQL:
ps aux | grep mysqld

mysql 4378 41.1 76.7 56670968 47314448 ? Sl Sep12 6955:40 /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock

# Get all threads memory usage:
pmap -x 4378

4378: /usr/sbin/mysqld --basedir=/usr --datadir=/mysqldata --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/mysqldata/mysql.sock
Address Kbytes RSS Dirty Mode Mapping
0000000000400000 11828 4712 0 r-x-- mysqld
000000000118d000 1720 760 476 rw--- mysqld
000000000133b000 336 312 312 rw--- [ anon ]
0000000002b62000 1282388 1282384 1282384 rw--- [ anon ]
00007fd4b4000000 47816 37348 37348 rw--- [ anon ]
00007fd4b6eb2000 17720 0 0 ----- [ anon ]
00007fd4bc000000 48612 35364 35364 rw--- [ anon ]
.........
.........
.........
00007fe1f0075000 2044 0 0 ----- libpthread-2.17.so
00007fe1f0274000 4 4 4 r---- libpthread-2.17.so
00007fe1f0275000 4 4 4 rw--- libpthread-2.17.so
00007fe1f0276000 16 4 4 rw--- [ anon ]
00007fe1f027a000 136 120 0 r-x-- ld-2.17.so
00007fe1f029c000 2012 2008 2008 rw--- [ anon ]
00007fe1f0493000 12 4 0 rw-s- [aio] (deleted)
00007fe1f0496000 12 4 0 rw-s- [aio] (deleted)
00007fe1f0499000 4 0 0 rw-s- [aio] (deleted)
00007fe1f049a000 4 4 4 rw--- [ anon ]
00007fe1f049b000 4 4 4 r---- ld-2.17.so
00007fe1f049c000 4 4 4 rw--- ld-2.17.so
00007fe1f049d000 4 4 4 rw--- [ anon ]
00007ffecc0f1000 132 72 72 rw--- [ stack ]
00007ffecc163000 8 4 0 r-x-- [ anon ]
ffffffffff600000 4 0 0 r-x-- [ anon ]
---------------- ------- ------- -------
total kB        122683392 47326976 47320388

InnoDB Buffer Pool:

Initially we suspected InnoDB. We checked the InnoDB usage from the monitoring system, but the result was negative: it never utilized more than 40GB. That thickened the plot. If the buffer pool only held 40GB, who was eating all that memory?

Is this correct? Does Buffer Pool only hold 40GB?

What’s inside the buffer pool, and what is its size?

SELECT
page_type AS page_type,
sum(data_size) / 1024 / 1024 AS size_in_mb
FROM
information_schema.innodb_buffer_page
GROUP BY
page_type
ORDER BY
size_in_mb DESC;

+-------------------+----------------+
| Page_Type | Size_in_MB |
+-------------------+----------------+
| INDEX | 39147.63660717 |
| IBUF_INDEX | 0.74043560 |
| UNDO_LOG | 0.00000000 |
| TRX_SYSTEM | 0.00000000 |
| ALLOCATED | 0.00000000 |
| INODE | 0.00000000 |
| BLOB | 0.00000000 |
| IBUF_BITMAP | 0.00000000 |
| EXTENT_DESCRIPTOR | 0.00000000 |
| SYSTEM | 0.00000000 |
| UNKNOWN | 0.00000000 |
| FILE_SPACE_HEADER | 0.00000000 |
+-------------------+----------------+

A quick guide about this query.

  • INDEX: B-Tree index
  • IBUF_INDEX: Insert buffer index
  • UNKNOWN: not allocated / unknown state
  • TRX_SYSTEM: transaction system data
Bonus:

To get the buffer pool usage by index

SELECT
table_name AS table_name,
index_name AS index_name,
count(*) AS page_count,
sum(data_size) / 1024 / 1024 AS size_in_mb
FROM
information_schema.innodb_buffer_page
GROUP BY
table_name, index_name
ORDER BY
size_in_mb DESC;

Then where was MySQL holding the memory?

We checked all of the areas where MySQL utilizes memory. Here is a rough calculation of the memory utilization at the time of the MySQL crash.

BufferPool: 40GB
Cache/Buffer: 8GB
Performance_schema: 2GB
tmp_table_size: 32M
Open tables cache for 50 tables: 5GB
Connections, thread_cache and others: 10GB

That adds up to roughly 65GB; let’s round it to 70GB out of 120GB, and it is still only an approximation. Something is wrong, right? My DBA mind started to wonder where the rest had gone.
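
For reference, the per-connection part of such an estimate can be roughly derived from the session buffers (a back-of-the-envelope sketch only; actual usage depends on the queries each connection runs):

SELECT ( @@read_buffer_size
       + @@read_rnd_buffer_size
       + @@sort_buffer_size
       + @@join_buffer_size
       + @@binlog_cache_size
       + @@thread_stack ) / 1024 / 1024 AS per_connection_mb,
       @@max_connections AS max_connections;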

Till now,
  1. MySQL is the culprit consuming all of the memory.
  2. Clearing the OS cache never helped. That’s fine.
  3. The buffer pool is also in a healthy state.
  4. The other memory-consuming parameters look good.
It’s time to dive into MySQL.

Let’s see what kind of queries are running in MySQL.

show global status like 'Com_%';
+---------------------------+-----------+
| Variable_name | Value |
+---------------------------+-----------+
| Com_admin_commands | 531242406 |
| Com_stmt_execute | 324240859 |
| Com_stmt_prepare | 308163476 |
| Com_select | 689078298 |
| Com_set_option | 108828057 |
| Com_begin | 4457256 |
| Com_change_db | 600 |
| Com_commit | 8170518 |
| Com_delete | 1164939 |
| Com_flush | 80 |
| Com_insert | 73050955 |
| Com_insert_select | 571272 |
| Com_kill | 660 |
| Com_rollback | 136654 |
| Com_show_binlogs | 2604 |
| Com_show_slave_status | 31245 |
| Com_show_status | 162247 |
| Com_show_tables | 1105 |
| Com_show_variables | 10428 |
| Com_update | 74384469 |
+---------------------------+-----------+

The Select, Insert and Update counters look fine, but a huge number of prepared statements were running in MySQL.

One more Tip: Valgrind

Valgrind is a powerful open source tool to profile any process’s memory consumption by threads and child processes.

Install Valgrind:

# You need C compilers, so install gcc
wget ftp://sourceware.org/pub/valgrind/valgrind-3.13.0.tar.bz2
tar -xf valgrind-3.13.0.tar.bz2
cd valgrind-3.13.0
./configure
make
make install

Note: This is for troubleshooting purposes only; you should stop MySQL and run it under Valgrind.

  • Create a log file to capture the output:

touch /tmp/massif.out
chown mysql:mysql /tmp/massif.out
chmod 777 /tmp/massif.out

  • Run mysqld with Valgrind:

/usr/local/bin/valgrind --tool=massif --massif-out-file=/tmp/massif.out /usr/sbin/mysqld --defaults-file=/etc/my.cnf

  • Wait for 30 minutes (or until MySQL takes up the whole memory), then kill Valgrind and start MySQL normally.

Analyze the log:

/usr/local/bin/ms_print /tmp/massif.out

We’ll explain MySQL memory debugging using Valgrind in another blog.

Memory Leak:

We had verified all the MySQL parameters and OS-level factors for the memory consumption, but no luck. So I started to think and search about MySQL memory leaks, and then I found this awesome blog by Todd.

Yes, the only parameter I hadn’t checked was max_prepared_stmt_count.

What is this?

From MySQL’s Doc,

This variable limits the total number of prepared statements in the server. It can be used in environments where there is the potential for denial-of-service attacks based on running the server out of memory by preparing huge numbers of statements.
  • Whenever we prepare a statement, we should close it at the end; otherwise the memory allocated to it is never released (a minimal SQL sketch of this cycle follows this list).
  • Executing a single query this way involves three round trips: prepare, execute and close.
  • There is no visibility into how much memory is consumed by a prepared statement.
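
For illustration, the full prepare/execute/close cycle in plain SQL looks like this (a minimal sketch; the table and column names are placeholders, and drivers such as PDO do the equivalent through the binary protocol with COM_STMT_CLOSE):

PREPARE stmt1 FROM 'SELECT * FROM some_table WHERE id = ?';  -- some_table is a placeholder
SET @id = 1;
EXECUTE stmt1 USING @id;
DEALLOCATE PREPARE stmt1;  -- the "close" step; skipping it leaves the statement allocated on the server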
Is this the real root cause?

Run this query to check how many prepared statements are running in mysql server.

mysql> show global status like 'com_stmt%';

+-------------------------+-----------+
| Variable_name | Value |
+-------------------------+-----------+
| Com_stmt_close | 0 |
| Com_stmt_execute | 210741581 |
| Com_stmt_fetch | 0 |
| Com_stmt_prepare | 199559955 |
| Com_stmt_reprepare | 1045729 |
| Com_stmt_reset | 0 |
| Com_stmt_send_long_data | 0 |
+-------------------------+-----------+

You can see there are 1045729 prepared statements running, and the Com_stmt_close counter shows that none of the statements have been closed.

This query returns the maximum allowed number of prepared statements.

mysql> show variables like 'max_prepared_stmt_count';
+-------------------------+---------+
| Variable_name | Value |
+-------------------------+---------+
| max_prepared_stmt_count | 1048576 |
+-------------------------+---------+

Oh, it is at the maximum value for this parameter. So we immediately reduced it to 2000.

mysql> set global max_prepared_stmt_count=2000;

-- Add this to my.cnf
vi /etc/my.cnf

[mysqld]
max_prepared_stmt_count = 2000

Now MySQL is running fine and the memory leak is fixed; since then the memory utilization has stayed normal. The Laravel framework uses prepared statements for almost everything; you can find many Laravel + prepared statement questions on StackOverflow.

SYS Schema in 5.7:

In MySQL 5.7 the sys schema provides all the information about MySQL’s memory utilization.

# Credits: lefred.be
SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area,
sys.format_bytes(SUM(current_alloc)) AS current_alloc
FROM sys.x$memory_global_by_current_bytes
GROUP BY SUBSTRING_INDEX(event_name,'/',2)
ORDER BY SUM(current_alloc) DESC;

Conclusion:

The most important lesson I learned as a DBA is this: before changing any parameter value, check the consequences of modifying it and make sure it will not affect production. Now the MySQL side was fine, but the application was throwing the below error.

Can't create more than max_prepared_stmt_count statements (current value: 20000)

To continue this series, the next blog post explains how we fixed the above error using multiplexing and how it helped to dramatically reduce MySQL’s memory utilization.

How max_prepared_stmt_count bring down the production MySQL system was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.


On MySQL XA Transactions

Planet MySQL - Sat, 10/06/2018 - 08:59
One of the features I missed in my blog post on problematic MySQL features back in July is XA transactions. Probably I was in too much of a hurry, as this feature is known to be somewhat buggy, limited and not widely used outside of Java applications. My first related feature request, Bug #40445 - "Provide C functions library implementing X/OPEN XA interface for Bea Tuxedo", was created almost 10 years ago, based on an issue from one of the MySQL/Sun customers of that time. I remember some internal debates on how much time and effort the implementation might require, but the decision was never made, and one still cannot directly integrate MySQL with the Tuxedo transaction manager (which is where the idea of XA transactions originally came from). It's even funnier to see that feature request still just "Verified" considering that BEA Tuxedo has been Oracle's software since... 2008.

XA transactions support is a useful MySQL feature, but I wonder if one day it may just become abandoned like that West Pier in Brighton, or overwhelmed with many small bugs in the same way as these stairs to the beach in Hove...
But maybe XA transactions are not widely used and nobody cares much about them?

Let me try to do a quick review of related active bug reports and feature requests before making any conclusions:
  • Bug #91702 - "Feature Request: JOIN support for XA START command". This feature request was added less than 3 months ago and is still "Open". It means there are users interested in this feature, but Oracle engineers do not care much even to verify related requests, even less - to give them some priority. 
    See also Bug #78498 - "XA issue or limitation with 5.6.19 engine", reported 3 years ago, that is essentially about the same limitation. As bug reporter explained:
    "... it prevents us to use MySQL with Weblogic on 2 phase commit scenarii..."
  • Yet another example of a request ignored for a long time is Bug #90659 - "implicit commit and unlocking with xa start", which is about an inconsistency in the current implementation. They care even less (as we already know) about XA support outside of Java, as one can conclude from the fact that the Connector/Net related request, Bug #70587 - "Dot Net Distributed transaction not supported in MySql Server 5.6", has not gotten any attention since July 2015...
  • Bug #91646 - "xa command still operation when super_read_only is true". This bug was reported in July by Zhenghu Wen. It seems nobody cares much about XA transactions integration when new features are added to MySQL server.
  • Bug #89860 - "XA may lost prepared transaction and cause different between master and slave". This bug reported by Michael Yang (See also Bug #88534) sounds really serious and was previously reported by Andrei Elkin (who works for MariaDB now) as Bug #76233 - "XA prepare is logged ahead of engine prepare". See also Bug #87560 - "XA PREPARE log order error in replication and binlog recovery" by Wei Zhao, who also contributed a patch. See also Bug #83983 - "innodb fail to recover the prepared xa transaction" (the bug reported by Dennis Gao is still "Open", while it's clearly related to or is a duplicate of "Verified" bugs mentioned above).
    So many related/duplicate problem reports, but no fix so far!
  • Bug #88748 - "InnoDB: Failing assertion: trx->conc_state == 1". This assertion failure was reported by Roel Van de Paar back in December, 2017. See also his Bug #84754 - "oid String::chop(): Assertion `strlen(m_ptr) == m_length' failed."
    I noted that Oracle recently invented new "low" severity levels, and this bug is S6 (Debug Builds). I do not really agree that assertions in debug builds are of so low severity - they are in the code for a reason, to prevent crashes in non-debug builds and all kinds of inconsistencies.
  • Bug #87526 - "The output of 'XA recover convert xid' is not useful". This bug reported by Sveta Smirnova caused a lot of troubles to poor users with prepared transactions hanging around for weeks after crash, as it prevented any easy way to get rid of them (and related locks) in some cases. The bug is still "Verified" in MySQL and "On hold" in Percona Server, while MariaDB fixed it in 10.3, see MDEV-14593.
  • Bug #87130 - "XA COMMIT not taken as transaction boundary". Yet another bug report with a patch from Wei Zhao.
  • Bug #75205 - "Master should write a LOST_EVENTS entry on xa commit after recovery." Daniël van Eeden reported this at early 5.7 pre-GA stage, and manual explains now that:
    "In MySQL 5.7.7 and later, there is a change in behavior and an XA transaction is written to the binary log in two parts. When XA PREPARE is issued, the first part of the transaction up to XA PREPARE is written using an initial GTID. A XA_prepare_log_event is used to identify such transactions in the binary log. When XA COMMIT or XA ROLLBACK is issued, a second part of the transaction containing only the XA COMMIT or XA ROLLBACK statement is written using a second GTID. Note that the initial part of the transaction, identified by XA_prepare_log_event, is not necessarily followed by its XA COMMIT or XA ROLLBACK, which can cause interleaved binary logging of any two XA transactions. The two parts of the XA transaction can even appear in different binary log files. This means that an XA transaction in PREPARED state is now persistent until an explicit XA COMMIT or XA ROLLBACK statement is issued, ensuring that XA transactions are compatible with replication."but the bug report is still "Verified".
    By the way, the need to deal with such prepared transactions recovered from the binary log caused problems like those listed above (with XA RECOVER CONVERT and order of preparing in the binary log vs engines that support XA...
  • Bug #71351 - "select hit query cache after xa commit, no result return". This bug probably affects only MySQL 5.5, so no wonder it's ignored now. Nobody tried to fix it while MySQL 5.5 was still supported, though.
There are some more bugs originally filed in other categories, but still related to XA:
  • Bug #72036 - "XA isSameRM() shouldn't take database into account". This Connecotr/J bug was reported in 2014 by Jess Balint.
  • Bug #78050 - "Crash on when XA functions activated by a storage engine". It happens when binary log not enabled. This bug was reported by Zhenye Xie, who also contributed a patch later. Still this crashing bug remains "Verified".
  • Bug #87385 - "Partial external XA transactions are not rolled back correctly". Yet another bug report with a patch from Wei Zhao. See also his Bug #87389 - "Replication position not persisted correctly for XA transactions".
  • Bug #91633 - "Replication failure (errno 1399) on update in XA tx after deadlock". This bug reported by Lukas Sydorowski got recent comment from other community member yesterday. So, the feature is used these days, still.
Now time for conclusions:
  1. Take extra care while using XA transactions in replication environments or with point in time recovery - you may easily end up with slaves out of sync with master and data lost.
  2. Feature requests related to XA transactions are mostly ignored, sometimes for a decade... 
  3. Patches contributed do not seem to speed up XA bug fixing.
  4. I'd say that Oracle does not care much about XA Transactions since MySQL 5.7 GA release in 2015.
  5. MySQL Community still uses XA transactions with MySQL (and they will be used even more as corporate users migrate from Oracle RDBMS), finds bugs and even tries to fix them. But they will probably have to use forks rather than MySQL itself if the current attitude towards processing and fixing XA bugs remains.

Simplified Angular unit testing

Echo JS - Sat, 10/06/2018 - 05:37

React global state by Context API

Echo JS - Sat, 10/06/2018 - 05:37

Material Design Animation Guides

CSS-Tricks - Fri, 10/05/2018 - 15:53

I've seen two guides posted to Medium about animation in the last month that have seriously blown up!

There is a lot to learn in each one! The demonstration animations they use are wonderfully well done and each guide demonstrates an interesting and effective animation technique, often paired next to a less successful technique to drive the point home. They are both heavily focused on Material Design though, which is fine, but I think Val Head said it best:

Google wrote material design for branding Google things. When you use material design on things that aren’t Google, you’re kind of using Google’s branding on a thing that is not Google, and that’s weird. Material design is Google’s opinion on motion. It’s Google’s branding opinion on motion. It’s not a de facto standard of how motion should happen.

The post Material Design Animation Guides appeared first on CSS-Tricks.

